David Schwartz

how to run git commands from Delphi app

Recommended Posts

When you install Git on a Windows machine, it installs MinGW (or MinGW-w64) as well as a bash shell and a GUI app.

 

I'm building this thing that I'm calling a "workbench". It's built around our service tickets. So you enter a ticket# and some related info, and it goes to a folder and displays the files in it. Then it helps you manage the files and do your work. When you're finished you move on.

 

I'd like to execute some git commands from a Delphi app. Like, when you enter the ticket#, it runs something like "git branch <ticket#>". At some point, you can tell it to add some modified files to the branch. And when you're done it will switch back to master with "git checkout master" or whatever.

 

I know how to spawn command line EXEs from Delphi. In this case, I'm curious about a couple of things. First, is there a DLL or API or something that can be used? Second, I know git.exe is accessible from a regular command prompt; is this sufficient, or do I need the support found in the bash shell's environment? 


Best practice has always been to just build the command line and execute that. No hassle with DLL changes, no issues with 32-bit vs. 64-bit, and last but certainly not least: easy testing by just copying the generated command line into a terminal and checking the output.
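For illustration only, a minimal Delphi sketch of that approach might look like the function below. It assumes git.exe is on the PATH, that git's output is UTF-8, and it ignores interactive prompts (e.g. for credentials); treat it as a starting point, not a finished implementation.

uses
  Winapi.Windows, System.SysUtils;

// Run "git <Args>" in WorkDir, wait for it to finish and return everything
// it printed to stdout/stderr.
function RunGit(const Args, WorkDir: string): string;
var
  SA: TSecurityAttributes;
  ReadPipe, WritePipe: THandle;
  SI: TStartupInfo;
  PI: TProcessInformation;
  CmdLine: string;
  Dir: PChar;
  Buffer: array[0..4095] of AnsiChar;
  BytesRead: DWORD;
  Chunk: UTF8String;
begin
  Result := '';
  // Inheritable pipe: git writes into it, we read from it.
  SA.nLength := SizeOf(SA);
  SA.bInheritHandle := True;
  SA.lpSecurityDescriptor := nil;
  Win32Check(CreatePipe(ReadPipe, WritePipe, @SA, 0));
  try
    // Only the write end should be inherited by the child process.
    SetHandleInformation(ReadPipe, HANDLE_FLAG_INHERIT, 0);
    FillChar(SI, SizeOf(SI), 0);
    SI.cb := SizeOf(SI);
    SI.dwFlags := STARTF_USESTDHANDLES or STARTF_USESHOWWINDOW;
    SI.wShowWindow := SW_HIDE;
    SI.hStdOutput := WritePipe;
    SI.hStdError := WritePipe;
    // Build the command line exactly as you would type it in a terminal,
    // so it can be tested by pasting it into cmd.
    CmdLine := 'git.exe ' + Args;
    UniqueString(CmdLine); // CreateProcessW may write into this buffer
    if WorkDir <> '' then Dir := PChar(WorkDir) else Dir := nil;
    Win32Check(CreateProcess(nil, PChar(CmdLine), nil, nil, True,
      CREATE_NO_WINDOW, nil, Dir, SI, PI));
    CloseHandle(WritePipe); // close our copy so ReadFile sees EOF when git exits
    WritePipe := 0;
    repeat
      if not ReadFile(ReadPipe, Buffer, SizeOf(Buffer), BytesRead, nil)
        or (BytesRead = 0) then
        Break;
      SetString(Chunk, PAnsiChar(@Buffer[0]), BytesRead);
      Result := Result + string(Chunk); // UTF-8 -> UnicodeString
    until False;
    WaitForSingleObject(PI.hProcess, INFINITE);
    CloseHandle(PI.hThread);
    CloseHandle(PI.hProcess);
  finally
    CloseHandle(ReadPipe);
    if WritePipe <> 0 then CloseHandle(WritePipe);
  end;
end;

A call like RunGit('status --porcelain', 'C:\Repos\MyProject') would then return the status text; the path and arguments here are placeholders only.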



So if I run a git command the same way I run anything else on the "command line" (spawning a child process), it'll work fine? I'm mainly concerned about dependencies that may exist on features in the bash shell that don't exist in cmd. (Or do I need to run bash.exe and then provide git.exe as an argument to that, along with its parameters?)

9 hours ago, Fr0sT.Brutal said:

There is libgit2 and https://github.com/libgit2/GitForDelphi, but calling the exe will be easier indeed

From 2012 ... 😮   (I'm not a fan of DLLs, TBH, and I know that the base code has evolved quite a bit since 2012. If I need to have an external dependency, I'd rather use the EXE than a DLL. But this is good to know.)

Edited by David Schwartz

1 hour ago, David Schwartz said:

Or do I need to run bash.exe and then provide git.exe as an argument to that, along with its parameters?

No. Git Bash is just an alternative shell to cmd with Linux-like commands. You can run git from your old and rusty cmd directly (ShellExecute / CreateProcess from Delphi) and it will work just as well as from Git Bash.
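If you don't need the output, a plain ShellExecute is already enough. A tiny sketch (the assumption that git.exe is on the PATH is mine; pass the full path to the EXE otherwise):

uses
  Winapi.Windows, Winapi.ShellAPI, System.SysUtils;

// Fire-and-forget: start git in RepoDir, do not wait and do not capture output.
procedure LaunchGit(const Args, RepoDir: string);
begin
  if ShellExecute(0, 'open', 'git.exe', PChar(Args), PChar(RepoDir), SW_HIDE) <= 32 then
    raise Exception.Create('Could not start git.exe');
end;

For anything where the result matters, capturing the output (as in the CreateProcess sketch above) or at least the exit code is the safer route.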

I have a .cmd file that checks all of my local git repositories and, if there is an update upstream, pull-rebases all local branches, taking care of automatic stashing if necessary - it only stops for merge conflicts.

 

Just double-click it and everything is as fresh as it is on its official repository.


14 hours ago, David Schwartz said:

So if I run a git command the same way I run anything else on the "command line" (spawning a child process), it'll work fine? I'm mainly concerned about dependencies that may exist on features in the bash shell that don't exist in cmd. (Or do I need to run bash.exe and then provide git.exe as an argument to that, along with its parameters?)

I use Git from cmd and never call Git Bash, so no problem.

 


On 8/26/2020 at 12:18 PM, aehimself said:

 

I have a .cmd file that checks all of my local git repositories and, if there is an update upstream, pull-rebases all local branches, taking care of automatic stashing if necessary - it only stops for merge conflicts.

 

Just double-click it and everything is as fresh as it is on its official repository.

Would you mind sharing that? I'm curious what all is involved. 🙂 

 

I have a process that a colleague documented and uses, and I'd like to implement it to make it more of a one-click option.


It's only lightly commented, but kind of easy to understand. Feel free to modify it to your needs:

@ECHO OFF
REM Root folder that contains the local Git repositories (one per subfolder)
SET GITDIR=C:\LocalWork\_DelphiComponents
SET OLDDIR=%CD%
REM Call :CHECK for every subfolder of the root
FOR /F "tokens=1 delims=" %%a IN ('DIR /B /A:D %GITDIR%') DO CALL :CHECK "%%a"
GOTO :END

:CHECK
REM Skip folders we cannot enter or where "git fetch" fails (e.g. not a repository)
CD /D %GITDIR%\%~1 > nul 2>&1
IF ERRORLEVEL 1 GOTO :eof
"C:\Program Files\Git\cmd\git.exe" fetch > nul 2>&1
IF ERRORLEVEL 1 GOTO :eof
REM Print the folder name without a newline so the result appears on the same line
ECHO/|SET /P=%~1...
"C:\Program Files\Git\cmd\git.exe" status > "%TEMP%\gitstatus.tmp" 2>&1
IF ERRORLEVEL 1 (ECHO  querying status failed! & GOTO :DELEOF)
REM Nothing to do if the local branch is not behind its upstream
TYPE "%TEMP%\gitstatus.tmp" | FIND /I "is behind" > nul 2>&1
IF ERRORLEVEL 1 (ECHO  up to date. & GOTO :DELEOF)
REM If the working tree is not clean, stash local modifications before rebasing
TYPE "%TEMP%\gitstatus.tmp" | FIND /I "nothing to commit, working tree clean" > nul 2>&1
IF ERRORLEVEL 1 (SET STASHED=1) ELSE (SET STASHED=0)
IF %STASHED%==0 GOTO :PULL
"C:\Program Files\Git\cmd\git.exe" stash > nul 2>&1
IF ERRORLEVEL 1 (ECHO  could not stash changes! & GOTO :DELEOF)
:PULL
"C:\Program Files\Git\cmd\git.exe" pull --rebase > nul 2>&1
IF ERRORLEVEL 1 (ECHO  could not download updates!) ELSE (ECHO  update successful.)
REM Restore the stashed modifications, if there were any
IF %STASHED%==0 GOTO :DELEOF
"C:\Program Files\Git\cmd\git.exe" stash pop > nul 2>&1
IF ERRORLEVEL 1 ECHO  could not restore changes!
:DELEOF
IF EXIST "%TEMP%\gitstatus.tmp" DEL "%TEMP%\gitstatus.tmp"
GOTO :eof

:END
CD /D %OLDDIR%
PAUSE

For me, it just produced this output:

[screenshot of the script's output]

 

All you have to do is change the path to the folder where your Git repositories are. The script will check all folders within the root, and if one is a Git repository it will do its work.

 

If you need help, feel free to ask.



That's cool. The challenge we have is we've got all of our production folders mapped to a couple of local drives (I: and V:) and each of those folders acts as the git base (whatever it's called).

 

Each of the folders has a Delphi project in it that builds an EXE.

 

Some of them share files because they're related, but they can't all be in the same folder for various reasons.

 

So 'git pull' and 'git push' and so on act on the entire virtual drive. It can get rather convoluted when someone updates a file elsewhere that has nothing to do with what you're working on, and it creates a conflict that stops you in your tracks; you have to go off and resolve that conflict before you can continue with what you're REALLY trying to do (which isn't related at all).

 

I'm not very well-versed in git, so I hope this makes sense... I'm trying to build a 'workbench' app that manages a bunch of resources, including the ability to set up a branch, do necessary pulls, sidestep unrelated conflicts, let you do your work, then wrap everything up with adds, commits, and pushes, then unwind the unrelated conflicts that may have come up earlier. This is feasible because all of our work is driven by specific work tickets that only affect code in a specific folder.
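Roughly, the per-ticket git sequence I have in mind would be something like the sketch below. This is only an illustration: it reuses a RunGit-style helper like the one sketched earlier in the thread, the ticket/<nr> branch naming and the add/commit arguments are made up, and the conflict sidestepping isn't handled at all.

uses
  System.SysUtils; // for Format

// Hypothetical per-ticket flow; RepoDir is the folder the ticket's work lives in.
procedure StartTicket(const TicketNo, RepoDir: string);
begin
  RunGit('fetch origin', RepoDir);
  RunGit('checkout master', RepoDir);
  RunGit('pull --rebase origin master', RepoDir);
  RunGit('checkout -b ticket/' + TicketNo, RepoDir);
end;

procedure FinishTicket(const TicketNo, CommitMsg, RepoDir: string);
begin
  RunGit('add .', RepoDir); // or add only the files belonging to the ticket
  RunGit(Format('commit -m "%s"', [CommitMsg]), RepoDir);
  RunGit('push -u origin ticket/' + TicketNo, RepoDir);
  RunGit('checkout master', RepoDir);
end;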

 

We can have multiple tickets that affect things in the same folder. They may deal with the same files, but very rarely are there real conflicts. (A common conflict scenario is two different tickets that require us to add a line to the end of the same file. They're not related, but all git knows is that there's a conflict on the last line. We just need to accept both; the position of the lines is irrelevant.)

 

Everybody seems to use slightly different ways of dealing with stuff, and I'd like to implement a consistent process in this tool that just lets us focus on the task at hand without having to get distracted by unrelated stuff that git insists we handle RIGHT NOW.

 


 

15 minutes ago, David Schwartz said:

The challenge we have is we've got all of our production folders mapped to a couple of local drives (I: and V:) and each of those folders acts as the git base (whatever it's called).

Well, you can create a folder somewhere and just add directory links with mklink (e.g.: MKLINK /D GitRepo1 I:\SomeSecretApplication\Git) to all of the git repositories you have. That will fool the script and it will work on all of them. Or, modify the script and call :CHECK manually on each, instead of the for loop:

CALL :CHECK D:\Doom
CALL :CHECK Q:\Quake
CALL :CHECK B:\BloodBorne

 

15 minutes ago, David Schwartz said:

So 'git pull' and 'git push' and so on act on the entire virtual drive. It can get rather convoluted when someone updates a file elsewhere that has nothing to do with what you're working on, and it creates a conflict that stops you in your tracks; you have to go off and resolve that conflict before you can continue with what you're REALLY trying to do (which isn't related at all).

I understand nothing of this, especially the whole virtual drive part. Maybe it's better this way. At work we have one huge repository with all the sources in it. Client, server, web, framework. Each has its own separate folder.

At home I have a different repository for my frameworks; they are located in my _DelphiComponents folder, far away from the applications that use them. But with this script everything can be refreshed at once, so no problem.

Edited by aehimself


Not so different ... we don't use C:\ as the root for git as you do. Instead, we map V: and I: to folders below C:\ and treat them as virtual drives.

 

V: --> C:\development  and I: --> C:\imports

 

All of the libraries and server apps are in V: and I: has 800+ folders that contain import handlers (EXEs) for different client data files.

 

The stuff in V: hasn't changed in years. Most of our day-to-day work is in the I: drive area. But there's stuff in there that goes back to 2010 or so, and maybe half of it is obsolete. (Nobody seems to care, because in theory the clients could come back and pick up sending us data in the same formats they were using years ago.)

 

The problem is, we have clients that manage other clients of their own. Think of a property management company that would manage a bunch of apartment complexes. The property mgt company is OUR client; they send stuff to us to process for THEIR clients. Each of them is in a separate folder.

 

If we call the mgt company 'ABC' then we'd have ABC001, ABC002, and so forth, for their clients. All separate folders. But some of these guys will all use the same data formats, so we only have ABC000 and in that folder we have a single import EXE that imports data for ALL of THEIR clients, because it's all in the same format. (They'd typically use their system to generate the export data files, and they're just big flat data files saved as CSV data.)

 

Anyway, here's the rub ... ABC says, "We need the logo changed on cust123" and we have, say, a week to get that done. Then the next day, they say, "We need to add a new customer as cust456" and that takes a week. Then they say, "We need to change the message on a statement for cust890".

 

All three of these tickets overlap in time. Depending on priorities and who is available to do the work, they can get done in any order and work can occur a little here and a little there. 

 

It's not too bad when these customers have different data import needs, so they're in separate folders. But when their code is all in the same folder, and we have multiple people working on them, it's problematic.

 

Also, when someone works on one of the clients in another folder and checks it into git, and then I do a "git pull", it updates that other folder, which is usually completely unrelated to what I'm working on. I've done a git pull and it tells me 25 files were added, modified, and deleted, yet none of them are in the folder where I'm doing my work. It's just noise to me and doesn't affect my work at all.

 

When you're working all alone, this isn't an issue. You work on one thing at a time and it works out perfectly well.

 

With a team of people working on different parts of an elephant, everybody is supposed to have the entire elephant on their own drive in its current state -- so everybody is working with the exact same elephant. Otherwise, someone's elephant might be missing a trunk, another might only have half a tail, and so forth. We want everybody to have the same "model" on their disk so we can assign anything to anybody at any time. That's the theory, anyway.


Having a completely separate import application for each "client" seems to be a huge waste of resources. One half of each application is the same (the output part), so I'd have gone with a totally different design approach: have ONE import application with multiple import formats, all descended from the same class... so your application uses the TMyImport class, and TMyImport_ABC0001 is a descendant of it. The single application then instantiates the correct class based on the channel the data is coming from. It would make your lives a lot, lot easier. It might sound like a lot of work, but the change can be made slowly, one format at a time. Just make sure you prepare TMyImport and the output part well enough 🙂
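A minimal sketch of that idea, just to make it concrete: the TMyImport and TMyImport_ABC0001 names come from the paragraph above, but the unit name, the registry and the method signature are my assumptions.

unit ImportBase;

interface

uses
  System.SysUtils, System.Generics.Collections;

type
  // Base class: the shared output/reporting logic would live here.
  TMyImport = class
  public
    procedure Import(const FileName: string); virtual; abstract;
  end;

  TMyImportClass = class of TMyImport;

  // One descendant per data format; only the parsing differs.
  TMyImport_ABC0001 = class(TMyImport)
  public
    procedure Import(const FileName: string); override;
  end;

// The single EXE picks the right class from the client code (e.g. 'ABC0001').
function CreateImporter(const ClientCode: string): TMyImport;

implementation

var
  Importers: TDictionary<string, TMyImportClass>;

procedure TMyImport_ABC0001.Import(const FileName: string);
begin
  // Parse ABC's CSV layout here, then hand the records to the shared output code.
end;

function CreateImporter(const ClientCode: string): TMyImport;
var
  ImportClass: TMyImportClass;
begin
  if not Importers.TryGetValue(ClientCode, ImportClass) then
    raise Exception.CreateFmt('No importer registered for %s', [ClientCode]);
  Result := ImportClass.Create;
end;

initialization
  Importers := TDictionary<string, TMyImportClass>.Create;
  Importers.Add('ABC0001', TMyImport_ABC0001);

finalization
  Importers.Free;

end.

A new format then only needs a new descendant and one Importers.Add line, while the output half stays untouched.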

 

Also, why map C:\development to V:? What difference does it make which drive letter I'm changing my source code on? Please don't tell me you have hardcoded paths... 🙂

 

Based on the example you gave, my script should work just fine. Set C:\ as the root folder and it will try to refresh C:\development (V:) and C:\imports (I:) too.

Edited by aehimself

14 hours ago, aehimself said:

Having a completely separate import application for each "client" seems to be a huge waste of resources.

This system was designed in the 2004-2008 timeframe by people who are long gone. It's a legacy project that nobody dares fiddle with. It has been running day-in and day-out for 15 years and the overall system is quite stable and robust. Resources are not anything anybody cares much about, and cutting them by 25%-50% yields no useful ROI. Reliability is paramount.

 

I'd love to redesign and rebuild this from the ground-up, but management here is of the same opinion I've encountered everywhere else I've worked since 2007 -- if they're going to rebuild it, it won't be in Delphi. The ones who DID undertake rebuilds under .NET ran WAY over budget and schedule, and the resulting systems were big, fat, slow, and unreliable. Meanwhile, the Delphi-based software just kept chugging along, solid as a rock and reliable as ever.

 

Most places do not want to replace legacy Delphi apps because they're so solid and reliable. But they don't give that much consideration when it comes to new product development.

 

The only work for old Delphi hacks seems to be working on maintaining old legacy systems. I'd love to find a new major project being developed in Delphi, but I haven't heard of any (in the USA anyway) in years.

Edited by David Schwartz

