
I'm just wondering how others have created system-wide locks to manage sharing a resource, such as a single log file, between multiple processes (I know, not ideal).

 

Two examples are as follows:

{$IFDEF useSemaphore}
procedure systemWideLock( name: string; proc: TProc; timeout: integer = 0 );
var
  hSem: THandle;
begin
  // string is already WideChar-based in Unicode Delphi, so PChar suffices
  hSem := CreateSemaphore(nil, 1, 1, PChar(name));
  if hSem = 0 then
    RaiseLastOSError;
  try
    if WaitForSingleObject(hSem, INFINITE) = WAIT_OBJECT_0 then
    try
      proc();
    finally
      // release in a finally block so an exception in proc()
      // cannot leave the semaphore permanently taken
      ReleaseSemaphore(hSem, 1, nil);
    end;
  finally
    CloseHandle(hSem); // the handle itself must also be closed
  end;
end;
{$ENDIF}

{$IFDEF useFile}
procedure systemWideLock( name: string; proc: TProc; timeout: integer = 0 );
var
  fs: TFileStream;
begin
  fs := nil;
  repeat
    try
      // fmCreate with exclusive sharing fails while another
      // process still holds the lock file open
      fs := TFileStream.Create('C:\temp\File.lock', fmCreate or fmShareExclusive);
    except
      on EFCreateError do
        Sleep(Random(3)); // brief back-off before retrying; can be removed
    end;
  until fs <> nil;

  try
    proc();
  finally
    FreeAndNil(fs);
  end;
end;
{$ENDIF}

 

The challenge with the semaphore, while it is a much faster mechanism, is that if code execution gets stuck between lock and unlock (say a dialog appears in a silent process, or the process enters an endless loop), end-tasking the application will not release the semaphore, and a Windows reboot is needed to clear it.

 

The file approach works better in that a file lock is released when the process terminates (the timing apparently varies, but it behaved correctly in my testing). You can argue over whether the sleep should be removed or its duration changed, but either way this approach is orders of magnitude slower.

 

Are there any other lock types that could be used that release upon process termination?
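
For what it's worth, a named mutex is one kernel object that does release on owner death: Windows marks it abandoned, and the next waiter gets WAIT_ABANDONED, which still grants ownership. A minimal sketch, assuming the same signature as the procedures above:

```pascal
{$IFDEF useMutex}
// A named mutex IS released when its owner terminates: Windows marks it
// "abandoned" and the next WaitForSingleObject returns WAIT_ABANDONED,
// which still grants ownership to the caller.
procedure systemWideLock( name: string; proc: TProc; timeout: integer = 0 );
var
  hMutex: THandle;
  waitRes: DWORD;
begin
  hMutex := CreateMutex(nil, False, PChar(name));
  if hMutex = 0 then
    RaiseLastOSError;
  try
    waitRes := WaitForSingleObject(hMutex, INFINITE);
    if (waitRes = WAIT_OBJECT_0) or (waitRes = WAIT_ABANDONED) then
    try
      proc();
    finally
      ReleaseMutex(hMutex);
    end;
  finally
    CloseHandle(hMutex);
  end;
end;
{$ENDIF}
```

A WAIT_ABANDONED result also tells you the previous owner died mid-update, so the protected resource may be in an inconsistent state.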

 

Edited by hsvandrew


Hi,

 

What should or can happen with file locking if the owner application crashes or the system loses power (e.g. an unexpected restart)?

 

I prefer memory locking or shared-memory locking, but with an extra step: a background thread acting as a watchdog that enforces a timeout or constraints to keep the lock active, and controls the process — exiting/finishing/wrapping up/reporting/terminating/(asking the user for interaction)...

As for file locking, if you prefer it, then make sure to put some data in the file, like the owner PID (process ID) or the time the system started, or both... You need to think about the different scenarios before committing to potentially permanent locking mechanisms like files and the registry; to me they are somewhat risky and dangerous.
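
The "owner PID in the lock file" idea could be sketched like this (the helper name is made up; PID-based stale detection is a heuristic, since PIDs can be reused):

```pascal
// uses Winapi.Windows, System.SysUtils, System.IOUtils
// Sketch: the owner writes GetCurrentProcessId to the lock file; other
// processes can then detect a lock left behind by a crashed owner.
function LockIsStale(const lockFile: string): Boolean;
var
  pid: DWORD;
  hProc: THandle;
begin
  pid := StrToIntDef(TFile.ReadAllText(lockFile).Trim, 0);
  if pid = 0 then
    Exit(True); // unreadable or empty lock file: treat as stale
  // if the PID no longer maps to a live process, the lock is stale
  hProc := OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, False, pid);
  Result := hProc = 0;
  if hProc <> 0 then
    CloseHandle(hProc);
end;
```

Storing the system boot time alongside the PID, as suggested above, guards against the PID-reuse case.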

44 minutes ago, FPiette said:

I use LockFile or LockFileEx from Windows API.

This is better than depending on creating the file, BUT...

 

What/where is the locked file, and who created it?

Here are a few scenarios that could be potential problems:

1) If the application/installer's first run is done by an administrator, will the lock file be accessible or lockable by other users?

2) What happens if a user over RDP runs the application that created or locked that file?

 

Anyway, my dislike of file or registry locking comes from the number of extra scenarios/situations that need to be considered.


I'd use a log broker service, i.e. have log requests from the different apps go through the broker APIs.

3 hours ago, Kas Ob. said:

What/where is the locked file ?, who created it ?

Any thread/process may create the file, but must be prepared to handle the "file already exists" error if another thread/process created it a moment earlier. In that context, "file already exists" is not an error!

Then every thread/process locks the file BEFORE writing or reading anything and unlocks it right after.

Now the question is where to lock. File locking allows locking one or more bytes, and even locking a nonexistent part of the file. There are several possibilities. The most important thing is that every thread/process must follow the same rule.

1) Lock the whole file, extending well past the end of file.

2) Lock a single byte or a few bytes, usually the file header, if any.

3) Lock a large block starting at the byte at end of file.

4) Lock a single byte far away from the end of file, at a fixed place that will never be reached.

5) Use another file, maybe empty, to place the locks.

 

Solution 1 is easy, and the file is completely locked for everything except the one that placed the lock. This includes any editor and even the "type" or "copy" commands of the command interpreter. This may be desirable... or not.

Solution 2 is frequently used by database managers.

Solution 3 allows one thread/process to read the existing part of the file while another is appending to it.

Solution 4 is interesting in some cases because any reader/writer can use the file as usual, while everything aware of the locking will write to the file without overwriting anything. This may be desirable... or not.

Solution 5 is interesting because the data file itself is never locked; every thread/process aware of the locking scheme will safely read/write it.

 

There are probably other locking schemes. The developer has to think carefully about his intent and use case, and of course must fully understand how file locking works. File locking is the basis of multi-user applications and has existed almost since the beginning of computer software and databases. Many articles and books have been published on the subject.
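
Scheme 4 could be sketched like this with the plain LockFile API (the procedure name and offset are made up for illustration; hFile is assumed to be an open handle to the shared file):

```pascal
// Sketch of scheme 4: serialize on one byte at a fixed offset that the
// file will never reach; normal reads/writes elsewhere are unaffected.
procedure LockedAccess(hFile: THandle; proc: TProc);
const
  LOCK_OFFSET = $40000000; // fixed position, far beyond any realistic size
begin
  // LockFile is non-blocking: it fails immediately while another
  // process holds the range, so poll with a short sleep
  while not LockFile(hFile, LOCK_OFFSET, 0, 1, 0) do
    Sleep(1);
  try
    proc(); // read/write the file as usual
  finally
    UnlockFile(hFile, LOCK_OFFSET, 0, 1, 0);
  end;
end;
```

LockFileEx with LOCKFILE_EXCLUSIVE_LOCK (and without LOCKFILE_FAIL_IMMEDIATELY) would wait instead of polling, at the cost of filling in an OVERLAPPED structure for the offset.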


More than just not ideal: logging should have minimal impact, which means introducing lock contention is counterproductive. The locks will change the runtime behaviour of the processes, introducing coupling, uncertainty and variance. Performance issues will have admins turn off logging in production; then, when issues need to be investigated, turning it back on makes the processes behave differently, frustrating attempts to reproduce those issues.

 

Most go with one of:

1) One log file per process, read into a combined view as needed.

2) A logging process of some sort that other processes send log messages to.

1 hour ago, Brian Evans said:

More than just not ideal: logging should have minimal impact, which means introducing lock contention is counterproductive. The locks will change the runtime behaviour of the processes, introducing coupling, uncertainty and variance. Performance issues will have admins turn off logging in production; then, when issues need to be investigated, turning it back on makes the processes behave differently, frustrating attempts to reproduce those issues.

 

Most go with one of:

1) One log file per process, read into a combined view as needed.

2) A logging process of some sort that other processes send log messages to.

In some applications, I off-loaded the logging to another process which does the file I/O. The logging function writes to a pipe connected to the logging application. This is even faster than writing directly to a file, and no lock is required in the logging function of the main application: each thread opens its own pipe, and each process has its own set of pipes.
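
The client side of that pipe approach could look roughly like this (the pipe name is hypothetical; for brevity this opens a handle per message, whereas in practice each thread would keep its own pipe handle open as described above):

```pascal
// Sketch: send one log line to a logger process listening on a named
// pipe. The server end (CreateNamedPipe + ReadFile loop) lives in the
// separate logger application.
procedure SendLogLine(const msg: AnsiString);
var
  hPipe: THandle;
  written: DWORD;
begin
  hPipe := CreateFile('\\.\pipe\MyAppLogger', GENERIC_WRITE, 0, nil,
    OPEN_EXISTING, 0, 0);
  if hPipe = INVALID_HANDLE_VALUE then
    Exit; // logger not running; drop (or buffer) the message
  try
    if Length(msg) > 0 then
      WriteFile(hPipe, msg[1], Length(msg), written, nil);
  finally
    CloseHandle(hPipe);
  end;
end;
```

The kernel serializes the writes, so no application-level lock is needed on the client side.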

On 1/8/2024 at 4:04 AM, Lars Fosdal said:

I'd use a log broker service, i.e. have log requests from the different apps go through the broker APIs.

I used CodeSite Express (and later upgraded to full version) for years at a previous job. It worked well for our needs.

 

@hsvandrew  CodeSite Express can be installed via GetIt Package Manager. There are a couple of others that I came across that you may want to investigate:

 

loggerpro and SynLog

 

There is also SmartInspect. Although not free, there is a trial version available.

On 1/10/2024 at 11:16 PM, Brian Evans said:

More than just not ideal: logging should have minimal impact, which means introducing lock contention is counterproductive. The locks will change the runtime behaviour of the processes, introducing coupling, uncertainty and variance. Performance issues will have admins turn off logging in production; then, when issues need to be investigated, turning it back on makes the processes behave differently, frustrating attempts to reproduce those issues.

 

Most go with one of:

1) One log file per process, read into a combined view as needed.

2) A logging process of some sort that other processes send log messages to.

I agree with you. I just have to create a solution within a team where someone higher up than me doesn't get this 🙂

