Yaron

Best approach to Multi-file, multi-thread debug logging


What approach would you recommend for creating log files with output going to multiple files and input coming from multiple threads (some threads writing into the same log file), at volumes that can reach thousands of entries per second?

 

Everything is running in a single process, so critical-section locks can be used, and log entries must be flushed promptly so they aren't lost in RAM if the process crashes.


So you need to:

  1. Cope with thousands of entries per second
  2. Not lose a single entry when the process crashes fatally

Since you say it's debug logging and you don't need to survive a power loss: use a second process. It can flush the log entries to disk whenever a certain threshold is reached, and it has no problem with your main process crashing.
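
For illustration, here is a minimal sketch of the sending side of such a setup over a Windows named pipe; the pipe name '\\.\pipe\DebugLog' and the one-write-per-entry framing are assumptions of this sketch, and the logger process on the other end would create the pipe (CreateNamedPipe) and batch the entries to disk at its leisure:

uses
  Winapi.Windows, System.SysUtils;

var
  hPipe: THandle = INVALID_HANDLE_VALUE;

// Connect once at startup to the pipe created by the logger process.
procedure OpenLogPipe;
begin
  hPipe := CreateFile('\\.\pipe\DebugLog', GENERIC_WRITE, 0, nil,
    OPEN_EXISTING, 0, 0);
  if hPipe = INVALID_HANDLE_VALUE then
    RaiseLastOSError;
end;

// One WriteFile per entry; once the call returns, the entry is out of
// this process, so a later crash here does not lose it.
procedure SendLog(const Msg: UTF8String);
var
  Written: DWORD;
begin
  WriteFile(hPipe, PAnsiChar(Msg)^, Length(Msg), Written, nil);
end;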


Wouldn't inter-process communication severely lag this process?

Guest

I am doing something similar to the following. Sorry I am not pasting my own code, as it would not serve your need exactly, but I will try to explain in simple words, with short code snippets you can build on:

The logger will be like this:

1) There is no need for a critical section as long as you use the Interlocked* family of instructions, so the logger will be lock-free, with near-zero performance impact on the process, whether it logs one line per second or 10,000.

2) You need to have two buffers (there can be more), defined like this:

type
  PLogBuffer = ^TLogBuffer;
  TLogBuffer = record
    Current: Integer; // write cursor: next free offset in Data
    ToFile: Integer;  // number of bytes fully copied and ready to flush
    Size: Integer;    // capacity, always Length(Data)
    Data: TBytes;     // the actual log bytes
  end;

Current always starts at 0, and Size = Length(Data).

3) The logging function will look like this:

uses
  System.SyncObjs; // for TInterlocked

var
  LogBuffer, LogBackRec: PLogBuffer;

procedure Log(const Msg: string);
var
  Len, NewCurrent: Integer;
begin
  Len := Length(Msg) * SizeOf(Char);
  // Atomically reserve Len bytes; NewCurrent is the cursor after the add
  NewCurrent := TInterlocked.Add(LogBuffer.Current, Len);
  if NewCurrent <= LogBuffer.Size then
  begin
    // Copy into the reserved slice [NewCurrent - Len .. NewCurrent - 1]
    Move(Msg[1], LogBuffer.Data[NewCurrent - Len], Len);
    // Publish the copied bytes to the writer thread
    TInterlocked.Add(LogBuffer.ToFile, Len);
  end;
end;

4) A background thread sleeps for 30 ms, then swaps LogBuffer with LogBackRec (preferably with TInterlocked.Exchange), then sleeps for another 20 ms; after that it is safe to dump/append the Data in LogBackRec to the file and flush it (see the sketch after this list).

5) The sleep amounts are adjustable at runtime as you see fit.

6) The Size of Data is adjustable, but changing it at runtime (I mean after initialization) is not recommended. If Data is 1 MB, then a writer thread with a 50 ms cycle between flushes can theoretically write up to 20 MB of data per second; an SSD should be able to keep up with that, but this is a very extreme case, and the actual volume will most likely be far less.

7) You should know the following:
a) If you hit NewCurrent > LogBuffer.Size, then Msg is lost! You could sleep for 1 ms and try again, or convert the procedure into a function with a Boolean result, ... but I don't see that as needed, since it can be fixed by increasing Size or decreasing the writer thread's Sleep amount.

b) The data to be written to the file is the first ToFile bytes; Current plays no part in the flush.

c) After flushing the data to the file, reset Current and ToFile to 0.
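
As an illustration of step 4, here is a sketch of such a writer thread; it assumes the LogBuffer/LogBackRec variables from above, while the FLog file stream and the thread class name are placeholders of mine, not part of the original description:

uses
  Winapi.Windows, System.Classes, System.SyncObjs;

var
  FLog: TFileStream; // opened elsewhere, e.g. TFileStream.Create('debug.log', fmCreate)

type
  TLogWriterThread = class(TThread)
  protected
    procedure Execute; override;
  end;

procedure TLogWriterThread.Execute;
var
  Pending: PLogBuffer;
begin
  while not Terminated do
  begin
    Sleep(30); // let the worker threads fill the active buffer
    // Atomically publish the spare buffer and take the filled one
    Pending := PLogBuffer(TInterlocked.Exchange(Pointer(LogBuffer),
      Pointer(LogBackRec)));
    LogBackRec := Pending;
    Sleep(20); // grace period: writers still holding the old pointer finish their Move
    if Pending.ToFile > 0 then
    begin
      FLog.WriteBuffer(Pending.Data[0], Pending.ToFile);
      FlushFileBuffers(FLog.Handle); // force it past the OS cache
      Pending.Current := 0; // step c: reset both counters before reuse
      Pending.ToFile := 0;
    end;
  end;
end;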

I hope that works for you, and good luck.
 

 

26 minutes ago, Yaron said:

Wouldn't inter-process communication severely lag this process?

 

I can't say what you would call severe. Of course there is overhead for the second process, but your thousands of entries per second should be quite possible. I suppose this is also how professional logging solutions like Gurock SmartInspect (also available for Delphi) do it (Link)

Edited by Der schöne Günther


You might want to look at the Windows Event Tracing API. I haven't used it personally.

 

What I have used is something like this:

  1. In your application, log entries are written to a thread-safe, lock-free linked list (a Windows SList).
  2. A separate thread reads from the SList and writes the entries to a circular buffer in a memory-mapped file.
  3. A separate process reads entries from the circular buffer in the memory-mapped file and writes them to disk files (the shared buffer setup is sketched below).
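
For steps 2 and 3, both processes need a view of the same memory. A hedged sketch of that setup follows; the mapping name, header layout, and buffer size are illustrative assumptions, and the actual ring arithmetic is left out:

uses
  Winapi.Windows, System.SysUtils;

type
  PRingHeader = ^TRingHeader;
  TRingHeader = record
    WritePos: Int64; // advanced by the in-app writer thread (step 2)
    ReadPos: Int64;  // advanced by the separate log-writer process (step 3)
  end;

const
  RingBytes = 4 * 1024 * 1024; // data area following the header

var
  MapHandle: THandle;
  Header: PRingHeader;
  RingData: PByte;

procedure OpenSharedRing;
begin
  // A page-file-backed named mapping: created by whichever process gets
  // here first, opened by the other one.
  MapHandle := CreateFileMapping(INVALID_HANDLE_VALUE, nil, PAGE_READWRITE,
    0, SizeOf(TRingHeader) + RingBytes, 'Local\DebugLogRing');
  if MapHandle = 0 then
    RaiseLastOSError;
  Header := MapViewOfFile(MapHandle, FILE_MAP_ALL_ACCESS, 0, 0, 0);
  if Header = nil then
    RaiseLastOSError;
  RingData := PByte(Header) + SizeOf(TRingHeader);
  // The thread from step 2 appends at WritePos mod RingBytes; the reader
  // process consumes at ReadPos. Entries already copied into the mapping
  // survive a crash of the application process.
end;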


I usually do such logging in a manner similar to the one Kas Ob. presented.

 

I establish a logging queue, usually a TOmniQueue, as it does not require locking.

Then I create a logging thread. In essence it does just:

while logQueue.Get(msg) do
  Log(msg);

Each part of the program that wants to log just creates a `msg` record containing whatever info you need to perform the logging (message, severity, timestamp, target file ...) and pushes it into the logging queue. That's it.
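
A rough sketch of this pattern, with two stated assumptions: the blocking Get above is rendered here as TOmniBlockingCollection.Take (the stock OmniThreadLibrary container with a blocking dequeue), and the message fields are illustrative:

uses
  System.SysUtils, OtlCommon, OtlCollections;

type
  TLogMsg = class // illustrative field set; carry whatever Log() needs
  public
    Timestamp: TDateTime;
    Severity: Integer;
    TargetFile: string;
    Text: string;
  end;

var
  logQueue: TOmniBlockingCollection; // created once, shared by all threads

// Producer side, callable from any thread
procedure QueueLog(const AText, AFile: string; ASeverity: Integer);
var
  msg: TLogMsg;
begin
  msg := TLogMsg.Create;
  msg.Timestamp := Now;
  msg.Severity := ASeverity;
  msg.TargetFile := AFile;
  msg.Text := AText;
  logQueue.Add(msg); // wrapped in a TOmniValue; the consumer frees it
end;

// Consumer side: the body of the single logging thread
procedure LogLoop;
var
  value: TOmniValue;
  msg: TLogMsg;
begin
  // Take blocks until an item arrives and returns False after
  // CompleteAdding is called, which shuts the loop down cleanly.
  while logQueue.Take(value) do
  begin
    msg := TLogMsg(value.AsObject);
    try
      Log(msg); // your routine that does the actual file writing
    finally
      msg.Free;
    end;
  end;
end;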


In my company, we wrote our own logging solution from scratch for all of our apps to log to.  It consists of a Windows service hosting an out-of-process COM server, managing a thread-safe FIFO queue of log entries.  A background thread continuously writes the entries to a log file, closing it and starting a new file at night and archiving the old files, purging old records/files at scheduled intervals, etc.  Another thread forwards certain log entries to a secondary service running on another machine, to email, to SNMP traps, etc.  Our individual apps use the service's COM object to send log entries and other commands as needed.  Under heavy load, it easily handles thousands of log entries per second across multiple processes.  And if a given app crashes, COM handles the cleanup for us.
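
The post doesn't show the service's API, so purely as a hypothetical illustration, the client-facing COM interface of such a logger might be declared along these lines (the name, GUID, and method are invented for this sketch):

type
  ILogService = interface(IInterface)
    ['{8B7E2A10-4C2D-4E7B-9A31-5D0C6F1B2E44}']
    // Hypothetical method: nothing here comes from the actual product
    procedure WriteEntry(Severity: Integer; const Source, Text: WideString); safecall;
  end;

Since the object lives in an out-of-process server, COM marshals each call across the process boundary and releases the client's references when the client dies, which is the crash-cleanup behaviour described above.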

 

And then we have a separate viewer app to display the log entries in real-time.

 

This is probably a bit extreme for your use case, though.  Existing logging tech wasn't very good way back when this project was first started.


We use our https://github.com/synopse/mORMot/blob/master/SynLog.pas Open Source logging framework on the server side, sometimes with dozens of threads writing into the same log file.

It has very high performance, so logging doesn't slow down the process, and you can do very useful forensics and analysis when needed.

 

Having all threads logging in the same log file is at the same time a nightmare and a blessing.

It may be a nightmare since all operations are interleaved and difficult to identify.

It is a blessing with the right tool, one able to filter for one or several threads and find out what is really occurring: in that case, a single log file is better than several.

 

For instance, one of our https://livemon.com servers generates terabytes of logs - see https://leela1.livemon.net/metrics/counters - and we can still handle it.
This particular instance has been running for more than 8 months without a restart, with hundreds of simultaneous connections, logging incoming data every second... 🙂

 

We also wrote a simple and very usable log viewer tool - see http://blog.synopse.info/post/2011/08/20/Enhanced-Log-viewer and https://synopse.info/files/html/Synopse mORMot Framework SAD 1.18.html#TITL_103

This is the key to getting something usable out of heavily multithreaded logs.

 

So if you can, use a single log file per process rather than several, with proper thread identification and filtering.
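
A small sketch of that setup with SynLog.pas; the level and options chosen here are just one reasonable configuration, not the only way to set it up:

uses
  SynCommons, SynLog;

begin
  // Configure the per-process log family once at startup
  with TSynLog.Family do
  begin
    Level := LOG_VERBOSE;                 // narrow this set for production
    PerThreadLog := ptIdentifiedInOnFile; // tag every entry with its thread
  end;
  // Any thread can now log; entries are serialized into one file per process
  TSynLog.Add.Log(sllInfo, 'service started');
end.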

Edited by Arnaud Bouchez

