Yaron

Why is my Firebird database code unstable under load? (code included)


Guest

The case above is not quite clear to me. Where do you get the actual value for count?

IMHO - reading up on FB transaction handling will get you a long way. I think FireDAC has components for that, not sure though.

 

If you want to increase the count inside your SQL, then deadlocks could definitely happen.

Two ways to manage that: either handle the case in your exception block, posting something that will retry the operation. That could prove messy depending on the rest of the application/service architecture.

The second way is to tell FB that this transaction should wait for a lock to be released. The message you got indicates that the transaction is NOT waiting. You set the transaction wait as a value in seconds.

 

I'm not the best to help here since I do not use FireDAC and I have no idea how to handle transaction parameters, autocommits and such using FireDAC.
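Independent of FireDAC, the wait behaviour in plain Firebird SQL looks roughly like this (an untested sketch; LOCK TIMEOUT needs Firebird 2.0 or later, and the table/column names are taken from the code discussed in this thread):

```sql
-- Wait up to 5 seconds for a conflicting lock instead of failing immediately
SET TRANSACTION READ COMMITTED WAIT LOCK TIMEOUT 5;

UPDATE GALLERY_TABLE SET VIEW_COUNT = VIEW_COUNT + 1
 WHERE GALLERY_UID = :galleryuid;

COMMIT;
```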

 

HTH,

 

/Dany

22 minutes ago, Dany Marmur said:

The case above is not quite clear to me. Where do you get the actual value for count?

IMHO - reading up on FB transaction handling will get you a long way. I think FireDAC has components for that, not sure though.

 

If you want to increase the count inside your SQL, then deadlocks could definitely happen.

Two ways to manage that: either handle the case in your exception block, posting something that will retry the operation. That could prove messy depending on the rest of the application/service architecture.

The second way is to tell FB that this transaction should wait for a lock to be released. The message you got indicates that the transaction is NOT waiting. You set the transaction wait as a value in seconds.

 

I'm not the best to help here since I do not use FireDAC and I have no idea how to handle transaction parameters, autocommits and such using FireDAC.

 

HTH,

 

/Dany

 

The view number is read earlier in the code. I actually replaced that with a more reliable SQL command that increments the value instead of setting it:

procedure ClubHouseDB_IncCampaignViewCount(sDebugFile,sGalleryUID : String);
var
  dbConn     : TFDConnection;
  dbQuery    : TFDQuery;
  deadLocked : Boolean;
begin
  deadLocked := False;
  dbConn  := TFDConnection.Create(nil);  dbConn.ConnectionDefName := dbPoolName;
  dbQuery := TFDQuery.Create(nil);       dbQuery.Connection       := dbConn;
  Try
    dbQuery.SQL.Text   := 'UPDATE GALLERY_TABLE SET VIEW_COUNT=VIEW_COUNT+1 WHERE GALLERY_UID=:galleryuid;';
    Try
      dbQuery.Prepare;
      dbQuery.ParamByName('galleryuid').AsString := sGalleryUID;
      dbQuery.ExecSQL;
    Except
      on E : Exception do If Pos('deadlock',Lowercase(E.Message)) > 0 then deadLocked := True;
    End;
  Finally
    dbQuery.Free;
    dbConn.Free;
  End;
  If deadLocked = True then ClubHouseDB_IncCampaignViewCount(sDebugFile,sGalleryUID);
end;

As you can see from the updated code, I'm catching the deadlock by looking at the returned exception message. I'm not sure it's the best approach, but I haven't been able to find another way to identify the deadlock.
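A possibly more robust alternative (a sketch I have not verified against every FireDAC version): FireDAC wraps server errors in EFDDBEngineException from FireDAC.Stan.Error, which exposes a Kind property with an ekDeadlock value, so the except block could test that instead of string-matching the message:

```pascal
uses
  FireDAC.Stan.Error; // EFDDBEngineException, ekDeadlock

// Replacement for the inner Try..Except of the procedure above:
try
  dbQuery.ExecSQL;
except
  on E: EFDDBEngineException do
    if E.Kind = ekDeadlock then
      deadLocked := True  // flag for a bounded retry instead of blind recursion
    else
      raise;
end;
```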

 

I don't have a problem increasing the deadlock timeout, but I couldn't find anything on how to do that. I'm not even sure if it's part of the Firebird SQL statement or if I need to specify it through the FireDAC components.

 


What are the settings for the transaction?

 

Maybe the use of a sequence and

NEXT VALUE FOR

or

GEN_ID()

could be an option.
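For reference, both forms in Firebird SQL (the sequence name is made up here):

```sql
CREATE SEQUENCE VIEW_SEQ;

-- SQL-standard form (Firebird 2.x and later):
SELECT NEXT VALUE FOR VIEW_SEQ FROM RDB$DATABASE;

-- Legacy form; the second argument is the increment:
SELECT GEN_ID(VIEW_SEQ, 1) FROM RDB$DATABASE;
```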

Guest
2 minutes ago, Yaron said:

I don't have a problem increasing the deadlock timeout, but I couldn't find anything on how to do that

I am afraid you need to check the FireDAC documentation for that. In IBO I always create a Transaction component where I can set the isolation level and other parameters.
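In FireDAC that might look something like the sketch below; I have not verified this myself, so treat the Options names and the TPB parameter strings as assumptions to be checked against the FireDAC documentation:

```pascal
var
  dbTx : TFDTransaction;
begin
  dbTx := TFDTransaction.Create(nil);
  dbTx.Connection    := dbConn;
  dbConn.Transaction := dbTx;
  dbTx.Options.Isolation := xiReadCommitted;
  // Hypothetical Firebird TPB items - check the exact spelling in the docs:
  dbTx.Options.Params.Add('wait');
  dbTx.Options.Params.Add('lock_timeout=5');
end;
```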

 

You see, you can start a transaction with isolation level concurrency (see the FB docs) and FB will make sure that reads and writes are consistent within the transaction. That would have been a solution if you absolutely had to first read the existing value and then (a bit later) write it back increased. FB would see to it that another transaction would get a deadlock even though you just read the value (the details here are a bit murky in my memory, check the docs). Otherwise one process could read the value, a second process could read the same one, and both processes would increase the same value. You would get a lower count than in reality.

 

But the solution of using the value from the database itself inside the SQL is OK too, and there you would only need the committed isolation level. If two processes/threads write at the same time, you get a deadlock. IMHO, when it comes to things like click-tracking and measurement data (small and fast operations), a wait of a couple of seconds can be a boon, as you won't have to fiddle with queues of re-tries.

 

HTH

Guest

@Markus Kinzler, generators (in FB) are outside of transaction context. They can only be used for obtaining a unique identifier. You cannot produce "an audit-able series of numbers" using generators. Sure, you can compare two generator-generated values and determine which was generated first. But as soon as there is an exception or a rollback, there will be a "hole" in the series of numbers.

 

The OP's problem is that he wants to increase the count stored in a record; there are multiple records and each has its own count. Using generators he would have to create one per record. It's a bit quirky.

Just now, Markus Kinzler said:

What are the settings for the transaction?

 

Maybe the use of a sequence and


NEXT VALUE FOR

or


GEN_ID()

  could be an option.

Exactly, just what I was about to recommend. You create a deadlock because the new value of VIEW_COUNT depends on the old value, and due to the nature of the multi-generational architecture and simultaneous transactions, there can be different old values, depending on the isolation settings of your transaction.

Using a generator/sequence for VIEW_COUNT circumvents the problem for this specific update query. However, a proper transaction setting is preferred anyway. There is no magic solve-it-all setting; I recommend studying the docs too, it will most likely come in handy later anyway.

Guest
1 minute ago, Zoran Bonuš said:

Using a generator/sequence for VIEW_COUNT circumvents the problem for this specific update query.

How?!? I have a table of pictures. I want to count each click for each picture. How do I do that using generators?

 

59 minutes ago, Zoran Bonuš said:

Exactly, just what I was about to recommend. You create a deadlock because the new value of VIEW_COUNT depends on the old value, and due to the nature of the multi-generational architecture and simultaneous transactions, there can be different old values, depending on the isolation settings of your transaction.

Using a generator/sequence for VIEW_COUNT circumvents the problem for this specific update query. However, a proper transaction setting is preferred anyway. There is no magic solve-it-all setting; I recommend studying the docs too, it will most likely come in handy later anyway.

I'm a complete novice working with databases and, like Dany, I want to know how as well.

I actually tried reading the documentation (https://firebirdsql.org/refdocs/langrefupd25-nextvaluefor.html), but it's pretty alien to me at this point.


Looks like this issue is somewhat complex. I've actually read this article:
https://www.the-art-of-web.com/sql/counting-article-views/

 

In it they suggest creating a separate table just for view counting, with each row representing a view (an SQL INSERT), and every so often running aggregation code that converts the rows into actual pageview numbers, inserts them into the original table and erases the counted rows.

Using this method means no deadlocks are possible when counting things, and it might even mean less DB overhead (they write that UPDATE is slower than INSERT, but that's not something I have confirmed).
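A minimal sketch of that scheme (the table and column names are my own invention; the identity column needs Firebird 3.0+, older versions would use a generator plus trigger):

```sql
-- One row per view; concurrent INSERTs never conflict, so no deadlocks
CREATE TABLE GALLERY_VIEW_LOG (
  LOG_ID      BIGINT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY,
  GALLERY_UID VARCHAR(38) NOT NULL,
  VIEWED_AT   TIMESTAMP DEFAULT CURRENT_TIMESTAMP NOT NULL
);

-- Recording a view:
INSERT INTO GALLERY_VIEW_LOG (GALLERY_UID) VALUES (:galleryuid);

-- Periodic aggregation, run by a single maintenance job inside one
-- SNAPSHOT transaction so rows inserted meanwhile are not lost:
UPDATE GALLERY_TABLE G
   SET VIEW_COUNT = VIEW_COUNT +
       (SELECT COUNT(*) FROM GALLERY_VIEW_LOG L
         WHERE L.GALLERY_UID = G.GALLERY_UID);
DELETE FROM GALLERY_VIEW_LOG;
```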

 

Guest

@Yaron, that is what I'm doing in a lot of places. The clicks done on my server are logged with a lot of the click information, like IP, agent, forwarding and more. I append a record to a table with a foreign key to the document (picture, whatever) that was clicked. Thus I can later retrieve all kinds of statistics. Of course a generator is used to give that table a unique id for each click. That id does not have to be sequential, since it's the number of records for a given foreign key you will be counting.

 

Select Count(*) from clicks where clicks.document_id = :docid and clicks.IP = :analyse_ip_activity

 

/D


This task is pretty common; it is called "aggregates". Keeping the amount of goods in a warehouse is another example. There are two options for doing this right:

- Locking. A transaction should lock the counter, update it and unlock it. Other transactions must wait for it. This could be done artificially with generators serving as flags (GEN_ID(gen, 1) to lock the counter, GEN_ID(gen, -1) to unlock it), but that would require other transactions to loop until they get the lock; or it could be done with waiting transactions and SELECT FOR UPDATE.
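In Firebird syntax, the SELECT FOR UPDATE variant would look roughly like this (a sketch; WITH LOCK is Firebird-specific and has restrictions, see the docs):

```sql
SET TRANSACTION READ COMMITTED WAIT;

-- Take the row lock up front; concurrent transactions block here
-- instead of erroring at UPDATE time
SELECT VIEW_COUNT FROM GALLERY_TABLE
 WHERE GALLERY_UID = :galleryuid
   FOR UPDATE WITH LOCK;

UPDATE GALLERY_TABLE SET VIEW_COUNT = VIEW_COUNT + 1
 WHERE GALLERY_UID = :galleryuid;

COMMIT;
```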

- Gathering aggregates. You just log all your events and calculate the counter as SELECT COUNT(*) FROM T_VIEWS WHERE GAL_ID = :galID; of course the table can grow large quickly, so you also have to group all events that happened before some moment and move the counters to an "archive" table, deleting the counted records from the current table. This grouping must be done by a maintenance connection, not by a user, probably launched with a scheduler. So you'll have the final result as

PastCount  = SELECT VIEW_COUNT FROM T_PAST_VIEWS WHERE GAL_ID = :galID;
CurrCount  = SELECT COUNT(*) FROM T_CURR_VIEWS WHERE GAL_ID = :galID;
TotalCount = PastCount + CurrCount

Guest
10 minutes ago, Fr0sT.Brutal said:

- Locking. A transaction should lock the counter, update it and unlock it. Other transactions must wait for it. This could be done artificially with generators serving as flags (GEN_ID(gen, 1) to lock the counter, GEN_ID(gen, -1) to unlock it), but that would require other transactions to loop until they get the lock; or it could be done with waiting transactions and SELECT FOR UPDATE.

That is exactly what FB does for you if you set and handle your transaction parameters correctly. No need for explicit locking at all. It is, after all, a true ACID RDBMS. Only Oracle, InterBase and PostgreSQL can compete in this field, IMHO.

14 hours ago, Dany Marmur said:

That is exactly what FB does for you if you set and handle your transaction parameters correctly. No need for explicit locking at all. It is, after all, a true ACID RDBMS. Only Oracle, InterBase and PostgreSQL can compete in this field, IMHO.

I can speak for FB only, but it's unable to automatically sort out two concurrent updates of the same record, even with proper parameters. It can ensure data consistency, but when two parallel transactions with isolation level "SNAPSHOT" modify the same value, it will throw an error. Moreover, this is the only correct behavior. Without explicit record locking, the DB server couldn't foretell which records you will touch, and it ensures every transaction gets its own version of the data. So if cnt is initially 0, Tx1 sets cnt=cnt+1 (effectively 1), Tx2 sets cnt=cnt+1 (effectively 1) as well, Tx1 commits, and Tx2 commits too - what value should cnt have? Rather than losing a modification, it is safer for the DB server to throw an error on Tx2's commit; moreover, it postpones Tx2's update until Tx1 commits and only then throws.

So the user has to handle this situation manually anyway. The DB server helps, but it won't do all the work.

 

Test case: two isql sessions, any database.

1. isql#1:

connect "fbtest" user SYSDBA password masterkey;
SET TRANSACTION WAIT SNAPSHOT;

2. isql#2: the same.

3. isql#1:

update some_table set some_field=some_field+1;

4. isql#2: the same. The update hangs.

5. isql#1:

commit;

6. isql#2 fails with:

Statement failed, SQLSTATE = 40001
deadlock
-update conflicts with concurrent update
-concurrent transaction number is ####

Guest

@Fr0sT.Brutal, hmm. Granted, "no need for explicit locking at all" was too strong a statement; it does not hold for all cases.


So using a proper transaction should be enough, or am I wrong?

 

1: set transaction wait read uncommitted lock timeout 30;

2: set transaction wait read uncommitted lock timeout 30;

1: update test set Total=Total+1; // done
2: update test set Total=Total+1; // waiting

1: commit; // 2 is done

2: commit;

 

Field Total is increased by 2.

 

On 12/19/2019 at 1:50 PM, Yaron said:

 

The view number is read earlier in the code. I actually replaced that with a more reliable SQL command that increments the value instead of setting it:


procedure ClubHouseDB_IncCampaignViewCount(sDebugFile,sGalleryUID : String);
var
  dbConn     : TFDConnection;
  dbQuery    : TFDQuery;
  deadLocked : Boolean;
begin
  deadLocked := False;
  dbConn  := TFDConnection.Create(nil);  dbConn.ConnectionDefName := dbPoolName;
  dbQuery := TFDQuery.Create(nil);       dbQuery.Connection       := dbConn;
  Try
    dbQuery.SQL.Text   := 'UPDATE GALLERY_TABLE SET VIEW_COUNT=VIEW_COUNT+1 WHERE GALLERY_UID=:galleryuid;';
    Try
      dbQuery.Prepare;
      dbQuery.ParamByName('galleryuid').AsString := sGalleryUID;
      dbQuery.ExecSQL;
    Except
      on E : Exception do If Pos('deadlock',Lowercase(E.Message)) > 0 then deadLocked := True;
    End;
  Finally
    dbQuery.Free;
    dbConn.Free;
  End;
  If deadLocked = True then ClubHouseDB_IncCampaignViewCount(sDebugFile,sGalleryUID);
end;

As you can see from the updated code, I'm catching the deadlock by looking at the returned exception message. I'm not sure it's the best approach, but I haven't been able to find another way to identify the deadlock.

 

I don't have a problem increasing the deadlock timeout, but I couldn't find anything on how to do that. I'm not even sure if it's part of the Firebird SQL statement or if I need to specify it through the FireDAC components.

 

You can minimize the deadlock problem if you call StartTransaction just before the ExecSQL and Commit right after it. Then you can also do a controlled Rollback in case of an error.
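Applied to the procedure quoted above, that would look roughly like this (a sketch; StartTransaction/Commit/Rollback are the TFDConnection calls I know of, verify against your FireDAC version):

```pascal
dbConn.StartTransaction;
try
  dbQuery.ExecSQL;
  dbConn.Commit;    // commit immediately to keep the transaction short
except
  dbConn.Rollback;  // controlled rollback on any error
  raise;
end;
```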

On 12/20/2019 at 9:34 PM, Vandrovnik said:

So using a proper transaction should be enough, or am I wrong?

 

1: set transaction wait read uncommitted lock timeout 30;

2: set transaction wait read uncommitted lock timeout 30;

1: update test set Total=Total+1; // done
2: update test set Total=Total+1; // waiting

1: commit; // 2 is done

2: commit;

 

Field Total is increased by 2.

Hmm, AFAIK dirty reads are considered bad practice even if they seem to work. Here (in Russian; a translator will help) a use case similar to yours is described, and the advice is to try "read committed no_record_version".
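For reference, that would be spelled roughly like this in a SET TRANSACTION statement (a sketch; check the Firebird docs for the exact clause order):

```sql
SET TRANSACTION WAIT READ COMMITTED NO RECORD_VERSION LOCK TIMEOUT 30;
```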

Guest

Why not use .NET Core (which provides support for dependency injection) and EF Core (with some SQL procedures for the heavier stuff, as an ORM is usually good for simpler things) and some lovely MS SQL Server 👹 - no struggles and no deadlocks when two users query your database while loading the web page 🙂

 

In the past I did have some deadlocks too, in a desktop app - it was my fault, not the server's or the components', and I have a feeling the same is true here. While I cannot help with FireDAC and the other fancy things you guys use, I strongly recommend the following reading for a better understanding of the issues: https://michaeljswart.com/2011/09/mythbusting-concurrent-updateinsert-solutions/.

 

I would not rely on any components that much; rather, prepare SQL statements/procedures to handle this and/or templates for in-app usage, and then simply call them using ADO.NET with the minimal required setup just to properly feed the SQL server. If the transaction is properly written (with correct error handling), then there will be no issues, and if there are, roll back and log the error message so you can see what went wrong.

 

PS: UPDATE is always slower than INSERT, and has to be; that is a correct statement.

 

Guest
8 hours ago, Hans J. Ellingsgaard said:

MS SQL server can have deadlocks, just like any other database.

I never said that MS SQL has no deadlocks; that is the wrong conclusion 🙂 My point was about .NET Core, which works out of the box for me with no issues. A deadlock means you messed up; it is usually safer to use ready-to-go solutions like .NET 🙂

 

But I understand that the OP is about to build his own web server using Delphi; I have never done it that way.

 

