

Hans J. Ellingsgaard
Content Count: 101
Joined
Last visited

Posts posted by Hans J. Ellingsgaard
On 12/19/2019 at 1:50 PM, Yaron said: The view number is read earlier in the code. I actually replaced it with a more reliable SQL command that increments the value instead of setting it:
procedure ClubHouseDB_IncCampaignViewCount(sDebugFile, sGalleryUID: String);
var
  dbConn: TFDConnection;
  dbQuery: TFDQuery;
  deadLocked: Boolean;
begin
  deadLocked := False;
  dbConn := TFDConnection.Create(nil);
  dbConn.ConnectionDefName := dbPoolName;
  dbQuery := TFDQuery.Create(nil);
  dbQuery.Connection := dbConn;
  try
    dbQuery.SQL.Text := 'UPDATE GALLERY_TABLE SET VIEW_COUNT=VIEW_COUNT+1 WHERE GALLERY_UID=:galleryuid;';
    try
      dbQuery.Prepare;
      dbQuery.ParamByName('galleryuid').AsString := sGalleryUID;
      dbQuery.ExecSQL;
    except
      on E: Exception do
        if Pos('deadlock', LowerCase(E.Message)) > 0 then
          deadLocked := True;
    end;
  finally
    dbQuery.Free;
    dbConn.Free;
  end;
  if deadLocked then
    ClubHouseDB_IncCampaignViewCount(sDebugFile, sGalleryUID);
end;
As you can see from the updated code, I'm catching the deadlock by looking at the returned exception message. I'm not sure it's the best approach, but I haven't been able to find another way to identify the deadlock.
I don't have a problem increasing the deadlock timeout, but I couldn't find anything on how to do that. I'm not even sure if it's part of the Firebird SQL statement or if I need to specify it through the FireDAC components.
You can minimize the deadlock problem if you call StartTransaction just before ExecSQL and Commit right after it. That also lets you make a controlled Rollback in case of an error.
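Applied to the procedure above, the explicit transaction could look roughly like this (a minimal sketch; it assumes dbConn and dbQuery are set up as in the quoted code):

```pascal
// Wrap the update in an explicit transaction so the lock is held
// for the shortest possible time.
dbConn.StartTransaction;
try
  dbQuery.ExecSQL;
  dbConn.Commit;        // release locks immediately after the update
except
  dbConn.Rollback;      // controlled rollback on any error
  raise;
end;
```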
-
For Interbase the default username and password have been SYSDBA and masterkey. Has that changed in later versions?
-
1 hour ago, Uwe Raabe said:While this may suit a single threaded application, the pooling approach is the recommended way for multi-threaded DB access.
http://docwiki.embarcadero.com/RADStudio/Rio/en/Multithreading_(FireDAC)#Connection_Pooling
Yes, you're right. I was thinking about a single-user client with multiple datamodules.
-
You could have just one FDConnection and let all the other datamodules use that connection. You would then have just one place to set up all your connection logic.
And by the way... some SQL servers have license restrictions on how many simultaneous connections you can have, and if you are using one connection for each datamodule, you could easily end up being unable to connect to the SQL server.
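A pooled connection definition (as described in the docwiki link above) can be registered once at startup, and every datamodule or thread then creates its TFDConnection against that definition. A sketch, where the definition name, driver, and database path are assumptions:

```pascal
uses
  FireDAC.Stan.Intf, FireDAC.Comp.Client;

var
  oDef: IFDStanConnectionDef;
begin
  oDef := FDManager.ConnectionDefs.AddConnectionDef;
  oDef.Name := 'MyPool';                   // hypothetical definition name
  oDef.Params.DriverID := 'FB';            // Firebird, as an example
  oDef.Params.Database := 'C:\data\mydb.fdb';
  oDef.Params.Pooled := True;              // enable connection pooling
  oDef.Apply;
end;

// In each thread or datamodule:
//   FDConnection1.ConnectionDefName := 'MyPool';
```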
-
Why are you not using ApplyUpdates to delete the marked records from the database?
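With cached updates, the locally deleted records can be pushed to the database in one batch. A minimal sketch, assuming FDQuery1 has CachedUpdates enabled and the marked records have already been removed with Delete:

```pascal
// Push the cached deletes to the database; ApplyUpdates returns
// the number of errors (0 means everything was applied).
if FDQuery1.ApplyUpdates(0) = 0 then
  FDQuery1.CommitUpdates    // clear the change log on success
else
  FDQuery1.CancelUpdates;   // or inspect and handle the failed records
```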
-
If you can call the generator directly, like in Interbase/Firebird and other db's, I would prefer that approach. I can't find any documentation that this is possible with Pervasive db's, but you can check it out.
Create a query that calls the generator and wrap it in a function like this (the example is in Interbase syntax):
function GetGenID(GeneratorName: string): Integer;
begin
  with FDQGetGenID do
  begin
    SQL.Clear;
    SQL.Add('SELECT GEN_ID(' + GeneratorName + ', 1) FROM rdb$database');
    Open;
    Result := Fields[0].AsInteger;
    Close;
  end;
end;
Then, when you insert new data, you call the GetGenID function:
FDQuery.Insert;
FDQueryID.Value := GetGenID('GeneratorName');
FDQueryMASTERDATA.AsString := 'Some value';
FDQuery.Post;
You can also place the call to GetGenID in the FDQuery events, like OnNewRecord or BeforeInsert.
---
If this approach is not possible with Pervasive, then the following could work; it is at least working with Interbase.
In this example I have used a DBGrid and a DataSource component to connect to the FDQuery, and the query has two fields called "ID" and "MASTERDATA". Set the AutoGenerateValue property of the ID field to arAutoInc (if you've not already done so).
In the OnDataChange event of the DataSource put the following code:
if (FDQMasterID.Value < 0) and (FDQMaster.State = dsInsert) and not FDQMasterMASTERDATA.IsNull then
begin
  FDQMaster.Post;
  FDQMaster.Refresh;
end;
You need to check that at least one of the fields apart from the ID field has a value, or you will get an error on the Post command if the user adds two empty records.
-
You need to close and open the query every time you change the parameter values, if it has a result set.
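For example, with a hypothetical parameter name:

```pascal
FDQuery1.Close;                               // discard the old result set
FDQuery1.ParamByName('id').AsInteger := 42;   // set the new parameter value
FDQuery1.Open;                                // re-execute the query
```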
-
On 4/1/2019 at 2:11 PM, GreatDayDan said:Better yet, How do I an in-memory db instead of an on-disk db? I have tried setting the FDConnection filename to :memory: but that did not work.
You can use FDMemTables; they can load and save from streams, files, and other datasets.
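For example, an FDMemTable can be filled from a query and persisted to disk (a sketch; the component and file names are assumptions):

```pascal
// Copy structure and data from an open query into the in-memory table
FDMemTable1.CopyDataSet(FDQuery1, [coStructure, coRestart, coAppend]);

// Persist and reload the in-memory table in FireDAC's binary format
FDMemTable1.SaveToFile('data.fds', sfBinary);
FDMemTable1.LoadFromFile('data.fds', sfBinary);
```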
-
Yes, I can see that from your link. It came as a surprise to me, as I am mostly used to working with IB and Firebird databases.
There is a NOLOCK isolation level that will minimize the risk of locking, but it will not be suitable if there is a risk of data being changed during the execution of the query. Another thing to do is to make sure that all the fields in the where clause and the joins are indexed, to avoid table scans.
-
It sounds strange. You should never get a lock from a read-only query. If you have many concurrent users, you could see a drop in performance, but not locks.
-
Bitbucket is a great git hosting service. You can have 5 users for free, so you can always give it a try. It works well with Sourcetree as a client. It's also easy to integrate with Jira.
-
You can do it with two queries. Make one query that selects all the fields that you want in the grid (don't use "select *", use the wanted field names instead). Then make another query, with a parameter, that selects the clob field for one record only. Like this:
select [clob field]
from [table name]
where [Primary Field ID] = :ParameterName
Then you open the second query after the first query has landed on the wanted record. The parameter gets its value from the first query. There might be an event on the grid you can use - I don't know the DevEx grid. You can also use the AfterScroll event of the dataset to open the second query.
If you use the second approach, you will need a delay on the opening of the second query, or it will open for every record the user scrolls past. You can use a timer to get a small delay: disable it in the BeforeScroll event and enable it in the AfterScroll event. The delay should just be long enough to make sure that it is not triggering while the user is scrolling.
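The timer-based variant could be sketched like this (component names such as FDQMaster, FDQClob, and OpenClobTimer are assumptions):

```pascal
procedure TForm1.FDQMasterBeforeScroll(DataSet: TDataSet);
begin
  OpenClobTimer.Enabled := False;   // cancel any pending open while scrolling
end;

procedure TForm1.FDQMasterAfterScroll(DataSet: TDataSet);
begin
  OpenClobTimer.Enabled := False;   // restart the short delay
  OpenClobTimer.Enabled := True;
end;

procedure TForm1.OpenClobTimerTimer(Sender: TObject);
begin
  OpenClobTimer.Enabled := False;
  FDQClob.Close;
  FDQClob.ParamByName('ParameterName').AsInteger := FDQMasterID.Value;
  FDQClob.Open;                     // fetch the clob for the current record only
end;
```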
-
If your SQL server is not on the same machine as your client, it might be that you don't have the right SQL driver version. Look in the ODBC Data Source Administrator to see which version of the SQL driver you have. E.g. for SQL Server 2008 you need at least version 10.00. I'm not sure which version you need for SQL Server 2012, but it is probably a bit higher than that. You can download the correct version from Microsoft.
-
On 11/25/2018 at 10:45 AM, David Schwartz said:Do LiveBindings help solve this problem vs. regular data-aware controls? (I haven't really played with them yet.)
LiveBindings will not make any difference here. It's the design of your queries, and your database components, that will make all the difference.
-
On 11/25/2018 at 10:45 AM, David Schwartz said:Also, how do you NOT load up things like BLOBs unless you actually want to see them? (Since I pre-defined the tables with the fields so I can refer to field vars in the code rather than using FieldByName for everything, if I don't do a 'select * from...' then an error arises saying a data field is missing.)
If you get an error that says a data field is missing, it's because some of the fields that you excluded from your query are connected to a data-aware component. The easiest way to find that component is to search for the field directly in the dfm file, where you can see which component it is assigned to.
-
On 11/25/2018 at 11:30 AM, David Schwartz said:My question refers to a use-case that's similar to thousands of others like it: suppose you have an app built in, say, D7 that was originally designed as a client/server app, using standard Delphi components available at the time, and it's now exhibiting poor performance after replacing the local file server with a DB server (MS SQL Server, Oracle, MySQL, whatever) because the 243 SQL queries on the 178 forms are all set to do "select * from xxx" in order to populate the controls on the form. So loading up some forms can literally take 10-15 minutes.
It's a very general question: how does one usually deal with this?
(In my experience, Management usually says, "We're going to rebuild the app in C#/.NET because Delphi clearly can't handle this workload." But I'm asking for a more TECHNICAL explanation that doesn't just throw the baby out with the bathwater and start over from scratch.)
Your problem has nothing to do with Delphi, and C#/.NET cannot handle it any better. If you are using the BDE, you will need to switch to FireDAC. FireDAC datasets have settings for how many records to fetch at a time. A good piece of advice is also not to use 'select *', but instead 'select fieldname from'. FireDAC is really fast and can easily handle db's with millions of records.
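The fetch settings mentioned above live in the dataset's FetchOptions; a minimal sketch, with hypothetical field and table names:

```pascal
FDQuery1.FetchOptions.Mode := fmOnDemand;   // fetch rows as the grid scrolls
FDQuery1.FetchOptions.RowsetSize := 50;     // rows fetched per round trip
FDQuery1.SQL.Text := 'select ID, NAME from CUSTOMERS';  // not select *
FDQuery1.Open;
```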
-
It was not an assumption. It's a copy/paste from the sqlite.org webpage.
-
Here is what SQLite says on its own webpage, sqlite.org:
Many concurrent writers? → choose client/server
If many threads and/or processes need to write the database at the same instant (and they cannot queue up and take turns) then it is best to select a database engine that supports that capability, which always means a client/server database engine.
SQLite only supports one writer at a time per database file.
Why not choose a DBMS when there are lots of them to choose from for free? You also get extra features like live backup and replication for free. But as stated by someone else before, if you carefully design the interface between your database and REST server, it's not too much work to switch database at a later stage.
-
Don't use SQLite in a multi-user environment; go for a real RDBMS like Firebird, Interbase, etc.
Why is my firebird database code unstable under load? (code included)
in Databases
Posted
MS SQL Server can have deadlocks, just like any other database.