Everything posted by David Schwartz

  1. I can't recall ever seeing any discussions about unit testing of server-side assets, like views, stored procs, and functions. Does anybody do it?
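For concreteness, here's the kind of thing I mean, at least from the client side -- a sketch of a DUnitX test that calls a stored function through a query and checks the result. (FireDAC, the 'TestDB' connection definition, and the calc_tax function are all just placeholders.)

unit StoredFuncTests;

interface

uses
  DUnitX.TestFramework, FireDAC.Comp.Client;

type
  [TestFixture]
  TStoredFuncTests = class
  private
    FConn: TFDConnection;   // FireDAC used for illustration; any DAC would do
  public
    [Setup]    procedure Setup;
    [TearDown] procedure TearDown;
    [Test]     procedure CalcTax_ReturnsExpectedRate;
  end;

implementation

procedure TStoredFuncTests.Setup;
begin
  FConn := TFDConnection.Create(nil);
  FConn.ConnectionDefName := 'TestDB';   // hypothetical connection definition
  FConn.Connected := True;
end;

procedure TStoredFuncTests.TearDown;
begin
  FConn.Free;
end;

procedure TStoredFuncTests.CalcTax_ReturnsExpectedRate;
var
  v: Double;
begin
  // calc_tax is a made-up server-side function under test
  v := FConn.ExecSQLScalar('select calc_tax(100.00) from dual');
  Assert.AreEqual(8.25, v, 0.001);
end;

initialization
  TDUnitX.RegisterTestFixture(TStoredFuncTests);

end.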
  2. David Schwartz

    Delphi Rio IDE hangs opening 10.2.3 projects

    D10.2.3 does seem to be slower than 10.2.2 at opening projects with complex forms, appearing to "hang" for quite a while the first time any complex form is opened.
  3. I'm working with an Oracle DB and am using TOracleDataset from AllroundAutomation. The app is pre-processing some data and saving it to a couple of tables. One string field is mostly pretty small, usually <100 characters. But there are some big instances, and a handful of REALLY HUGE instances (8500 chars). The max varchar2 size is 4000 chars. So to hold the handful of records that are bigger, I replaced it with a CLOB field.

The problem is, when the app starts up, it opens the tables using "select * from xyz". That works fine as long as there are no LOB fields -- each one takes 1-2 seconds to open. But adding a CLOB field suddenly causes it to take ~15-20 seconds to open, even if there's just one short line of data in each of the CLOB fields. So I'm trying to figure out some way to "lazy load" the data in the CLOB fields (or even most of the table).

The nature of this problem is that most of the fields are static -- which is why they're filled from the initial preprocessing step. Even the initial data in this CLOB field comes from the pre-process step. But the user will need to modify it and then save it. (Ironically, it will usually be shrunken way down in the process.) This CLOB field is not read-only, but read-write. However, the data really isn't needed until the user scrolls to that exact record. Then it gets loaded almost instantly even if it's huge.

So I'm trying to figure out a way of minimizing the initial load time when I open the table so it doesn't actually load the CLOB data right then, but rather when the user accesses the record the first time. If it matters, none of the data needs to be pre-loaded. It can be loaded as the user scrolls through it. It can also be paged, but I don't know how to do that with typical DB-aware controls. (I'm using cxGrids as well as some separate edit controls, and I've heard cxGrids may have that ability.) I'm looking for suggestions on how to resolve this quickly -- as opposed to re-architecting the entire app.
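To illustrate the shape of what I'm after (just a sketch -- written against FireDAC because it's compact to show, and the table/field names are invented; I'd adapt it to TOracleDataset):

uses
  System.Classes, Data.DB, Vcl.StdCtrls, FireDAC.Comp.Client;

type
  TdmMain = class(TDataModule)
    qryMain, qryClob: TFDQuery;   // FireDAC stand-ins for the real datasets
    procedure qryMainAfterScroll(DataSet: TDataSet);
  private
    memBigText: TMemo;            // really lives on a form; here to keep it short
    function FetchBigText(aID: Integer): string;
  public
    procedure OpenTables;
  end;

procedure TdmMain.OpenTables;
begin
  // Open WITHOUT the CLOB column -- the fast, varchar2-style baseline.
  qryMain.SQL.Text := 'select id, name, short_text from xyz';  // xyz is made up
  qryMain.Open;
end;

function TdmMain.FetchBigText(aID: Integer): string;
begin
  // One tiny keyed query, run only when a record is actually visited.
  qryClob.Close;
  qryClob.SQL.Text := 'select big_text from xyz where id = :id';
  qryClob.ParamByName('id').AsInteger := aID;
  qryClob.Open;
  Result := qryClob.FieldByName('big_text').AsString;
  qryClob.Close;
end;

procedure TdmMain.qryMainAfterScroll(DataSet: TDataSet);
begin
  // "Lazy load": the CLOB is fetched the first time the user lands on the row.
  memBigText.Lines.Text := FetchBigText(DataSet.FieldByName('id').AsInteger);
end;

The point being that qryMain never touches the CLOB column, so its Open() should stay at the varchar2 baseline.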
  4. David Schwartz

    log file policies

    I'm curious if anybody has any particular policies they follow when it comes to the use of log files. I've actually never run into any such policy anywhere I've ever worked. But we've run into a problem that is leaving us open to someone asking, quite legitimately, "What sort of policies do you guys follow when it comes to log files and data logging?"
  5. David Schwartz

    Forward declarations

    As I said earlier, if you have circular references, you need to rethink your approach. If a "person" can have an "address" as a property, and an "address" can have a "person" as a property, you've got a problem.

If you think of this in normal "in real life" terms, sure, a "person" DOES have an "address". But more abstractly, you can say a "person" has a "residence address". And a "person" could also have a "work address", a "previous residence/work address", and so forth. Now, an "address" usually does not "have a person" -- it could have MANY such "persons" associated with it, each with a different role: owner, landlord, manager, tenant, maintenance people, etc. So if you look at this more realistically, your simplified explanations may be misleading you in your architecture.

Also, you seem to be bumping up against the general notions of "inheritance" vs. "composition", or "Is-A" vs. "Has-A". As Einstein once said, you cannot solve a problem at the same level of logic that caused it.
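To make that concrete, here's a small sketch of how it might be modeled (all names invented). A forward declaration handles the one-way reference, and the "roles" idea becomes an association object, so neither class needs to own the other:

uses
  System.Generics.Collections;

type
  TAddress = class;   // forward declaration: TPerson can reference it below

  TPerson = class
  public
    Name: string;
    HomeAddress: TAddress;   // person HAS-A address (composition)
    WorkAddress: TAddress;
  end;

  // An address doesn't "have a person" -- it has occupants, each in a role.
  TOccupantRole = (orOwner, orLandlord, orManager, orTenant, orMaintenance);

  TOccupancy = class
  public
    Person: TPerson;
    Role: TOccupantRole;
  end;

  TAddress = class
  private
    FOccupants: TObjectList<TOccupancy>;
  public
    constructor Create;
    destructor Destroy; override;
    property Occupants: TObjectList<TOccupancy> read FOccupants;
  end;

constructor TAddress.Create;
begin
  inherited;
  FOccupants := TObjectList<TOccupancy>.Create(True);  // owns the role records
end;

destructor TAddress.Destroy;
begin
  FOccupants.Free;
  inherited;
end;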
  6. David Schwartz

    Delphi-Zeos-SQLite3-TDBEdit

    Your post is a bit vague, so I'll just post the obvious in case you need it: you need to hook the dataset (Zeos query or table) to a TDataSource, then hook the TDBEdit to that TDataSource.
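In code, the wiring looks roughly like this (a sketch -- component, table, and field names are placeholders):

// Assumes ZConnection1: TZConnection, ZQuery1: TZQuery,
// DataSource1: TDataSource, and DBEdit1: TDBEdit already exist
// (dropped on the form or created at run time).
ZQuery1.Connection := ZConnection1;   // the TZConnection to your SQLite3 file
ZQuery1.SQL.Text   := 'select id, name from customers';   // made-up table
ZQuery1.Open;

DataSource1.DataSet := ZQuery1;       // dataset -> datasource

DBEdit1.DataSource := DataSource1;    // datasource -> control
DBEdit1.DataField  := 'name';         // which column the edit box shows/edits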
  7. David Schwartz

    any suggestions on how to "lazy load" a BLOB/CLOB from Oracle DB?

    I think you're still missing the problem. Varchar2 fields represent baseline performance. It takes ~2 seconds to load 100% of the tables into memory using "select * from Xxxx" on ALL FOUR TABLES. It's about 10 MB of data. With this approach, I'm only missing about 2% of the data, because of the small number of records that have > 4000 characters in those fields.

When I simply replace even one Varchar2 field in a table with a CLOB field, the time to load "select * from Xxxx" goes up by nearly 60 seconds PER TABLE! That's 0.5 sec to 60 secs, just from changing one field from Varchar2 to CLOB. We're talking maybe 100kb more data in a few dozen records TOTAL. There is absolutely no reason that ten megabytes can be loaded in 2 seconds but an additional 100kb takes FOUR MINUTES!

I keep saying: the problem is NOT IN LOADING ALL OF THE RECORDS INTO MEMORY!!! Where is the overhead coming from in switching from a Varchar2 field to a CLOB field? THAT IS THE ISSUE!

I've worked with applications that use SQL Server and MySQL with tables that have Memo fields (equivalent to Oracle's CLOBs) that hold 100kb chunks of text -- tens of thousands of records -- and running "select * from Xxxx" takes a few seconds to load 100% of them into memory. We're talking a couple orders of magnitude LARGER databases than what I'm working with here -- and all I ever hear about Oracle is how it's supposedly so much faster and more efficient for HUGE databases. I'm sorry, but it's TOTALLY FAILING here on a teeny weenie change that should have zero impact on performance.

This is NOT NOT NOT related to "select * from Xxxx" loading the whole table into memory! Not when the entirety of all four tables is UNDER 10 MB! What accounts for the disparity between 2 sec and 4 minutes when the size of the data load is only 100kb more? It's NOT "load time"! Not when I've got several GBytes of free memory available!
  8. David Schwartz

    log file policies

    Thanks Arnaud, this is one "level" of detail I'm curious about. Another thing I'd like to see is ... what kinds of stuff gets logged? There are some devs who set up log files to only capture errors. Some capture more. Context is often a big clue to tracking down programming errors. Do you have a policy to make an effort to recover from exceptions, or just log them and cancel whatever was happening and move on? When you log errors, do you log just the exception messages, or do you log contextual data as well? (I like to use MadExcept because it grabs a whole bunch of useful contextual info that's sometimes quite helpful.) I realize everybody is different, but what I'm asking is as a general practice, what kinds of policies have you guys seen? (If you get sued and someone alleges your program was throwing errors left and right for months and months, but the log files show no unexpected activities that can be proven DID in fact occur ... what's your defense?)
  9. David Schwartz

    any suggestions on how to "lazy load" a BLOB/CLOB from Oracle DB?

    Ok, forget all this crap. You guys are so far off in the weeds that it's ridiculous. Thanks, but let's refocus. The root problem I'm trying to deal with is the performance hit that occurs when varchar2 fields are replaced with CLOB fields. That's all it is. Everything flows from that.

Here's the deal... I have two pairs of tables that represent master-detail relationships. They're very similar, but not identical. The data I'm working with is such that I pre-process it ONCE to load up the master tables and also initialize a lot of the records in the detail tables. The two "master" tables have fewer than 500 records each, and the two "detail" tables won't ever have more than 1500 records. (Not "millions"!) The users will use the program to go through the data, making comments, setting other data flags, and doing things that even weak minds are far quicker at figuring out than code.

As it happens, there's one field in all four tables that has a very wide range in terms of its length: 95% of the records are under 1k, and 4% are under 4k, all of which fit very happily inside of a single varchar2 field (limited to 4000 chars). But 1% are > 4k and a dozen are > 8k. This forces me to either use CLOBs or employ some crazy-ass paging scheme using 3 varchar2 fields (which I haven't tried yet).

The problem is that when I change this field in each table from varchar2 to a CLOB, the TOTAL TIME to simply call Open() on all four tables goes from about two (2) seconds to almost four (4) minutes -- one minute per table. We're talking about only a few hundred records! Given that 99% of the data in those fields is the exact same length (i.e., <4k) whether it's in a varchar2 or a CLOB, the performance impact clearly has something to do with how CLOBs are managed internally, not the tiny number (1%) of really long fields. (And it's NOT data transfer time to the client, because again, <1% of the data is >4k in length.)

I'm taking this approach simply to allow me to use data-aware controls so I don't have to manually manage all of the loads and stores whenever the cursor changes. Maybe someone has a better idea. But please stop talking about nonsense inherited from the BDE and swapping queries at run-time. At its core, this is a very simple problem, and I'm just trying to find a simple solution so the program doesn't take so damn long to initialize (open the tables) when it starts up.
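One direction that might count as "simple enough" (a sketch only -- I haven't verified it against TOracleDataset, and the names are invented): keep the CLOB column out of the initial Open() entirely by selecting a varchar2-sized preview with dbms_lob.substr, so no LOB locators are involved, and fetch the full value only on demand:

// Initial open: no CLOB column in sight -- dbms_lob.substr returns a
// varchar2, so this should behave like the all-varchar2 baseline.
// (master_tbl and big_text are made-up names.)
qryMaster.SQL.Text :=
  'select id, title, ' +
  '       dbms_lob.substr(big_text, 4000, 1) as big_text_preview, ' +
  '       dbms_lob.getlength(big_text)       as big_text_len ' +
  'from   master_tbl';
qryMaster.Open;

// Full CLOB only when the user actually lands on a record that needs it:
if qryMaster.FieldByName('big_text_len').AsInteger > 4000 then
begin
  qryOneClob.SQL.Text := 'select big_text from master_tbl where id = :id';
  qryOneClob.ParamByName('id').AsInteger :=
    qryMaster.FieldByName('id').AsInteger;
  qryOneClob.Open;
  FullText := qryOneClob.FieldByName('big_text').AsString;  // FullText: string
  qryOneClob.Close;
end
else
  FullText := qryMaster.FieldByName('big_text_preview').AsString;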
  10. David Schwartz

    Forward declarations

    If you have circular references, then you need to re-think your design.
  11. David Schwartz

    any suggestions on how to "lazy load" a BLOB/CLOB from Oracle DB?

    That doesn't solve the problem I'm having. The initial Open() on the table uses "select * from <table>" and the presence of the CLOB field has a ginormous performance impact when it runs the Open(). We're talking 1-2 seconds vs. nearly a full minute! A grid gets loaded with a few of the columns. When you click the grid, it selects the entire record. It's at THAT MOMENT that I want to load the CLOB data into a TDBMemo. It's like lazy-loading just that column, and then just the records as they're selected. The grid in this case is a DevEx cxGrid.
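Concretely, the hook point would be something like this (a sketch -- the event is DevExpress's OnFocusedRecordChanged as I remember its signature, and the query/field names are placeholders):

// Fired by the cxGridDBTableView whenever the focused row changes.
procedure TfrmMain.tvMainFocusedRecordChanged(Sender: TcxCustomGridTableView;
  APrevFocusedRecord, AFocusedRecord: TcxCustomGridRecord;
  ANewItemRecordFocusingChanged: Boolean);
begin
  if AFocusedRecord = nil then
    Exit;

  // The grid's dataset is already positioned on the clicked record,
  // so pull its key and open a one-row query for just that CLOB.
  qryClob.Close;
  qryClob.ParamByName('id').AsInteger :=      // qryClob.SQL = 'select big_text
    qryMain.FieldByName('id').AsInteger;      //   from xyz where id = :id'
  qryClob.Open;
  // dbmBigText is a TDBMemo whose DataSource points at qryClob, so it
  // fills itself in as soon as the query opens -- no manual copying.
end;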
  12. David Schwartz

    any suggestions on how to "lazy load" a BLOB/CLOB from Oracle DB?

    Yup, that's mine.
  13. In that case, I wouldn't call it "paranoid code". There are people who are taught that you want to employ certain practices to make your code "bullet-proof" or at least "safe". Unfortunately, they either weren't taught, or didn't understand, the reasons WHY or what's really going on.

I'm working on this huge code base right now and it's got lots of try ... except clauses throughout it. Unfortunately, in many places, the except clause simply logs the exception's standard error message to a log file then just goes wherever it goes. The log file is absolutely useless in this case, because there's insufficient information in the typical exception error message to allow you to figure anything out. So we've got lots of nice code designed to safely capture exceptions and log them, but the information being logged is useless. And even worse (IMHO) is that the code makes no effort to recover!

Many if not most of these exceptions are the result of a try ... except block that I suspect was just wrapped around a huge block of existing code. No effort was made to narrow down the scope of the exceptions. So there are potentially dozens of exceptions that could occur in many different places. The info logged in the file is useless because there are no line numbers, no variable names, no context information at all. And in most cases in THIS particular bunch of code, the logic could easily be adapted to recover from the errors if it did any useful amount of testing and validating things.

I mention this b/c we recently had to deal with one of the apps that started generating a ton of errors. They were all silently logged in the log file, and nobody had a clue anything was awry until a customer called and said there was a problem with some of their billings. We started looking through the log files and saw that there was a GPF occurring regularly that had started one day a few weeks earlier. Nobody had a clue! And we stared at the errors in the log file scratching our heads trying to figure out what in the heck might be wrong. All we could tell was there was a GPF occurring in one of the units.

That's why I always lobby to include something like MadExcept with everything. This app hasn't been rebuilt in ages (until recently), so nobody even thought about configuring it in.
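For contrast, here's the shape I lobby for (just a sketch -- Billings, ProcessOneBilling, Log, and MarkFailed are all made-up names): narrow the protected block, log enough context to actually diagnose the failure, and make an explicit recovery decision.

// What I keep finding: one giant protected block, context-free logging.
//   try
//     ProcessAllBillings;
//   except
//     on E: Exception do
//       Log(E.Message);   // "Access violation at 004F2A10" ... now what?
//   end;

// Narrower scope, contextual logging, explicit recovery:
var
  i: Integer;
begin
  for i := 0 to Billings.Count - 1 do
  begin
    try
      ProcessOneBilling(Billings[i]);
    except
      on E: Exception do
      begin
        // Log WHAT we were doing, not just what blew up.
        Log(Format('ProcessOneBilling failed on item %d of %d (id=%s): [%s] %s',
          [i + 1, Billings.Count, Billings[i].ID, E.ClassName, E.Message]));
        Billings[i].MarkFailed;   // recover: skip this one, keep going
      end;
    end;
  end;
end;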
  14. Does anybody happen to have an example of how to Serialize/Deserialize a TObjectDictionary with JSON? Most of the examples I've found are for several versions ago, and I've heard that the JSON classes keep improving. There seem to be two or three different approaches, none of which strike me as very straightforward. The class I'm using in this case contains two strings and a TObjectList of simple objects that contain two integers (similar to TPoints).
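For reference, this is the kind of hand-rolled approach I'm hoping to avoid (or at least compare against) -- a sketch using the Rio-era System.JSON classes, where TPt is a stand-in for my two-integer objects:

uses
  System.JSON, System.Generics.Collections;

type
  TPt = class
    X, Y: Integer;
    constructor Create(AX, AY: Integer);
  end;

constructor TPt.Create(AX, AY: Integer);
begin
  X := AX;
  Y := AY;
end;

// Serialize: each dictionary key becomes a JSON property name.
function DictToJSON(Dict: TObjectDictionary<string, TPt>): string;
var
  Root, O: TJSONObject;
  Pair: TPair<string, TPt>;
begin
  Root := TJSONObject.Create;
  try
    for Pair in Dict do
    begin
      O := TJSONObject.Create;
      O.AddPair('x', TJSONNumber.Create(Pair.Value.X));
      O.AddPair('y', TJSONNumber.Create(Pair.Value.Y));
      Root.AddPair(Pair.Key, O);   // Root now owns O
    end;
    Result := Root.ToJSON;
  finally
    Root.Free;
  end;
end;

// Deserialize: walk the pairs and rebuild the objects.
procedure JSONToDict(const S: string; Dict: TObjectDictionary<string, TPt>);
var
  Root: TJSONObject;
  P: TJSONPair;
begin
  Root := TJSONObject.ParseJSONValue(S) as TJSONObject;
  if Root = nil then
    Exit;   // not valid JSON
  try
    Dict.Clear;
    for P in Root do
      Dict.Add(P.JsonString.Value,
        TPt.Create(P.JsonValue.GetValue<Integer>('x'),
                   P.JsonValue.GetValue<Integer>('y')));
  finally
    Root.Free;
  end;
end;

If the dictionary is created as TObjectDictionary<string, TPt>.Create([doOwnsValues]), the rebuilt objects get freed for you when the dictionary goes away.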
  15. Just following up ... I ended up doing something fairly simple to de/serialize a few of my classes. Nothing special, and it turned out to be quite easy. But I've been looking over the Grijjy code and it looks like the Grijjy.ProtocolBuffers unit is perfect for what I need. I may go back and replace what I'd done. I could have also used BSON because there's no need to be able to read the serialized data. Thanks for the suggestions. 🙂
  16. David Schwartz

    Releasing memory devoted to arrays

    My experience with FastMM and the additional diagnostic tools it offers (you need to download the source and use it in your program) tells me that most of the time, memory errors are like landmines planted way earlier, and when they blow up it can be quite random as to who or what triggers them.

Here's an analogy: using static arrays is like building a house. One simply does not call a friend to increase or decrease the size of a room with a quick refresh. If something is blowing up in a room, it's because a previous tenant did something he should not have done and left a time-bomb -- not the current tenant. You can stare at the current tenant all you want; you're not likely to find the error. Something else is crapping over some memory unexpectedly. That's about all we know right now.
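If it helps anyone, enabling those diagnostics looks like this (assuming the full FastMM4 sources from GitHub; the option names are from FastMM4Options.inc):

// 1) In FastMM4Options.inc, enable:
//      {$define FullDebugMode}
//    (and optionally {$define LogMemoryLeakDetailToFile}).
//    FullDebugMode also needs FastMM_FullDebugMode.dll next to the EXE.
//
// 2) Make FastMM4 the FIRST unit in the .dpr so it installs itself
//    before anything else allocates memory:

program MyApp;   // hypothetical project

uses
  FastMM4,   // must come first
  Vcl.Forms,
  MainFormUnit in 'MainFormUnit.pas' {frmMain};

begin
  Application.Initialize;
  Application.CreateForm(TfrmMain, frmMain);
  Application.Run;
end.

With FullDebugMode on, freed blocks are filled with a recognizable pattern and double-frees and use-after-free errors get reported with stack traces -- exactly the kind of "landmine detector" I'm describing above.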
  17. David Schwartz

    Has the migration tool EVER worked for you?

    Not usually. It seems to miss some key things. I can see the differences it makes in the Registry, yet the IDE looks exactly the same after I run it as it did before. I end up spending more time trying to make it work and track down issues than it takes to just reorganize everything by hand.
  18. David Schwartz

    General DB access question -- paging query results

    I get your point, but I'm not sure I agree with your initial point. I've had this beef with Delphi for ages -- the entire IDE and even some language "features" (limitations, actually) are built around the same usage paradigm embraced with D2 for building Client/Server applications. When anybody mentions allowing things in Delphi like separately compiled classes (which C# supports), there's a lot of resistance from the peanut gallery saying, "Oh, this isn't pascalish!" and it never gets anywhere. The Delphi language and IDE have been calcified in this respect since they were born!

Consequently, C# and Visual Studio have evolved to the point where you can build apps that use some kind of MVC-like architecture (among others) that is supported to some extent within the IDE itself. Trying to build anything other than the regular Client/Server style app in Delphi becomes quite a chore, and you're fighting the IDE at nearly every step. You're right in your characterization of Delphi vs. C# devs, but not for lack of innovative thinking. The simple approach in Delphi is like falling off a log, while a more innovative approach that's reasonable to take in C# is far too convoluted for most Delphi users to even consider, since it goes against everything the Delphi environment has been optimized for over time.

In a world where Client/Server architectures are pretty much dead, the wizards in charge of Delphi's future continue to crank out release after release with features that make it quicker and easier to build ever more complex Client/Server apps. If you want to build a REST-based server in Delphi, they offer a solution by providing a pre-built server framework that can be extended by creating plug-in modules (DLLs, basically), but the IDE and components still have no knowledge of the dynamics inherent in such designs. And there's no particular benefit of one server over another -- the IDE sees them all as foreign objects that require the programmer to manipulate directly.

I keep trying to avoid specific DACs in this discussion, in part because we're using ASTA components in our app, and over time the programmers have written code that's highly specific to ASTA's features. AND because the Delphi IDE only really knows one paradigm (i.e., Client/Server), the code is infused at every possible level with business logic and DB actions that have always been seen as perfectly legitimate for C/S apps, but they cannot survive a transformation to a more modern server-based architecture where all of the business logic and DB interactions happen at the other end of something like an HTTP connection (e.g., REST requests). It's really easy to SAY, "Oh, just replace ASTA with ____", but that's a HUGE project in its own right, regardless of whatever ____ might be. There's still the matter of extricating all of the business logic and embedded SQL from the client code and moving it to the server side. (This also presumes that everybody sees these issues as a problem from the outset.)

Yes, you're right. It's really seductively easy to drop a few components on a form in Delphi and have a simple client/server type app running in an hour or two. When we can build an MVC-like app that uses, say, a REST-based service, all within the Delphi IDE just as quickly, then we'll have made some real progress.

Knowing how such a thing is structured and what's needed is one thing; having to build it within the Delphi IDE is like swimming upstream against a strong current -- something most of us prefer to avoid, and something that's very hard to justify to Management from a complexity standpoint. It's hard enough finding good Delphi devs. But having a crazy complex application architecture that nobody can figure out easily doesn't help. These things are fairly common in the C#/VS ecosystem, not in Delphi's world.
  19. David Schwartz

    Rio has a broken REST Library

    That'll teach you not to use a fresh-out-of-the-oven compiler release on a time-sensitive project! In this case, I'd have stuck with what I had the most confidence in. So your hackathon experience turned into a bug-a-thon for EMBT. 🙂
  20. David Schwartz

    Binding TObjectList to TListView

    Do you want to use the TListView just for viewing the contents of the TObjectList? Or do you want to treat it like a DB-aware component that lets you edit what's in the list as well? Those are going to be very different solutions. The former is rather simple; the latter can be quite complex depending on how you want to approach it.
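For the simple view-only case, it's just a loop -- a sketch, where TCustomer stands in for whatever your TObjectList holds, and the columns are assumed to be set up at design time:

uses
  System.Generics.Collections, Vcl.ComCtrls;

type
  TCustomer = class
    Name, City: string;
  end;

procedure FillListView(LV: TListView; List: TObjectList<TCustomer>);
var
  C: TCustomer;
  Item: TListItem;
begin
  LV.ViewStyle := vsReport;
  LV.Items.BeginUpdate;
  try
    LV.Items.Clear;
    for C in List do
    begin
      Item := LV.Items.Add;
      Item.Caption := C.Name;       // first column
      Item.SubItems.Add(C.City);    // second column
      Item.Data := C;               // back-pointer for later lookups
    end;
  finally
    LV.Items.EndUpdate;
  end;
end;

The editable, DB-aware-style case is where it gets complex -- you'd be looking at LiveBindings or a virtual/owner-data approach, and keeping edits in sync with the list yourself.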
  21. David Schwartz

    Extracting SQL from Delphi code and DFMs

    We've got a ton of SQL queries embedded in a Delphi app that we've decided to move to the server where they belong. Around 700 or so of them. Does anybody have any ideas about how to most easily extract the SQL from Delphi code that generates it at run-time with a bunch of string concatenation before stuffing it into the .SQL.Text property, or even via repetitive SQL.Add(...) calls? Also, what's the best way to extract them from SQL properties in DFMs where they're defined at design-time? The "obvious" approach in my mind is to use some kind of editing, like regular expressions, to remove the common syntactic sugar needed to manipulate a bunch of string constants used to build up a big string. Another thought was to use some kind of scripter to actually process the Delphi code in a way that generates the strings dynamically, then grab them from the buffer that way (i.e., use an interpreter to execute just the limited lines of code that produce the longer strings, rather than running the whole app). Any ideas?
  22. David Schwartz

    Extracting SQL from Delphi code and DFMs

    Thanks. I actually got something working that uses the old TRegExpr component and a few simple re's to extract the Object and SQL.Strings data from my DFMs. Then I wrote a simple parser routine to take the DFM's representation of the SQL.Strings and restore it as a "clean" SQL expression.

function postincr( var i : integer ) : integer;
begin
  Result := i;
  i := i + 1;
end;

. . .

procedure GetDatasetInfo( aMatchNum : integer );
var
  p : integer;
  s0, s1, s2 : string;
  dataset_nm, dataset_type : string;
  re2 : TRegExpr;
  naked_query : string;
begin
  s0 := WholeObject_re.Match[0];        // this is the whole object definition
  if (Pos('SQL.Strings', s0) < 1) then  // no SQL.Strings property -> not interested
    Exit();

  s1 := WholeObject_re.Match[1];        // just [name: type]
  p := Pos( ':', s1 );
  dataset_nm := Copy( s1, 1, p-1 );
  dataset_type := Trim(Copy( s1, p+1, 99 ));

  if (s0 <> '') and (s1 <> '') then
  begin
    re2 := TRegExpr.Create( 'SQL.Strings = \(.*\)$', 'ims-xg' );
    try
      s2 := '';
      if re2.Exec(s0) then
        s2 := re2.Match[0];             // matches the whole SQL.Strings expression
      naked_query := ExtractSQL( s2 );
      query_sl.Text := naked_query;
      . . . // do something with the above data . . .
    finally
      re2.Free();
    end;
  end;
end;

//***********************************************************************
// this RE finds object definitions in the DFM that contain 'dataset' in the
// type name. Subexpression #1 is the object's definition,
// e.g.: "qry: TAstaClientDataset"
WholeObject_re := TRegExpr.Create( 'object *(\S*: *\S*dataset)\s*$.*?end\s*$', 'ims-xgr' );
if WholeObject_re.Exec(ss.DataString) then
begin
  // NOTE: there's only ONE SQL.Strings property per dataset / WholeObject
  GetDatasetInfo(postincr(n));
  while WholeObject_re.ExecNext() do
    GetDatasetInfo(postincr(n));
end;

Here's the one for extracting SQL.Strings from a string consisting of just an Object:

with TRegExpr.Create( 'SQL.Strings = \(.*\)$', 'ims-xg' ) do
begin
  if Exec(dm1.Datasets_tblWholeObject.AsString) then
    aMemo.Lines.Text := Match[0];
  Free();
end;

And here's the ExtractSQL parser:

// This function extracts raw SQL from a SQL.Strings property
function Tdm1.ExtractSQL( aSQL_String : string ) : string;
  //-----------------------------------------
  function incr( var n : integer ) : integer;
  begin
    n := n + 1;
    Result := n;
  end;
  //-----------------------------------------
var
  ln, n : integer;
  trimln, t2 : string;
  add_next_line : Boolean;
  rslt : TStringList;
  SQL_String : TStringList;
begin
  rslt := TStringList.Create();
  SQL_String := TStringList.Create();
  try
    SQL_String.Text := aSQL_String;
    add_next_line := False;
    t2 := '';
    ln := 0;
    repeat
      trimln := Trim(SQL_String[ln]);

      if (ln = 0) and (CompareText( trimln, 'SQL.Strings = (' ) = 0) then
        Continue;

      if (trimln = '''''') then      // it's just a blank line
      begin
        rslt.Add('');
        Continue;
      end;

      if (trimln = '') then
        Continue;

      if (trimln[1] = '''') then     // a single leading quote
        Delete( trimln, 1, 1 );

      n := Length(trimln);
      repeat
        if (trimln = '') then
          Break;
        case trimln[n] of
          '''':
            begin
              // '   Where bill_order in (1,2,3)'
              // '   and pti.pi_id = rfl.pi_id(+) '
              if (trimln[n-1] = ')') then          // ....)'
              begin
                Delete( trimln, n, 1 );
                Break;
              end;
              Delete( trimln, n, 1 );
              trimln := TrimRight(trimln);
              n := Length(trimln);
            end;

          ')':
            begin
              // '   and en_id = :pENID')
              if (trimln[n-1] = '''') then         // ....')
              begin
                Delete( trimln, n-1, 2 );          // remove ')
                Break;
              end;
              // '   and pti.pi_id = rfl.pi_id(+) ' with [_'] at end removed
              if ((trimln[n-2] = '(') and (trimln[n-1] = '+')) then
                Break;                             // nothing needs to be done here
              Delete( trimln, n, 1 );
              trimln := TrimRight(trimln);
              n := Length(trimln);
            end;

          '+':
            begin
              // this is the most common pattern that's used for line-splitting:
              // the ' is typically in col 74 and the + in col 76
              // ...' +
              if (trimln[n-2] = '''') and (trimln[n-1] = ' ') then
              begin
                Delete( trimln, n-2, 3 );
                add_next_line := True;
              end
              // '  Where udf.active = '#39'T'#39
              // '  AND udf.ud_type = '#39'E'#39)
              else if (Copy(trimln, n-4, 3) = '#39') then
              begin
                Delete( trimln, n-1, 2 );          // we don't want to delete the #39
                add_next_line := True;
              end
            end;
        else
          Break;
        end;
      until (1 = 2);

      if add_next_line then
      begin
        t2 := t2 + trimln;
        add_next_line := False;
      end
      else
      begin
        if (t2 <> '') then
          trimln := t2 + trimln;
        rslt.Add( trimln );
        t2 := '';
      end;
    until (incr(ln) >= SQL_String.Count);
  finally
    Result := rslt.Text;
    rslt.Free();
    SQL_String.Free();
  end;
end;

If the program ever hangs while parsing this stuff, it's probably because I missed something in the handling of a trailing single-quote or '+' that deals with undoing the word-wrapping. (This code could probably be simplified. There may even be better code hiding somewhere in the VCL source.)
  23. David Schwartz

    GetIt pkg mgr problems

    In D10.2.3, when I try to open GetIt in the IDE, I get an error: "invalid type cast". It's in GetIt250.bpl. Is there any way to access the files and download them directly? I'm looking for Raize Components (Konopka).
  24. David Schwartz

    General DB access question -- paging query results

    As I said, this code has been working "as designed" for over a decade, long before I came on the scene, and I have not touched it. The problem existed before I arrived. How can I be doing something wrong?

I don't know how many customers are complaining or are unhappy with it. I've only been told about two who have voiced their complaints. Personally, if a form took over one minute to load up, I'd ditch the software. But these folks seem to be much more tolerant than I am.

The architectural defect I'm talking about has been pointed out several times above: we have datasets that have to run SQL queries phrased as "select * from ..." because that's how things were written in D7 ten years ago, and there's no paging going on anywhere. These queries ran very quickly when the file server was in the same building and there were only a year or two of historical data present. But today they literally take several minutes (some up to 15!) to run on the remote server, processing over a decade of data (they're really hairy queries that apparently aren't time-boxed), and many return over 100,000 rows of data to the client before the user is allowed to view the first 10 and do anything.

As I said, this is working 100% as-designed! Nobody considers it a "bug" in any traditional sense. But clearly, 10-15 minute response times -- while not a "bug" -- are extremely "unfriendly" from the users' standpoint. Explaining this in terms of Delphi's DACs or the server is fruitless -- it's the lack of paging in the queries that's causing the problems. This is an architectural defect, not a "bug" in anything, since it's all working perfectly correctly and "as-designed".
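(To be clear about terms, "paging" here just means bounding the query instead of "select * from ..." -- a sketch with made-up table and column names; the OFFSET/FETCH syntax needs Oracle 12c+ or SQL Server 2012+:)

// Instead of pulling 100,000+ rows up front:
//   qryOrders.SQL.Text := 'select * from orders_hist';
// fetch one page at a time and let the user step through:
qryOrders.SQL.Text :=
  'select order_id, customer_nm, order_dt ' +
  'from   orders_hist ' +
  'order  by order_dt desc ' +
  'offset :page_start rows fetch next :page_size rows only';
qryOrders.ParamByName('page_start').AsInteger := PageNo * PageSize;
qryOrders.ParamByName('page_size').AsInteger  := PageSize;   // e.g., 50
qryOrders.Open;   // returns PageSize rows, not a decade of history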
  25. David Schwartz

    General DB access question -- paging query results

    My company has some clients who are complaining that a few of our screens are taking over 10 minutes to load up. This code has been around for over 10 years, far longer than I've been working there, so I know it's not me. (It's not code I've ever touched.) In fact, it's working perfectly "as designed" and there are no bugs present. It is actually an "architectural defect". Regardless, can I have them give you a call so you can tell them that this is a non-existent problem?