Dmitry Onoshko
Two versions of a unit with different $DEFINEs in the same project
Dmitry Onoshko replied to Dmitry Onoshko's topic in General Help
I feel that $INCLUDEing full units into an implementation section might not work, but I think I get the idea. And I’d agree with you that this makes it trickier than it should be: the unit would have to be torn into pieces just to make it $INCLUDEable, and all of that only to test it.
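For the record, a sketch of how I understand the suggestion; the file names, the FULL_FEATURES symbol and the exact split into include files are made up for illustration, not something prescribed in the thread:

  // SharedAlgo_intf.inc and SharedAlgo_impl.inc hold the shared declarations
  // and bodies, with the {$IFDEF FULL_FEATURES} sections inside them.

  // SharedAlgoFull.pas — variant with the symbol defined
  unit SharedAlgoFull;
  {$DEFINE FULL_FEATURES}
  interface
  {$INCLUDE SharedAlgo_intf.inc}
  implementation
  {$INCLUDE SharedAlgo_impl.inc}
  end.

  // SharedAlgoLite.pas — variant with the symbol left undefined
  unit SharedAlgoLite;
  {$UNDEF FULL_FEATURES} // in case the symbol is also defined at project level
  interface
  {$INCLUDE SharedAlgo_intf.inc}
  implementation
  {$INCLUDE SharedAlgo_impl.inc}
  end.

Since the two variants end up as distinct units, a single test project could, in principle, use both of them side by side.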
Two versions of a unit with different $DEFINEs in the same project
Dmitry Onoshko posted a topic in General Help
I guess I know the answer, but maybe I’m missing something. The question arose in connection with unit testing with DUnitX, but it isn’t really tied to it.

Say we have a Delphi unit shared between multiple projects in a project group, and certain pieces of code in the unit are wrapped in $IFDEFs so that they are excluded from one of the projects and included in the rest (say, for performance reasons, or to strip some functionality from a build intended for the outside world). The code pieces in question are part of quite complex algorithms, so unit testing is a must.

I understand one can just create another build configuration for the DUnitX project, or even create a separate test project, but that’s not quite convenient: being able to test the whole unit in all its variants with a single test project would be much better. Another use case I can imagine is supporting different versions of some protocol or format in the same executable without the risk of introducing bugs by merging several implementations into one (although that could probably be done by giving the units for the different versions different names).

So, let’s stick to testing an $IFDEF-dependent unit as a whole in one test project as the use case. I guess the linker and separate unit compilation are part of the problem here, but… is that even possible?
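To make the setup concrete, a tiny sketch of the kind of unit I mean; FULL_FEATURES and the function names are placeholders, not the real code:

  unit SharedAlgo;

  interface

  uses
    System.SysUtils;

  function Checksum(const AData: TBytes): Cardinal;
  {$IFDEF FULL_FEATURES}
  // Excluded from one of the projects, present in the rest
  function ChecksumReport(const AData: TBytes): string;
  {$ENDIF}

  implementation

  function Checksum(const AData: TBytes): Cardinal;
  var
    B: Byte;
  begin
    Result := 0;
    for B in AData do
      Inc(Result, B);
  end;

  {$IFDEF FULL_FEATURES}
  function ChecksumReport(const AData: TBytes): string;
  begin
    Result := Format('sum=%u over %d bytes', [Checksum(AData), Length(AData)]);
  end;
  {$ENDIF}

  end.

The question is how to get both the FULL_FEATURES variant and the plain variant of such a unit into one DUnitX project.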
Generic container and pointer to it in its elements
Dmitry Onoshko posted a topic in Algorithms, Data Structures and Class Design
Consider a container:

  type
    TCustomItem = class
    protected
      FContainer: // ???
    end;

    TCustomContainer<T: TCustomItem> = class abstract
    protected
      FItems: TArray<T>;
      ...
      procedure HandleItemNotification(AItem: T);
    end;

The items are supposed to be classes derived from a common ancestor, in parallel with the containers:

  type
    TFooItem = class(TCustomItem);
    TFooContainer = class(TCustomContainer<TFooItem>);

    TBarItem = class(TCustomItem);
    TBarContainer = class(TCustomContainer<TBarItem>);

An item should store a pointer to its container for notification purposes, but TFooItem should never be used with TBarContainer or vice versa.

I seem to get closer when I declare TCustomItem as an inner class:

  type
    TCustomContainer<T: TCustomItem> = class abstract
    public type
      TCustomItem = class
      protected
        FContainer: TCustomContainer<T>;
      end;
    protected
      FItems: TArray<T>;
      ...
      procedure HandleItemNotification(AItem: T);
    end;

But then, when I call

  procedure TCustomContainer<T>.TCustomItem.SomeMethod;
  begin
    ...
    FContainer.HandleItemNotification(Self);
    ...
  end;

inside a TCustomItem method, I get “Incompatible types: T and UnitName.TCustomContainer<T>.TCustomItem”. Is this even possible?
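One direction I’m considering, sketched under the assumption that a single cast in the generic layer is acceptable: a non-generic container base exposes the notification hook, and the generic container translates it into the typed call. TCustomContainerBase, NotifyContainer and HandleTypedNotification are names made up for the sketch; it does not give full compile-time pairing of item and container types, it only centralizes the cast:

  type
    TCustomItem = class;

    // Non-generic base so an item can hold a container reference without knowing T
    TCustomContainerBase = class abstract
    protected
      procedure HandleItemNotification(AItem: TCustomItem); virtual; abstract;
    end;

    TCustomItem = class
    protected
      FContainer: TCustomContainerBase;
      procedure NotifyContainer;
    end;

    TCustomContainer<T: TCustomItem> = class abstract(TCustomContainerBase)
    protected
      FItems: TArray<T>;
      // The generic layer performs the single downcast; descendants see only T
      procedure HandleItemNotification(AItem: TCustomItem); override;
      procedure HandleTypedNotification(AItem: T); virtual; abstract;
    end;

  procedure TCustomItem.NotifyContainer;
  begin
    if FContainer <> nil then
      FContainer.HandleItemNotification(Self);
  end;

  procedure TCustomContainer<T>.HandleItemNotification(AItem: TCustomItem);
  begin
    HandleTypedNotification(AItem as T); // the as-cast compiles because T is class-constrained
  end;
-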
Until it’s a grid with millions of rows, each one having its own icon attached?
-
Well, my problem is not asynchronous query execution itself but aborting it. AbortJob seems to work for ordinary queries (or maybe I just haven’t caught the particular moment when it would fail), while for array DML it fails quite often.
-
In an application where certain images from an image list (a TVirtualImageList attached to a TImageCollection) should be used depending on the state of something, how would you avoid hardcoding the image index values? Some of the images are used in a TAdvStringGrid, which only takes an image index; among the other images there are those that are chosen depending on some state and which, if their index values come in order, allow the image index to be calculated easily without nested ifs and the like.

One option I know of is to use image names instead. But that means an additional lookup every time the image index is required, which is costly in some cases. And the names are somewhat lengthy and feel wrong, especially when grouping is used in the TImageCollection: say, 'Input/Output\Save', where 'Input/Output' is the category name and 'Save' is the actual image name, but both pieces have to be used as the image name.

Alternatively, the values obtained by converting an image name to an image index can be saved in variables to use later. That doesn’t feel smart either, since the variables end up holding values that are otherwise known at compile time, so they are in fact constants. And just declaring constants and setting their values appropriately makes them wrong whenever the contents of the image list change.
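A sketch of the “resolve once, reuse everywhere” variant mentioned above, assuming a name-to-index lookup such as GetIndexByName is available on the virtual image list; the form and component names are placeholders:

  var
    ImgSave: Integer = -1;   // resolved once at startup, used wherever an index is needed

  procedure TMainForm.FormCreate(Sender: TObject);
  begin
    // 'Input/Output\Save' is the collection-qualified name from the example above
    ImgSave := VirtualImageList1.GetIndexByName('Input/Output\Save');
    Assert(ImgSave >= 0, 'Image missing from the image list');
  end;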
-
I have a large set of data to insert into a MySQL/MariaDB table. For now I’ve decided to use FireDAC’s array DML feature, so I create a TFDStoredProc, feed it all the data and call its Execute method (all of that inside a TTask to avoid hanging the UI).

Now, when the user wants to close the application, the task should obviously be interrupted to keep the application from hanging around in the process list for quite a long time. So, from the UI thread (say, TForm.OnCloseQuery, it doesn’t really matter) I call TFDConnection.AbortJob on the connection I assigned to the TFDStoredProc before executing it. The problem is that AbortJob raises an exception, so the array DML query doesn’t get aborted and the program keeps hanging until TFDStoredProc.Execute returns.

Has anyone ever had such a problem? Any help is appreciated.
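For reference, a rough sketch of the pattern described above; the component names, the FRows record layout, the FImportTask field and the two-parameter stored procedure are assumptions, not the actual code:

  uses
    System.Threading; // TTask, ITask

  procedure TMainForm.StartImport;
  begin
    FImportTask := TTask.Run(
      procedure
      var
        I: Integer;
      begin
        FDStoredProc1.Params.ArraySize := Length(FRows);
        for I := 0 to High(FRows) do
        begin
          FDStoredProc1.Params[0].AsIntegers[I] := FRows[I].Id;
          FDStoredProc1.Params[1].AsStrings[I] := FRows[I].Name;
        end;
        // One batched round trip per ArraySize rows thanks to array DML
        FDStoredProc1.Execute(FDStoredProc1.Params.ArraySize, 0);
      end);
  end;

  procedure TMainForm.FormCloseQuery(Sender: TObject; var CanClose: Boolean);
  begin
    // The call that raises instead of cancelling the running batch
    FDConnection1.AbortJob;
  end;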
-
Distributing application that connects remote MySQL/MariaDB servers
Dmitry Onoshko replied to Dmitry Onoshko's topic in Databases
But that’s the problem: when the server version is not under the developer’s control, how would one supply the necessary DLLs? I guess the best ones are those that come with the server installation itself, but making a user replace the DLLs shipped with the application during a server upgrade is not really an option.
Distributing application that connects remote MySQL/MariaDB servers
Dmitry Onoshko replied to Dmitry Onoshko's topic in Databases
But, if so, this creates a potential problem like the one we used to have when MySQL changed its default authentication mechanism and programs that used older versions of the protocol failed to connect.
Distributing application that connects remote MySQL/MariaDB servers
Dmitry Onoshko replied to Dmitry Onoshko's topic in Databases
Will that really help avoid having to provide the DLLs next to the application in some tricky way?
Distributing application that connects remote MySQL/MariaDB servers
Dmitry Onoshko posted a topic in Databases
So, there’s a program that uses FireDAC to connect to remote MySQL/MariaDB servers. It’s written and debugged against a MariaDB installation, and the only thing it seemed to require was libmariadb.dll. Until it came to putting the program on a computer with no MySQL/MariaDB installation at all and asking it to connect to a remote MySQL 8 server: it suddenly turned out to require caching_sha2_password.dll as well. Copying it into the application folder actually worked, but…

On another user’s computer I tried to use libmysql.dll instead, and it asked for libssl-xxx.dll and libcrypto-xxx.dll. That is actually mentioned in the FireDAC documentation, but I can’t seem to find the information anywhere on the MySQL/MariaDB websites.

The problems:

1) At any point in time, whenever end users decide to upgrade their DBMS, the application might start requiring some libwhatever-they-choose-to-add-next.dll. This might be inevitable unless the MySQL/MariaDB developers stay sane enough to think about such use cases.

2) I can’t really be sure that the DLLs I have dealt with by now are enough once and for all.

3) Using DLLs of the exact compatible version might be best, but then how would I distribute the application without knowing the MySQL/MariaDB server version in advance?

Please share any recommendations on how to make a DBMS client application work without disturbing end users with unnecessary errors. Thanks in advance.
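One thing that at least makes the base client library deterministic is to ship a known DLL next to the EXE and point FireDAC’s driver link at it explicitly. A sketch, assuming a TFDPhysMySQLDriverLink component is available; it does not solve the extra plugin DLLs appearing with newer servers, it only controls which client library gets loaded:

  uses
    System.SysUtils, System.IOUtils;

  procedure TMainDataModule.DataModuleCreate(Sender: TObject);
  begin
    // Load exactly the client library shipped with the application,
    // not whatever happens to be found on the search path.
    FDPhysMySQLDriverLink1.VendorLib :=
      TPath.Combine(ExtractFilePath(ParamStr(0)), 'libmariadb.dll');
  end;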
Dynamic array field of base type declared in descendants
Dmitry Onoshko replied to Dmitry Onoshko's topic in Algorithms, Data Structures and Class Design
Meanwhile, one thing came to my mind. Along the lines of:

  type
    TAncestor = class
      // All the stuff goes here, but no FData definition.
      // Pieces of code that use FData directly are
      // virtual abstract methods here.
    end;

    TAncestorClass = class of TAncestor;

    TAncestor<T> = class(TAncestor)
      FData: TArray<T>;
      // Override the FData-related methods common to all classes
      // in the hierarchy here.
    end;

    TDescendant1Item = ...
    TDescendant1 = class(TAncestor<TDescendant1Item>)
      // ...
    end;

    TDescendant2Item = ...
    TDescendant2 = class(TAncestor<TDescendant2Item>)
      // ...
    end;

This lets me remove duplicate code from the descendants, but it adds a lot of TXxxxItem type identifiers to the global namespace. Having a way to hide them would be great, as those types exist only for a particular class’s internal implementation.
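If it helps, a small usage sketch of why the split appeals to me: the metaclass refers to the non-generic base, so class-reference code stays unaware of T. TPointItem, TPointData and the virtual constructor are assumptions of the sketch, not part of the declarations above:

  type
    TPointItem = record
      X, Y: Double;
    end;

    TPointData = class(TAncestor<TPointItem>)
      // descendant-specific overrides go here
    end;

  procedure LoadAny(AClass: TAncestorClass);
  var
    Data: TAncestor;
  begin
    // Works for any descendant, provided TAncestor declares a virtual
    // constructor for AClass.Create to dispatch to.
    Data := AClass.Create;
    try
      // ... drive the non-generic virtual interface of TAncestor here
    finally
      Data.Free;
    end;
  end;

  // e.g. LoadAny(TPointData);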
Dynamic array field of base type declared in descendants
Dmitry Onoshko replied to Dmitry Onoshko's topic in Algorithms, Data Structures and Class Design
This would turn the whole “dataset” into an array of pointers to a lot of dynamically allocated pieces. Not quite cache-friendly, plus the overhead of the allocations, etc. The reason for using a dynamic array in the first place was to store the data in a single block of memory and keep all the benefits of good old PODs.
Dynamic array field of base type declared in descendants
Dmitry Onoshko posted a topic in Algorithms, Data Structures and Class Design
What I would like to achieve is along the lines of:

  type
    TAncestor = class
      FData: TArray<???>;
      // ...
      // A lot of methods that load, export and change FData
      // in a way common to all the descendants (mostly Length
      // and SetLength are used).
      //
      // Differences in managing FData are extracted into
      // protected virtual methods.
      // ...
    end;

    TAncestorClass = class of TAncestor;

    TDescendant1 = class(TAncestor)
    type
      TItem = ... // Some type
      // ...
      // Virtual methods that do descendant-specific things
      // are overridden here.
      // ...
    end;

    TDescendant2 = class(TAncestor)
    type
      TItem = ... // Another type
      // ...
      // Virtual methods that do descendant-specific things
      // are overridden here.
      // ...
    end;

    ...

The TItem-specific code is all in the descendants; TAncestor just implements the general management of how and when to resize FData and provides the boilerplate for loading, exporting and performing container-level changes on the data that would otherwise be duplicated in every descendant.

This could probably be achieved with generics (descending from TAncestor<T> with a particular T for each descendant), but that approach fails to support metaclasses and nested types. Am I missing something, or is this not possible?
Handling TCP send buffer overflow the right way
Dmitry Onoshko replied to Dmitry Onoshko's topic in Network, Cloud and Web
I wasn't suggesting you not destroy your objects. But a destructor is not always the best place to perform complex cleanups, especially during app shutdown. For instance, you could use the form's OnClose/OnCloseQuery event to detect the user's desire to exit the app, signal your code to start cleaning up (close sockets, cancel I/Os, etc.), and disable the UI so the user can't generate more traffic in the meantime. When the cleanup is actually finished (all I/O completions are reported, etc.), then exit the app.

Ah, sorry, I must have misunderstood your idea. What I tried to address in my previous post was the idea of asking a socket to shut down and letting it destroy itself automatically (without an explicit destructor call).

Back to your point, I feel that requiring a particular method call before destroying the object is asking for trouble some time later. If one suddenly decides to call a destructor, it should clean everything up without any preconditions. As for calling a destructor only at a particular moment in time, like before application termination: changing network application settings at runtime is a thing, and that implies actually shutting down sockets while the app is still running, possibly with other sockets serving other ports/protocols/tasks but sharing the same IOCP and related machinery. Disabling the related UI parts while the restart takes place might be OK, though.
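For what it’s worth, a minimal sketch of the “defer the close until the cleanup has drained” pattern described above; FShutdownComplete, DisableNetworkUI and BeginNetworkShutdown are hypothetical names, and the completion callback is assumed to arrive on a worker thread:

  procedure TMainForm.FormCloseQuery(Sender: TObject; var CanClose: Boolean);
  begin
    if FShutdownComplete then
      Exit;                      // cleanup already done, let the form close
    CanClose := False;           // keep the form alive for now
    DisableNetworkUI;            // stop the user from generating more traffic
    BeginNetworkShutdown(        // closes sockets, cancels outstanding I/O
      procedure                  // invoked once the last completion is reported
      begin
        TThread.Queue(nil,
          procedure
          begin
            FShutdownComplete := True;
            Close;               // re-enter the close path, now allowed through
          end);
      end);
  end;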