Dmitry Onoshko
Members
Content Count: 37
Joined
Last visited
Everything posted by Dmitry Onoshko
-
Generic container and pointer to it in its elements
Dmitry Onoshko posted a topic in Algorithms, Data Structures and Class Design
Consider a container:

  type
    TCustomItem = class
    protected
      FContainer: // ???
    end;

    TCustomContainer<T: TCustomItem> = class abstract
    protected
      FItems: TArray<T>;
      ...
      procedure HandleItemNotification(AItem: T);
    end;

The items are supposed to be classes derived from a common ancestor, in parallel with the containers:

  type
    TFooItem = class(TCustomItem);
    TFooContainer = class(TCustomContainer<TFooItem>);
    TBarItem = class(TCustomItem);
    TBarContainer = class(TCustomContainer<TBarItem>);

An item should store a pointer to its container for notification purposes, but TFooItem should never be used with TBarContainer or vice versa. I seem to get closer when I declare TCustomItem as an inner class:

  type
    TCustomContainer<T: TCustomItem> = class abstract
    public type
      TCustomItem = class
      protected
        FContainer: TCustomContainer<T>;
      end;
    protected
      FItems: TArray<T>;
      ...
      procedure HandleItemNotification(AItem: T);
    end;

But then, when I call

  procedure TCustomContainer<T>.TCustomItem.SomeMethod;
  begin
    ...
    FContainer.HandleItemNotification(Self);
    ...
  end;

inside a TCustomItem method, I get “Incompatible types: T and UnitName.TCustomContainer<T>.TCustomItem”. Is this even possible? -
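One workaround (my own sketch, not confirmed in the thread) is to route the notification through a non-generic base container class, so the item can hold a typed reference, and downcast once inside the generic class. The names TCustomContainerBase, DoHandleItemNotification and Notify are mine:

```delphi
type
  TCustomItem = class;

  // Non-generic base so items can hold a typed container reference.
  TCustomContainerBase = class abstract
  protected
    procedure HandleItemNotification(AItem: TCustomItem); virtual; abstract;
  end;

  TCustomItem = class
  protected
    FContainer: TCustomContainerBase;
  public
    procedure Notify;
  end;

  TCustomContainer<T: TCustomItem> = class abstract(TCustomContainerBase)
  protected
    FItems: TArray<T>;
    // Descendant-facing, strongly typed entry point.
    procedure DoHandleItemNotification(AItem: T); virtual; abstract;
    procedure HandleItemNotification(AItem: TCustomItem); override;
  end;

procedure TCustomItem.Notify;
begin
  if FContainer <> nil then
    FContainer.HandleItemNotification(Self);
end;

procedure TCustomContainer<T>.HandleItemNotification(AItem: TCustomItem);
begin
  // Safe in practice: the container only ever stores T instances.
  DoHandleItemNotification(T(AItem));
end;
```

This trades one internal downcast for working code; Foo/Bar mixing is still prevented at the public API level as long as only the typed methods of TCustomContainer<T> accept items.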
Two versions of a unit with different $DEFINEs in the same project
Dmitry Onoshko posted a topic in General Help
I guess I know the answer, but maybe I’m missing something. The question arose in relation to unit testing with DUnitX, but it is not really tied to it.

Say we have a Delphi unit shared between multiple projects in a project group, and certain code pieces in the unit are wrapped in $IFDEFs to exclude them from one of the projects and include them for the rest (say, for performance reasons, or to strip some stuff from a program intended for outside use). The code pieces in question are part of quite complex algorithm(s), so unit testing is a must. I understand one can just create another build configuration for the DUnitX project, or even create a separate test project, but that’s not quite convenient: being able to test the whole unit in all its variants with a single test project would be much better.

Another use case I can imagine is something like supporting different versions of some protocol or format in the same executable without the risk of introducing bugs by merging several implementations into a single one (although that could probably be done by giving the units for different versions different names). So, let’s just stick to testing an $IFDEF-dependent unit as a whole in one test project as the use case. I guess the linker and separate unit compilation are part of the problem here, but… Is that even possible? -
Two versions of a unit with different $DEFINEs in the same project
Dmitry Onoshko replied to Dmitry Onoshko's topic in General Help
I feel $INCLUDEing full units in an implementation section might not work, but I think I get the idea. And I’d agree with you that this would make things trickier than they should be, as the unit would have to be torn into pieces for $INCLUDEing, all of that only to test it. -
In an application where certain images from an image list (a TVirtualImageList attached to a TImageCollection) should be used depending on the state of something, how would you avoid hardcoding the image index values? Some of the images are to be used in TAdvStringGrid, which only takes an image index; among the other images there are ones chosen depending on state which, if their values are in order, allow the image index to be computed easily without nested ifs and such.

One option I know of is to use image names instead. But that means an additional lookup every time the image index is required, which is costly in some cases. And the names are somewhat lengthy and feel wrong, especially when grouping is used in the TImageCollection: say, 'Input/Output\Save', where 'Input/Output' is the category name and 'Save' is the actual image name, but both pieces have to be used as the image name.

Alternatively, the values retrieved by converting image names to image indices can be saved in variables to use later. That doesn’t feel smart either, since the variables will hold values that are otherwise known at compile time, so they are in fact constants. But just declaring constants and setting their values appropriately causes them to become wrong whenever the contents of the image list change.
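A middle ground (my sketch, not from the thread) is to resolve the names once at startup and cache the indices, so the per-paint cost is a field read; this assumes TCustomImageList.GetIndexByName is available (recent VCL versions) and all image names below are hypothetical:

```delphi
uses
  Vcl.VirtualImageList;

type
  TStateImages = record
    SaveIdx: Integer;      // cached index of a single named image
    StateBaseIdx: Integer; // base index; states occupy consecutive slots
    procedure Resolve(AImages: TVirtualImageList);
    function IndexForState(AState: Integer): Integer;
  end;

procedure TStateImages.Resolve(AImages: TVirtualImageList);
begin
  // One name lookup per image at startup instead of one per paint.
  SaveIdx := AImages.GetIndexByName('Input/Output\Save');
  StateBaseIdx := AImages.GetIndexByName('States\State0');
end;

function TStateImages.IndexForState(AState: Integer): Integer;
begin
  Result := StateBaseIdx + AState; // relies on the states being in order
end;
```

Calling Resolve again (e.g. from a change notification) keeps the cached values correct when the image list contents change, which plain constants cannot do.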
-
Until it’s a grid with millions of rows each one having its icon attached?
-
I have a large set of data to insert into a MySQL/MariaDB table. For now I’ve decided to use FireDAC’s array DML feature, so I create a TFDStoredProc, feed it all the data and call its Execute method (all of it in a TTask to avoid hanging the UI).

Now, when the user wants to close the application, the task should obviously be interrupted to prevent the application from hanging around in the process list for quite a long time. So, from the UI thread (say, TForm.OnCloseQuery, it doesn’t really matter) I call TFDConnection.AbortJob on the connection I assigned to the TFDStoredProc before executing it. The problem is that AbortJob raises an exception:

So the array DML query doesn’t get aborted, and the program stays hanging until TFDStoredProc.Execute returns. Has anyone ever had this problem? Any help is appreciated.
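One way to keep the insert interruptible even when AbortJob fails (my own sketch, not from the thread) is to run the array DML in chunks and check a cancel flag between chunks; the cancel flag and chunk size are my inventions, and Params.ArraySize is assumed to be filled with all ATotal rows beforehand:

```delphi
uses
  System.Math, FireDAC.Comp.Client;

procedure RunChunked(AProc: TFDStoredProc; ATotal: Integer;
  var ACancel: Boolean);
const
  ChunkSize = 1000; // rows per Execute call; tune as needed
var
  Offset: Integer;
begin
  Offset := 0;
  while (Offset < ATotal) and not ACancel do
  begin
    // Per the FireDAC docs, Execute(ATimes, AOffset) runs the command
    // for parameter rows AOffset .. ATimes - 1.
    AProc.Execute(Min(Offset + ChunkSize, ATotal), Offset);
    Inc(Offset, ChunkSize);
  end;
end;
```

The worker then reacts to a close request within one chunk’s worth of time; setting a Boolean from the UI thread is a simplification here, a proper implementation would use an atomic or an event.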
-
Well, my problem is not asynchronous query execution but the problem of aborting it. AbortJob seems to work for ordinary queries (or maybe I just can’t catch the particular moment it would fail) while for array DML it fails quite often.
-
Distributing application that connects remote MySQL/MariaDB servers
Dmitry Onoshko posted a topic in Databases
So, there’s a program that uses FireDAC to connect to remote MySQL/MariaDB servers. It was written and debugged against a MariaDB installation, and the only thing it seemed to require was libmariadb.dll. Until it came to putting the program on a computer with no MySQL/MariaDB installation at all and asking it to connect to a remote MySQL 8 server: it suddenly turned out it requires caching_sha2_password.dll as well. Copying that into the application folder actually worked, but… On another user’s computer I tried to use libmysql.dll instead, and it asked for libssl-xxx.dll and libcrypto-xxx.dll. Which is actually mentioned in the FireDAC documentation, but I can’t seem to find the information anywhere on the MySQL/MariaDB websites.

The problems:

1) At any point in time, whenever end users decide to upgrade their DBMS, the application might start requiring some libwhatever-they-choose-to-add-next.dll. This might be inevitable unless the MySQL/MariaDB developers stay sane enough to think about such use cases.

2) I can’t really be sure the DLLs I’ve dealt with so far will be enough once and for all.

3) Using the exact version-compatible DLLs might be best, but then how would I distribute the application without knowing the MySQL/MariaDB server version in advance?

Please share any recommendations on how to make a DBMS client application work without disturbing end users with unnecessary errors. Thanks in advance. -
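A small mitigation (my sketch, assumption only) is to probe for the client library at startup and show a friendly message instead of failing on the first connect:

```delphi
uses
  Winapi.Windows;

// Returns True when the given client library can be loaded from the
// usual DLL search path (application folder first).
function ClientLibraryAvailable(const ALibName: string): Boolean;
var
  H: HMODULE;
begin
  H := LoadLibrary(PChar(ALibName));
  Result := H <> 0;
  if Result then
    FreeLibrary(H);
end;

// Usage (hypothetical):
//   if not ClientLibraryAvailable('libmariadb.dll') then
//     ShowMessage('The MariaDB client library is missing.');
```

Note the caveat: this cannot detect plugins the client library loads lazily at authentication time (caching_sha2_password.dll being exactly such a case), so it reduces, rather than eliminates, the surprises.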
Distributing application that connects remote MySQL/MariaDB servers
Dmitry Onoshko replied to Dmitry Onoshko's topic in Databases
But that’s the problem: when the server version is not under developer’s control, how would one supply necessary DLLs? I guess, the best ones are those that come with the server installation itself. But making a user replace the DLLs that come with the application during server upgrade is not really an option. -
Distributing application that connects remote MySQL/MariaDB servers
Dmitry Onoshko replied to Dmitry Onoshko's topic in Databases
But, if so, this creates a potential problem like the one we used to have when MySQL changed default authorization mechanism and the programs that used older versions of the protocol failed to connect. -
Distributing application that connects remote MySQL/MariaDB servers
Dmitry Onoshko replied to Dmitry Onoshko's topic in Databases
Will that really help to avoid providing DLLs next to the application in some tricky way? -
Dynamic array field of base type declared in descendants
Dmitry Onoshko posted a topic in Algorithms, Data Structures and Class Design
What I would like to achieve is along the lines of:

  type
    TAncestor = class
      FData: TArray<???>;
      // ...
      // A lot of methods that load, export and change FData
      // in a way common to all the descendants (mostly Length
      // and SetLength are used)
      //
      // Differences in managing FData are extracted into
      // protected virtual methods
      // ...
    end;
    TAncestorClass = class of TAncestor;

    TDescendant1 = class(TAncestor)
    type
      TItem = ... // Some type
      // ...
      // Virtual methods that do descendant-specific things
      // are overridden here
      // ...
    end;

    TDescendant2 = class(TAncestor)
    type
      TItem = ... // Another type
      // ...
      // Virtual methods that do descendant-specific things
      // are overridden here
      // ...
    end;
    ...

The TItem-specific code is all in the descendants; TAncestor just implements the general management of how and when to resize FData and provides the boilerplate for loading, exporting and performing container-level changes on the data that would otherwise be duplicated in every descendant. This could probably be achieved with generics (descending from TAncestor<T> with a particular T for each descendant), but that approach fails to support metaclasses and nested types. Am I missing something, or is this not possible? -
Dynamic array field of base type declared in descendants
Dmitry Onoshko replied to Dmitry Onoshko's topic in Algorithms, Data Structures and Class Design
Meanwhile, one thing came to mind, along the lines of:

  type
    TAncestor = class
      // All the stuff goes here, but no FData definition.
      // Pieces of code that use FData directly are
      // virtual abstract methods here.
    end;
    TAncestorClass = class of TAncestor;

    TAncestor<T> = class(TAncestor)
      FData: TArray<T>;
      // Override the FData-related methods common to all classes
      // in the hierarchy here.
    end;

    TDescendant1Item = ...
    TDescendant1 = class(TAncestor<TDescendant1Item>)
      // ...
    end;

    TDescendant2Item = ...
    TDescendant2 = class(TAncestor<TDescendant2Item>)
      // ...
    end;

This removes duplicate code from the descendants but adds a lot of TXxxxItem type identifiers to the global namespace. Having a way to hide them would be great, as the types belong to a particular class’s internal implementation only. -
Dynamic array field of base type declared in descendants
Dmitry Onoshko replied to Dmitry Onoshko's topic in Algorithms, Data Structures and Class Design
This would turn the whole “dataset” into an array of pointers to a lot of dynamically-allocated pieces. Not quite cache-friendly, the overhead of allocations, etc. The reason to use the dynamic array in the first place was to store data in a single block of memory having all the benefits of good old PODs. -
Handling TCP send buffer overflow the right way
Dmitry Onoshko posted a topic in Network, Cloud and Web
I’ve recently become really worried about TCP send buffer overflows. I use TServerSocket and TClientSocket in asynchronous mode, and I’m going to send file data interleaved with smaller packets through them. What really scares me and doesn’t fit well into the whole picture is the case when SendBuf returns –1 (or anything else not equal to the size of the buffer being sent). I basically have two questions:

1) What is the proper way to detect a client that just doesn’t read the data from its socket (say, because it hung)?

2) How should one handle the case when SendBuf says the buffer hasn’t been sent?

In general I understand I should retry SendBuf a little later, better yet in the OnClientWrite event handler. To do that I’d have to use some sort of queue and put the data there. But then I think about the client that doesn’t read data: since there are shorter messages, not only file blocks, the queue might start growing and the logic gets really complicated. Any thoughts are appreciated. -
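The usual pattern for non-blocking sends (my own sketch, all names invented) is a per-client byte queue: send what the stack accepts, keep the rest, drain the backlog from OnClientWrite, and treat a queue that exceeds a cap as a stuck peer to disconnect:

```delphi
uses
  System.SysUtils, System.Win.ScktComp;

type
  TSendQueue = class
  private
    FBuf: TBytes;     // unsent bytes, in order
    FLimit: Integer;  // cap after which the peer is considered stuck
  public
    constructor Create(ALimit: Integer);
    // Queues AData and sends what it can; False means "disconnect".
    function Send(ASocket: TCustomWinSocket; const AData: TBytes): Boolean;
    // Call from OnClientWrite to drain the backlog.
    procedure Flush(ASocket: TCustomWinSocket);
  end;

constructor TSendQueue.Create(ALimit: Integer);
begin
  inherited Create;
  FLimit := ALimit;
end;

function TSendQueue.Send(ASocket: TCustomWinSocket;
  const AData: TBytes): Boolean;
begin
  FBuf := Concat(FBuf, AData); // XE7+ dynamic array intrinsic
  Flush(ASocket);
  Result := Length(FBuf) <= FLimit;
end;

procedure TSendQueue.Flush(ASocket: TCustomWinSocket);
var
  Sent: Integer;
begin
  while Length(FBuf) > 0 do
  begin
    Sent := ASocket.SendBuf(FBuf[0], Length(FBuf));
    if Sent <= 0 then
      Break; // buffer full (WSAEWOULDBLOCK); wait for OnClientWrite
    Delete(FBuf, 0, Sent); // XE7+ dynamic array intrinsic
  end;
end;
```

Because the queue holds raw bytes in send order, short messages and file blocks interleave naturally; the cap (plus, optionally, a timestamp of the last successful send) is what detects the client that never reads.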
Handling TCP send buffer overflow the right way
Dmitry Onoshko replied to Dmitry Onoshko's topic in Network, Cloud and Web
I wasn't suggesting you not destroy your objects. But a destructor is not always the best place to perform complex cleanups, especially during app shutdown. For instance, you could use the Form's OnClose/Query event to detect the user's desire to exit the app, signal your code to start cleaning up (close sockets, cancel I/Os, etc.), and disable the UI so the user can't generate more traffic in the meantime. When the cleanup is actually finished (all I/O completions are reported, etc.), then exit the app.

Ah, sorry, I must have misunderstood your idea. What I tried to address in my previous post was the idea of asking a socket to shut down and letting it destroy itself automatically (without an explicit destructor call).

Back to your point, I feel that requiring a particular method call before destroying an object is asking for trouble some time later. If one suddenly decides to call a destructor, it should clean everything up without any preconditions. As for calling a destructor only at a particular moment in time, like before application termination: changing network app settings at runtime is a thing, and that implies actually shutting down sockets while the app is still running, possibly with other sockets serving other ports/protocols/tasks but using the same IOCP and such. Disabling the related UI parts while the restart takes place might be OK, though. -
Displaying partially downloaded images (while downloading)
Dmitry Onoshko posted a topic in General Help
Are there any ready-to-use solutions for Delphi to display partially downloaded images? Like web browsers do over slow internet connections. I tried to load a few truncated versions of a PNG file into TImage (and, so, TPicture under the hood) and it fails on CRC check. Since this stuff is highly format-dependent (say, displaying a partially downloaded BMP is quite easy, for PNG it’s quite tricky unless it has multiple IDAT chunks, while for JPEG with its different versions this might be a real pain to implement) implementing such a feature looks like a separate project. So, I wonder if anyone knows libraries where it’s already done. (Of course, it’s not that critical for an application, and in most cases the image being downloaded can be replaced with a progress bar. I’m just curious.) -
Handling TCP send buffer overflow the right way
Dmitry Onoshko replied to Dmitry Onoshko's topic in Network, Cloud and Web
Well, not destroying an object explicitly feels like a code smell. At least, it requires more attention when reviewing code for possible leaks and such. So, yes, stopping an overlapped socket is almost easy, especially since the only way to guarantee that pieces of a TCP stream are processed in the right order is to have at most one outstanding Tx and Rx operation per socket.

But then events come into play. Being notified of client disconnection is useful for bookkeeping, and that implies either synchronizing with the GUI thread (which is waiting in the destructor for pending requests to finish) or making an event that gets invoked from an arbitrary thread. The second way might not be that bad, really, but it felt too complicated back then. I also remember trying to make the wait for pending operations alertable (so Synchronize works), but that also feels somewhat wrong.

Frankly speaking, the ease of event handling provided by TServerSocket and TClientSocket and the simplicity of the code, classes and documentation (as compared to Indy, in fact) led me to making the wrong choice. Although at least I made a minimum viable “product” (sort of) with them. Just to see how bad they are for the task. -
Handling TCP send buffer overflow the right way
Dmitry Onoshko replied to Dmitry Onoshko's topic in Network, Cloud and Web
Well, I gave it a try at the beginning of the project, with hand-written IOCP-based overlapped sockets. But the problem of properly shutting them down when the user just invokes the destructor from a GUI thread, plus lack of spare time for brave experiments made me switch to something asynchronous at hand till better times. Now I feel the better times will have to come sooner than expected. -
Handling TCP send buffer overflow the right way
Dmitry Onoshko replied to Dmitry Onoshko's topic in Network, Cloud and Web
Yep, I understand that; that’s why I wrote they should have used some value like DWORD(–1), or MAXINT, or MAXLONG, or something like that in the first place, since that’s quite a common, let me call it so, design pattern. If my memory serves, Raymond Chen called similar things sentinel values.

Well, since for now I’m with TServerSocket and its friends, it’s the one that calls WSAStartup, and it initializes to v1. If I call WSAStartup myself to be the first one, it might work, but the classes might break, ’cause certain things have definitely changed between major WinSock versions. For instance, ScktComp uses Winapi.Winsock, which defines SOMAXCONN as 5. Now if I’m the first to initialize WinSock in the application, I get a backlog of 5 clients instead of whatever the OS could choose. Some things have also changed around Vista times in the treatment of SO_REUSEADDR.

So, for something like 10’000 clients we get 10’000 threads, each taking quite a piece of resources, like memory and handles? And they start competing for the poor processor cores (let’s hope 8 or 16, if the end user doesn’t try to use a PC instead of real server hardware)? -
Handling TCP send buffer overflow the right way
Dmitry Onoshko replied to Dmitry Onoshko's topic in Network, Cloud and Web
Well, maybe not so easy with sockets, but I remember hitting it when I was a student, writing a program that did some stuff in a separate thread and notified the UI by posting messages. Really easy.

Yep, I know. But (1) what if even part of the expected clients connect simultaneously, and (2) why would the value change from 5 to $7FFFFFFF between WinSock 1 and WinSock 2? I understand the second value should have been used since WinSock 1, but they probably had reasons to choose 5, and whether it will be treated as the real maximum when WSAStartup initializes to version 1 is a big question for me.

The point was to avoid using third-party libraries and to have asynchronous sockets. I’m still worried about the fact that it uses blocking sockets, so I’m not sure what effect that would have with tens of thousands of clients. And with the need for DB access when serving the requests. I understand it doesn’t (shouldn’t?) use a thread-per-client approach, but… -
Handling TCP send buffer overflow the right way
Dmitry Onoshko replied to Dmitry Onoshko's topic in Network, Cloud and Web
I’ve just tried it and it seems I get no exception when trying to Open a TServerSocket on a busy port. In debugger I can see both bind and listen calls somehow succeeded. WinSock < 2.0 stuff, maybe? I’m somewhat worried about the maximum size of the Windows message queue. AFAIR, it was something around tens of thousands of messages by default (at least, in WinXP), and even if it has been increased since then, tens of thousands of clients are expected for the program, so (1) saving a few messages seems to be a good idea, (2) I start feeling the good old library is not that good for the task, (3) especially since SOMAXCONN is 5 there, (4) why on earth there’s no other good socket library included with Delphi and (5) OMG, I’ll probably have to implement IOCP-based sockets myself handling all the quirks of multithreading, but then again DB access is required for most of the messages that get received, so serialization is still a thing, and… -
I’m using FireDAC with MariaDB. Since I have quite a lot of stuff done with stored procedures, I wrote a simple function like this:

  function TMyDataModule.OpenProc(const AName: String;
    AArgs: array of Variant): TFDStoredProc;
  begin
    Result := TFDStoredProc.Create(nil);
    try
      Result.Connection := MyConnectionComponent;
      Result.Open(AName, AArgs);
    except
      Result.Free;
      raise;
    end;
  end;

In my database I have a table:

  CREATE TABLE `Data` (
    `ID` BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
    `When` TIMESTAMP NOT NULL DEFAULT current_timestamp(),
    `MyID` BIGINT(20) UNSIGNED NOT NULL,
    `Bool1` TINYINT(1) NOT NULL,
    `FOURCC` CHAR(4) NOT NULL COLLATE 'utf8mb4_unicode_ci',
    `Comment` TINYTEXT NOT NULL COLLATE 'utf8mb4_unicode_ci',
    `Arg1` TINYINT(3) UNSIGNED NOT NULL,
    `Arg2` TINYINT(3) UNSIGNED NOT NULL,
    `Arg3` TINYINT(3) UNSIGNED NOT NULL,
    `Arg4` TINYINT(3) UNSIGNED NOT NULL,
    PRIMARY KEY (`ID`) USING BTREE
  ) COLLATE='utf8mb4_unicode_ci' ENGINE=InnoDB;

… and a stored procedure:

  CREATE DEFINER=`root`@`localhost` PROCEDURE `MyProc`(
    IN `aMyID` BIGINT,
    IN `aBool1` TINYINT(1),
    IN `aFOURCC` CHAR(4),
    IN `aComment` TINYTEXT,
    IN `aArg1` TINYINT UNSIGNED,
    IN `aArg2` TINYINT UNSIGNED,
    IN `aArg3` TINYINT UNSIGNED,
    IN `aArg4` TINYINT UNSIGNED
  )
  LANGUAGE SQL
  NOT DETERMINISTIC
  CONTAINS SQL
  SQL SECURITY DEFINER
  COMMENT ''
  BEGIN
    INSERT INTO `Data`(`MyID`, `Bool1`, `FOURCC`, `Comment`,
                       `Arg1`, `Arg2`, `Arg3`, `Arg4`)
    VALUES(aMyID, aBool1, aFOURCC, aComment, aArg1, aArg2, aArg3, aArg4);
  END

When I try to invoke MyProc, I get this error: “Out of range value for column 'aArg2' at row 0”. In the Delphi debugger I see AArgs containing perfectly valid values: a number, a boolean, a 4-character string, an empty string for aComment and four integers perfectly within the range 0..255. But the OpenProc call fails. I used Wireshark to see what data actually goes onto the wire: the byte values are what they should be. The same procedure with the same argument values runs just fine from HeidiSQL. But HeidiSQL seems to use simple text queries, while FireDAC sends the arguments in their binary representation. What could be the reason for, and the fix to, the problem? Thanks.
-
Out of range value for column TINYINT UNSIGNED
Dmitry Onoshko replied to Dmitry Onoshko's topic in Databases
P.S. Turns out the values of AArgs matter: those were (124, 152, 10, 2). I tried (127, 0, 0, 0) and it worked. Then I tried (128, 0, 0, 0) and it failed with a similar error, about aArg1 this time. When I inspected AArgs in the debugger by adding TVarData(AArgs[...]) to the watches, I saw the aArgX array items were of type Byte inside the Variant. But somehow FireDAC seems to manage to ignore both the UNSIGNED part on the SQL side and the unsignedness of the Variants on the client side. -
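A workaround along these lines (my own sketch, not confirmed in the thread) is to widen small ordinal Variants to Integer before handing them to FireDAC, so values above 127 are not bound as a signed 8-bit parameter; WidenArgs is a hypothetical helper:

```delphi
uses
  System.SysUtils, System.Variants;

// Widen small ordinal Variants so FireDAC binds them as 32-bit
// integers instead of (signed) TINYINT.
function WidenArgs(const AArgs: array of Variant): TArray<Variant>;
var
  I: Integer;
begin
  SetLength(Result, Length(AArgs));
  for I := 0 to High(AArgs) do
    case VarType(AArgs[I]) of
      varByte, varShortInt, varWord, varSmallInt:
        Result[I] := Integer(AArgs[I]);
    else
      Result[I] := AArgs[I];
    end;
end;

// Usage (hypothetical): OpenProc('MyProc', WidenArgs([...]));
```

Since all the TINYINT UNSIGNED columns fit into an Integer with room to spare, widening loses nothing and sidesteps the sign interpretation entirely.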
Handling TCP send buffer overflow the right way
Dmitry Onoshko replied to Dmitry Onoshko's topic in Network, Cloud and Web
This is a long story! In short, the answer is yes: a loop is way better and will yield better throughput.

I don’t really understand this part. My point was that, to receive data, the loop is not the way to go (well, not the only one), since it would generally lead to several posted messages, and, when processing them, to a few useless recv calls if no data has arrived since then. But you seem to suggest using the loop anyway.

This is already done. After all, at least for TCP, knowing the amount of data available in advance has no benefit for the code: a “protocol message” might arrive in pieces and/or concatenated with the tail of the previous one and the beginning of the next one, so there still has to be some storage to be able to find the message boundaries. Unless, maybe, the protocol used is simple enough for the data stream to be parseable with a simple finite automaton byte by byte.
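The byte-by-byte idea can be sketched as a small state machine framing a hypothetical protocol (a 1-byte length prefix followed by the body; all names here are mine):

```delphi
uses
  System.SysUtils;

type
  TFrameState = (fsLength, fsBody);

  TFramer = class
  private
    FState: TFrameState;
    FLen: Integer;
    FGot: Integer;
    FBody: TBytes;
  protected
    procedure Dispatch; virtual; // override to handle a complete frame
  public
    procedure Feed(B: Byte);     // call for every received byte, in order
  end;

procedure TFramer.Dispatch;
begin
  // A complete message sits in FBody[0 .. FLen - 1].
end;

procedure TFramer.Feed(B: Byte);
begin
  case FState of
    fsLength:
      begin
        FLen := B;
        FGot := 0;
        SetLength(FBody, FLen);
        if FLen = 0 then
          Dispatch             // empty frame: stay in fsLength
        else
          FState := fsBody;
      end;
    fsBody:
      begin
        FBody[FGot] := B;
        Inc(FGot);
        if FGot = FLen then
        begin
          Dispatch;
          FState := fsLength;
        end;
      end;
  end;
end;
```

Because the automaton carries its state between calls, it doesn’t matter how recv slices the stream: feeding it any mix of partial and concatenated chunks produces the same sequence of dispatched messages.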