
Dmitry Onoshko

Everything posted by Dmitry Onoshko

  1. I work on a TCustomControl descendant that must paint itself with either plain GDI or Direct2D. I got somewhat stuck while implementing support for the Padding property, which is needed because the control contents might get scrolled, and having the contents drawn at the very edge of the control’s rectangle doesn’t seem like a good thing. Now, it looks like custom clipping is not readily available in either TCanvas or TDirect2DCanvas, to say nothing of the coordinate transformation adjustments that would make the actual content painting easier. While I do understand how to implement all the stuff by invoking GDI/Direct2D directly, I’m afraid I might have missed some piece of the VCL that could have already made the task easier and more straightforward. So, the question is: what is the recommended/supposed way to support the Padding property in a TCustomControl?
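As far as I can tell there is no ready-made VCL hook for this, so here is a minimal sketch of the manual GDI route, for reference only. It assumes a plain-GDI Paint override; FScrollX/FScrollY and PaintContent are hypothetical names standing in for the control’s scrolling state and actual drawing code:

```pascal
procedure TMyControl.Paint;
var
  SaveIndex: Integer;
  R: TRect;
begin
  // The content area is the client rect shrunk by the padding
  R := ClientRect;
  Inc(R.Left, Padding.Left);
  Inc(R.Top, Padding.Top);
  Dec(R.Right, Padding.Right);
  Dec(R.Bottom, Padding.Bottom);

  SaveIndex := SaveDC(Canvas.Handle);
  try
    // Keep all further output inside the padded area ...
    IntersectClipRect(Canvas.Handle, R.Left, R.Top, R.Right, R.Bottom);
    // ... and shift the origin so the content can be painted
    // from (0, 0), with scrolling taken into account
    SetViewportOrgEx(Canvas.Handle, R.Left - FScrollX, R.Top - FScrollY, nil);
    PaintContent;
  finally
    RestoreDC(Canvas.Handle, SaveIndex);
  end;
end;
```

For the Direct2D path the equivalents would be ID2D1RenderTarget.PushAxisAlignedClip and setting the transform on TDirect2DCanvas.RenderTarget, but that, too, is direct API work rather than something available at the TCanvas level.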
  2. I’m writing a TCustomControl descendant that implements a custom TPen-typed property for grid lines, as well as a few scalar ones. All of the properties are defined along the lines of:

        property Aspect: Single read FAspect write SetAspect;

     and then

        procedure TMyControl.SetAspect(const AValue: Single);
        begin
          if FAspect <> AValue then
          begin
            FAspect := AValue;
            Invalidate;
          end;
        end;

     The same thing is done for the TPen-typed property, except that Assign is used:

        procedure TMyControl.SetGridPen(const AValue: TPen);
        begin
          FGridPen.Assign(AValue);
          Invalidate;
        end;

     But when I do it this way, changing the GridPen property at design time doesn’t take effect until I unselect the control. (Obviously, FGridPen is created in the control’s constructor and isn’t destroyed until the destructor.) I looked at the implementation of TShape, and the only difference I could find is that TShape doesn’t call Invalidate directly in the property write method; instead it assigns an internal method to the pen’s OnChange event and Invalidates itself in that event handler. So I changed my code this way:

        procedure TMyControl.SetGridPen(const AValue: TPen);
        begin
          FGridPen.Assign(AValue);
        end;

        constructor TMyControl.Create(AOwner: TComponent);
        begin
          // ...
          FGridPen.OnChange := StyleChanged;
          // ...
        end;

        procedure TMyControl.StyleChanged(Sender: TObject);
        begin
          Invalidate;
        end;

     and that actually did the job: the control now repaints itself immediately when GridPen changes as well. But since I want to understand what actually happens and why the initial piece of code was wrong, I dug into the TPen implementation. And I can’t see what is so different between invalidating my control right after FGridPen.Assign and doing it in the StyleChanged handler, which is really called from inside the Assign method: it doesn’t look like anything should depend on the handler being assigned, and I doubt invalidation of a control goes differently depending on whether the pen is locked or unlocked inside the Assign method, etc.

     Please rub my nose in the difference.
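For reference, the working pattern in one piece. The likely explanation (my assumption, not something stated in the thread) is that the Object Inspector writes to the pen’s published sub-properties (Color, Width, ...) directly on the instance returned by the property getter, so SetGridPen never runs for those edits; only the pen’s own OnChange sees every change, including the ones made through Assign:

```pascal
constructor TMyControl.Create(AOwner: TComponent);
begin
  inherited Create(AOwner);
  FGridPen := TPen.Create;
  // Fires for any change to the pen, no matter who makes it
  FGridPen.OnChange := StyleChanged;
end;

destructor TMyControl.Destroy;
begin
  FGridPen.Free;
  inherited Destroy;
end;

procedure TMyControl.SetGridPen(const AValue: TPen);
begin
  FGridPen.Assign(AValue); // Assign triggers OnChange internally
end;

procedure TMyControl.StyleChanged(Sender: TObject);
begin
  Invalidate;
end;
```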
  3. Dmitry Onoshko

    TPen.OnChange and Invalidate in custom control at design-time

    Makes sense now, thanks.
  4. Dmitry Onoshko

    Component installation and paths

    I might be missing something pretty simple, but I feel the need to finally understand the whole thing. I have some (limited) experience of component creation in D7 (a few simple GUI controls), but I have trouble doing the job in D12. I create a DPK with a new component derived from TCustomControl and install the package. Then I start a new project and put an instance of the control onto the form. It works at design time (say, it repaints correctly when changing the inherited Color property from the Object Inspector), but when I try to compile the program, I get a “file not found” error pointing to the .pas with the control. Google says one might manually add an appropriate folder to the Library path in the IDE, but I feel there should be a way to make the IDE do it automatically on component (package) installation. At least I can’t remember having the same trouble with D7. Can anyone point me to or provide a good explanation (related to paths and stuff) of the whole ecosystem and best practices? (Information on whatever level of support the D12 IDE has for 64-bit components is also appreciated.)
  5. Dmitry Onoshko

    Component installation and paths

    Hmm, funny. I installed D7 and tried: it looks like the IDE added the path to the component automatically when the component package was created, but if I remove it from the library path while the package is uninstalled and then try to install the package back, the path is not there. I believe back when I was still at school I once installed dozens of component packages (mostly manually), and I can’t remember adding their paths manually either. Strange. Anyway, thanks for the clarification.
  6. Dmitry Onoshko

    Limit TAction shortcut to a particular control

    Suppose we have a form to edit information about some items. The form has a dozen TEdits and a TListView that represents file attachments for the item. Naturally, the TListView has a TPopupMenu assigned to it, and each popup menu item has a TAction assigned. Now, I want to assign shortcuts to the popup menu items: «Num +» to add a new file attachment, «Del» to remove selected attachments, etc. And here’s the problem: I obviously want Num + and Del to work this way only inside the TListView, since in TEdits these keys already have well-defined and frequently used meanings. Handling TForm.OnShortCut doesn’t help: neither value of the Handled parameter makes the VCL think the key should be passed on to the control. The obvious solution then is to just handle the keys in TListView.OnKeyDown and remove them from the TActions, but that would make the shortcuts disappear from the menu items. Which, in turn, could probably be solved by messing with TMenuItem.Caption and the #9 character (?), but that would also go wrong if the shortcut changes some day. Changing the shortcuts to something like «Ctrl+Num +» and «Ctrl+Del» makes them too complex for the case when the TListView is already focused, so that is not the best option either. I wonder, what is the best possible solution for such a case? Basically, what I need is a way to make a TAction shortcut local to a control (make the TAction fire only when the control is focused).
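One common approach (a sketch, not the thread’s confirmed answer): keep the ShortCut on the action so it still shows in the menu, but gate the action’s Enabled state on the list view having focus, in its OnUpdate handler. A disabled action ignores its shortcut, so Del typed in a TEdit behaves normally. Form, action and control names below are illustrative:

```pascal
procedure TItemForm.actDeleteAttachmentUpdate(Sender: TObject);
begin
  // Only let the action (and thus its Del shortcut) fire while
  // the attachments list is the focused control
  (Sender as TAction).Enabled := ListView1.Focused;
end;
```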
  7. Dmitry Onoshko

    TFDQuery editing fails randomly

    I’m trying to make a tiny UI program to manage an SQLite database. The database basically consists of the main table `Items` and a few additional tables to which `Items` has foreign keys. The `Items` table has several thousand records, gets loaded at program startup and is then displayed on the main form. For various reasons the following decisions were made:

    * The dataset displayed on the main form is along the lines of SELECT `Items`.*, SUM(…) FROM `Items` LEFT JOIN … ORDER BY `Items`.`ID`
    * The dataset is displayed in TAdvStringGrid in virtual mode (OnGetDisplText)
    * The dataset is a TFDQuery, and the OnGetDisplText event is handled by setting the dataset’s RecNo and returning Fields[ACol].AsString.

    Everything goes fine until I try to edit an item. The ItemEditor form uses plain controls (due to the lack of certain complex features in DB-aware controls). When the “Apply” button is clicked, the code executed is like this:

        with Dataset do // Dataset.RecNo is set to the currently edited record here
        begin
          Edit;
          FieldByName('Field1').AsString := …;
          FieldByName('Field2').AsDate := …;
          Post;
        end;

    Now I see strange behaviour. Sometimes the data in the dataset gets updated, sometimes not. FDMonitor shows that sometimes an UPDATE query (autogenerated, but valid for the changes made) gets executed, sometimes not. I tried to set CachedUpdates to False explicitly immediately after creating the TFDQuery, but it seems to make no difference. If that matters, the FireDAC units are only used in the DataModule; the form units see the dataset as a TDataSet (but I guess virtual methods should still do the job, right?). 1) Is using a TFDQuery as a local copy of the table data even viable? 2) What could cause the strange update behaviour? Any advice on how to implement such a simple CRUD program (with additional requirements not easily achievable with DB-aware controls) is also appreciated.

    UPD1: after removing all the debugging code I used, it looks like only the first update on the TFDQuery is actually done; the rest are just ignored. If that helps. UPD2: well, maybe not the first, but whenever the item being edited is not the one that was edited before (and not the first one for the first edit). UPD3: sorry, wrong information: the generated UPDATE statement is wrong, OLD_ID is always 30 for some reason instead of the real value.
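One thing that may be worth checking (a sketch, not a confirmed diagnosis): with a joined, aggregated SELECT, FireDAC has to guess the updating table and the key fields for the autogenerated UPDATE, and a wrong guess could produce exactly the kind of bogus OLD_ID seen in UPD3. The updating table and key can be pinned explicitly; the component name below assumes the dataset from the post:

```pascal
// Point the autogenerated UPDATE/INSERT/DELETE at the base table
// rather than at the join, and name the identifying field explicitly
FDQuery1.UpdateOptions.UpdateTableName := 'Items';
FDQuery1.UpdateOptions.KeyFields := 'ID';
FDQuery1.UpdateOptions.AutoIncFields := 'ID';
```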
  8. Consider a container:

        type
          TCustomItem = class
          protected
            FContainer: // ???
          end;

          TCustomContainer<T: TCustomItem> = class abstract
          protected
            FItems: TArray<T>;
            ...
            procedure HandleItemNotification(AItem: T);
          end;

     The items are supposed to be classes derived from the common ancestor, in parallel with the containers:

        type
          TFooItem = class(TCustomItem);
          TFooContainer = class(TCustomContainer<TFooItem>);
          TBarItem = class(TCustomItem);
          TBarContainer = class(TCustomContainer<TBarItem>);

     An item should store a pointer to its container for notification purposes. But TFooItem should never be used with TBarContainer, or vice versa. I seem to get closer when I declare TCustomItem as an inner class:

        type
          TCustomContainer<T: TCustomItem> = class abstract
          public type
            TCustomItem = class
            protected
              FContainer: TCustomContainer<T>;
            end;
          protected
            FItems: TArray<T>;
            ...
            procedure HandleItemNotification(AItem: T);
          end;

     But then, when I call

        procedure TCustomContainer<T>.TCustomItem.SomeMethod;
        begin
          ...
          FContainer.HandleItemNotification(Self);
          ...
        end;

     inside a TCustomItem method, I get “Incompatible types: T and UnitName.TCustomContainer<T>.TCustomItem”. Is this even possible?
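One workaround sketch (illustrative names, my assumption rather than an established pattern): declare the notification method against the inner base type instead of T, so Self is directly assignment-compatible. The pairing guarantee then rests on each concrete container only ever creating its own item type, since the constraint is loosened to T: class:

```pascal
type
  TCustomContainer<T: class> = class abstract
  public type
    TItemBase = class
    protected
      FContainer: TCustomContainer<T>;
      procedure NotifyContainer;
    end;
  protected
    FItems: TArray<T>;
    // Takes the inner base type, not T; overrides can hard-cast
    // the argument back to T where the specific type is needed
    procedure HandleItemNotification(AItem: TItemBase); virtual;
  end;

procedure TCustomContainer<T>.TItemBase.NotifyContainer;
begin
  FContainer.HandleItemNotification(Self); // Self is a TItemBase: compiles
end;

procedure TCustomContainer<T>.HandleItemNotification(AItem: TItemBase);
begin
  // Descendant-specific reaction goes into overrides
end;
```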
  9. I guess I know the answer, but maybe I’m missing something. The question arose in relation to unit testing with DUnitX but is not really tied to it. Say we have a Delphi unit shared between multiple projects in a project group, and certain code pieces in the unit are wrapped in $IFDEFs to exclude them from one of the projects and include them for the rest (say, for performance reasons, or to strip some stuff from a program intended for the outside world). The code pieces in question are part of quite complex algorithm(s), so unit testing is a must. I understand one can just create another build configuration for the DUnitX project, or even create a separate test project, but that’s not quite convenient: being able to test the whole unit in all its variants with a single test project would be much better. Another use case I can imagine is something related to supporting different versions of some protocol or format in the same executable without the risk of introducing bugs by merging several implementations into a single one (although this could probably be done by giving the units for different versions different names). So, let’s just stick to testing an $IFDEF-dependent unit as a whole in one test project as the use case. I guess the linker and separate unit compilation are part of the problem here, but… Is that even possible?
  10. Dmitry Onoshko

    Two versions of a unit with different $DEFINEs in the same project

    I feel $INCLUDEing full units in an implementation section might not work, but I think I get the idea. And I’d agree with you that this would make things trickier than they should be, since the unit would have to be torn into pieces for $INCLUDEing, all that only to test it.
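For completeness, the include-based variant that does tend to compile is splitting the unit into one .inc file per section and generating thin wrapper units around them, one per $DEFINE combination. A sketch with hypothetical file names:

```pascal
// ItemsLogic_Full.pas: a thin wrapper unit. A sibling
// ItemsLogic_Lite.pas would be identical minus the $DEFINE,
// and a single test project can use both wrappers at once.
unit ItemsLogic_Full;

interface

{$DEFINE FULL_FEATURES}
{$I ItemsLogic.int.inc}   // shared interface section body

implementation

{$I ItemsLogic.imp.inc}   // shared implementation section body

end.
```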
  11. Dmitry Onoshko

    How to avoid hardcoding image index?

    In an application where certain images from an image list (a TVirtualImageList attached to a TImageCollection) should be used depending on the state of something, how would you avoid hardcoding the image index values? Some of the images are to be used in TAdvStringGrid, which only takes an image index; among the other images there are those which are chosen depending on the state and, if they have values in order, allow for easy calculation of the image index without nested ifs and stuff. One option I know of is to use image names instead. But that means an additional lookup every time the image index is required, which is costly in some cases. And the names are somewhat lengthy and feel wrong, especially when grouping is used in the TImageCollection. Say, 'Input/Output\Save', where 'Input/Output' is the category name and 'Save' is the actual image name, but both pieces have to be used as the image name. Alternatively, the values retrieved by converting image names to image indices can be saved in variables to use later. That doesn’t feel smart either, since the variables will hold values that are otherwise known at compile time, so they are in fact constants. And just declaring constants and setting their values appropriately causes them to become wrong whenever the contents of the image list change.
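A middle-ground sketch (all names hypothetical): resolve the names to indices once, cache them in fields, and redo the lookup whenever the image list reports a change, so the cached values can’t silently go stale the way hardcoded constants do. ImageListChanged would be attached to the image list’s OnChange event:

```pascal
procedure TMainForm.CacheImageIndices;
begin
  // One lookup per name instead of one per painted cell;
  // TVirtualImageList resolves 'Category\Name' composite names
  FSaveImageIndex := VirtualImageList1.GetIndexByName('Input/Output\Save');
  FOpenImageIndex := VirtualImageList1.GetIndexByName('Input/Output\Open');
end;

procedure TMainForm.ImageListChanged(Sender: TObject);
begin
  CacheImageIndices; // re-resolve if the collection changes at runtime
end;
```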
  12. Dmitry Onoshko

    How to avoid hardcoding image index?

    Until it’s a grid with millions of rows each one having its icon attached?
  13. Dmitry Onoshko

    FireDAC array DML and AbortJob

    I have a large set of data to insert into a MySQL/MariaDB table. For now I decided to use FireDAC’s array DML feature, so I create a TFDStoredProc, feed it with all the data and call the Execute method (all of that in a TTask to avoid hanging the UI). Now, when the user wants to close the application, the task should obviously be interrupted to prevent the application from hanging in the process list for quite a long time. So, from the UI thread (say, TForm.OnCloseQuery, it doesn’t really matter) I call TFDConnection.AbortJob on the connection I’ve assigned to the TFDStoredProc before executing it. The problem is that AbortJob raises an exception. So, the array DML query doesn’t get aborted, and the program stays hanging until TFDStoredProc.Execute returns. Has anyone ever had such a problem? Any help is appreciated.
  14. Dmitry Onoshko

    FireDAC array DML and AbortJob

    Well, my problem is not asynchronous query execution but aborting it. AbortJob seems to work for ordinary queries (or maybe I just can’t catch the particular moment it would fail), while for array DML it fails quite often.
  15. So, there’s a program that uses FireDAC to connect to remote MySQL/MariaDB servers. It was written and debugged against a MariaDB installation, and the only thing it seemed to require was libmariadb.dll. Until it came to putting the program on a computer with no MySQL/MariaDB installation at all and asking it to connect to a remote MySQL 8 server: it suddenly turned out it requires caching_sha2_password.dll as well. Copying it to the application folder actually worked, but… On another user’s computer I try to use libmysql.dll instead, and it asks for libssl-xxx.dll and libcrypto-xxx.dll. Which actually is mentioned in the FireDAC documentation, but I can’t seem to find the information anywhere on the MySQL/MariaDB websites. The problems: 1) At any point in time, whenever end users decide to upgrade their DBMS, an application might start requiring some libwhatever-they-choose-to-add-next.dll. This might be inevitable unless the developers of MySQL/MariaDB stay sane enough to think about such use cases. 2) I can’t really be sure the DLLs I’ve dealt with by now are enough once and forever. 3) Using the exact version-compatible DLLs might be best, but then how would I distribute the application without knowing the MySQL/MariaDB server version in advance? Please share any recommendations on how to make a DBMS client application work without disturbing end users with unnecessary errors. Thanks in advance.
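One mitigation sketch: ship a known-good client library with the application and pin FireDAC to it via the driver link, so the app never depends on whatever DLLs a local server installation happens to provide (the component name is illustrative):

```pascal
// Force FireDAC to load the client library deployed next to the EXE
// instead of searching the PATH or a local server installation
FDPhysMySQLDriverLink1.VendorLib :=
  ExtractFilePath(ParamStr(0)) + 'libmariadb.dll';
```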
  16. Dmitry Onoshko

    Distributing application that connects remote MySQL/MariaDB servers

    But that’s the problem: when the server version is not under developer’s control, how would one supply necessary DLLs? I guess, the best ones are those that come with the server installation itself. But making a user replace the DLLs that come with the application during server upgrade is not really an option.
  17. Dmitry Onoshko

    Distributing application that connects remote MySQL/MariaDB servers

    But, if so, this creates a potential problem like the one we used to have when MySQL changed the default authentication mechanism and the programs that used older versions of the protocol failed to connect.
  18. Dmitry Onoshko

    Distributing application that connects remote MySQL/MariaDB servers

    Will that really help to avoid providing DLLs next to the application in some tricky way?
  19. What I would like to achieve is along the lines of:

        type
          TAncestor = class
            FData: TArray<???>;
            // ...
            // A lot of methods that load, export and change FData
            // in a way common to all the descendants (mostly Length
            // and SetLength are used)
            //
            // Differences in managing FData are extracted into
            // protected virtual methods
            // ...
          end;
          TAncestorClass = class of TAncestor;

          TDescendant1 = class(TAncestor)
          type
            TItem = ... // Some type
            // ...
            // Virtual methods that do descendant-specific things
            // are overridden here
            // ...
          end;

          TDescendant2 = class(TAncestor)
          type
            TItem = ... // Another type
            // ...
            // Virtual methods that do descendant-specific things
            // are overridden here
            // ...
          end;
          ...

     The TItem-specific code is all in the descendants; TAncestor just implements the general management of how and when to resize FData and provides boilerplate code for loading, exporting and performing container-level changes on the data that would otherwise be duplicated in every descendant. This could probably be achieved with generics (descending from TAncestor<T> with a particular T for each descendant), but that approach fails to support metaclasses and nested types. Am I missing something, or is this not possible?
  20. Meanwhile one thing came to my mind. Along the lines of:

        type
          TAncestor = class
            // All the stuff goes here, but no FData definition
            // Pieces of code that use FData directly are
            // virtual abstract methods here
          end;
          TAncestorClass = class of TAncestor;

          TAncestor<T> = class(TAncestor)
            FData: TArray<T>;
            // Override FData-related methods common to all classes
            // in the hierarchy here
          end;

          TDescendant1Item = ...
          TDescendant1 = class(TAncestor<TDescendant1Item>)
            // ...
          end;

          TDescendant2Item = ...
          TDescendant2 = class(TAncestor<TDescendant2Item>)
            // ...
          end;

     This removes the duplicate code from the descendants but adds a lot of TXxxxItem type identifiers to the global namespace. Having a way to hide them would be great, as the types are for a particular class’ internal implementation only.
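A minimal compilable version of that split, with a single GetCount standing in for the FData-related virtual methods (Integer as the item type is purely for illustration):

```pascal
type
  // Non-generic root: usable as a metaclass, knows nothing of T
  TAncestor = class abstract
  protected
    function GetCount: Integer; virtual; abstract;
  end;
  TAncestorClass = class of TAncestor;

  // Generic middle layer owns the storage and the common code
  TAncestor<T> = class abstract(TAncestor)
  protected
    FData: TArray<T>;
    function GetCount: Integer; override;
  end;

  TDescendant1 = class(TAncestor<Integer>)
  end;

function TAncestor<T>.GetCount: Integer;
begin
  Result := Length(FData);
end;
```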
  21. This would turn the whole “dataset” into an array of pointers to a lot of dynamically-allocated pieces. Not quite cache-friendly, the overhead of allocations, etc. The reason to use the dynamic array in the first place was to store data in a single block of memory having all the benefits of good old PODs.
  22. I’ve recently got really worried about TCP send buffer overflows. I use TServerSocket and TClientSocket in asynchronous mode, and I’m going to send file data interleaved with smaller packets through them. What really scares me and doesn’t fit well into the whole picture is the case when SendBuf might return –1 (or anything else not equal to the size of the buffer being sent). I basically have two questions: 1) What is the proper way to detect a client that just doesn’t read the data from its socket (say, because it got hung)? 2) How should one handle the case when SendBuf says the buffer hasn’t been sent? In general I do understand I should retry SendBuf a little later, better yet in the OnClientWrite event handler. To do that I’d have to use some sort of queue and put the data there. But then I think about the client that doesn’t read data: since there are shorter messages, not only file blocks, the queue might start growing and the logic gets really complicated. Any thoughts are appreciated.
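A sketch of the queue idea, assuming a per-client object and a modern Delphi (Delete on dynamic arrays needs XE7+); all names are illustrative. A cap on the total queued bytes, checked in Enqueue, would double as the stuck-client detector from question 1: a client whose backlog keeps growing past the limit can be disconnected:

```pascal
uses
  System.SysUtils, System.Generics.Collections, System.Win.ScktComp;

type
  TSendQueue = class
  private
    FQueue: TQueue<TBytes>;
    FPending: TBytes; // unsent tail of the packet currently going out
  public
    constructor Create;
    destructor Destroy; override;
    procedure Enqueue(const AData: TBytes);
    procedure TrySend(ASocket: TCustomWinSocket);
  end;

constructor TSendQueue.Create;
begin
  inherited Create;
  FQueue := TQueue<TBytes>.Create;
end;

destructor TSendQueue.Destroy;
begin
  FQueue.Free;
  inherited Destroy;
end;

procedure TSendQueue.Enqueue(const AData: TBytes);
begin
  FQueue.Enqueue(AData);
end;

// Call after Enqueue and again from OnClientWrite / OnWrite
procedure TSendQueue.TrySend(ASocket: TCustomWinSocket);
var
  Sent: Integer;
begin
  while (Length(FPending) > 0) or (FQueue.Count > 0) do
  begin
    if Length(FPending) = 0 then
      FPending := FQueue.Dequeue;
    Sent := ASocket.SendBuf(FPending[0], Length(FPending));
    if Sent <= 0 then
      Exit; // kernel buffer full: stop, retry on the next OnWrite
    Delete(FPending, 0, Sent); // keep whatever was not accepted
  end;
end;
```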
  23. Dmitry Onoshko

    Handling TCP send buffer overflow the right way

    I wasn't suggesting you not destroy your objects. But a destructor is not always the best place to perform complex cleanups, especially during app shutdown. For instance, you could use the Form's OnClose/Query event to detect the user's desire to exit the app, signal your code to start cleaning up (close sockets, cancel I/Os, etc), and disable the UI so the user can't generate more traffic in the meantime. When the cleanup is actually finished (all I/O completions are reported, etc), then exit the app.

    Ah, sorry, I must have misunderstood your idea. What I tried to address in my previous post was the idea of asking a socket to shut down and letting it destroy itself automatically (without an explicit destructor call). Back to your point, I feel that requiring a particular method call before destroying an object is like asking for trouble some time later. If one suddenly decides to call a destructor, it should clean up all the stuff without any conditions. As for calling a destructor only at a particular moment in time, like before application termination: changing network app settings at runtime is a thing, and this implies actually shutting down sockets while the app is still running, possibly with other sockets serving other ports/protocols/tasks but using the same IOCP and stuff. Disabling the related UI parts while the restart takes place might be OK, though.
  24. Are there any ready-to-use solutions for Delphi to display partially downloaded images? Like web browsers do over slow internet connections. I tried to load a few truncated versions of a PNG file into TImage (and, so, TPicture under the hood) and it fails on CRC check. Since this stuff is highly format-dependent (say, displaying a partially downloaded BMP is quite easy, for PNG it’s quite tricky unless it has multiple IDAT chunks, while for JPEG with its different versions this might be a real pain to implement) implementing such a feature looks like a separate project. So, I wonder if anyone knows libraries where it’s already done. (Of course, it’s not that critical for an application, and in most cases the image being downloaded can be replaced with a progress bar. I’m just curious.)
  25. Dmitry Onoshko

    Handling TCP send buffer overflow the right way

    Well, not destroying an object explicitly feels like a code smell. At least, it requires more attention when reviewing code for possible leaks and stuff. So, yes, stopping an overlapped socket is almost easy. Especially since the only way to guarantee that pieces of a TCP stream are processed in the right order is to have at most one outstanding Tx and Rx operation per socket. But then events come into play. Being notified of client disconnection is useful for bookkeeping, and this implies either synchronizing with the GUI thread (which is waiting for pending requests to finish in the destructor) or making an event that gets invoked from an arbitrary thread. The second way might not be as bad, really, but felt too complicated back then. I remember also trying to make the wait for pending operations alertable (so Synchronize works), but that also feels somewhat wrong. Frankly speaking, the ease of event handling provided by TServerSocket and TClientSocket and the simplicity of the code, classes and documentation (as compared to Indy, in fact) led to me making the wrong choice. Although at least I made a minimum viable “product” (sort of) with them. Just to see how bad they are for the task.