
pcoder


Posts posted by pcoder


  1. Essentially I need to copy/paste MIME formats as defined in urlmon.h and others.
    I've seen a few clipboard readers which use GlobalSize(handle) as the file size.

    The alternative would be to have a correct parser for every possible file type, just to find the file size.

    I believe Microsoft is aware of these and other challenges and has corrected GlobalSize(), but I have no proof.


  2. Here is an excerpt from the linked Old New Thing post:

    Quote

    Bonus chatter: It appears that at some point, the kernel folks decided that these “bonus bytes” were more hassle than they were worth, and now they spend extra effort remembering not only the actual size of the memory block but also the requested size. When you ask, “How big is this memory block?” they lie and return the requested size rather than the actual size. In other words, the free bonus bytes are no longer exposed to applications by the kernel heap functions.

    By "mistake" I mean that GlobalSize() would be largely useless if it did not return the requested size (i.e. the clipboard content size).

    And I just wanted to know whether any of the forum members has ever received a larger size than requested, because I haven't.


  3. Thank you, initially I thought the same (as stated in the MS docs), but so far I could not confirm that GlobalSize() returns a larger size.
    Did you also read the given link to The Old New Thing, which contains the "bonus chatter"?

     

    And I cannot believe that MS would make such a mistake and not correct it.
    In my use case, the clipboard format passed to GetClipboardData(format) refers to a MIME type (a file in memory).
    The clipboard reader needs to save each received stream to a file, which is easy if the file size is known. But interpreting every data format
    (more than 20 different MIME types) just to find the correct file size is very cumbersome.

    I just hope that GlobalSize() returns the same as HeapSize().

     

    If the clipboard is used for a MIME type (a file: png, svg, ...), then the file size is not stored additionally
    (because that would break the MIME format). Therefore I need GlobalSize() to report the correct file size.
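
    As a sketch of what I mean (format and fileName are placeholders; it assumes GlobalSize() returns the content size, and Winapi.Windows and System.Classes in the uses clause):

    ```pascal
    // Sketch only: save one clipboard format to a file.
    // Correct only if GlobalSize() has no "bonus bytes".
    procedure SaveClipboardFormatToFile(format: UINT; const fileName: string);
    var
      h: HGLOBAL;
      p: Pointer;
      fs: TFileStream;
    begin
      if not OpenClipboard(0) then
        Exit;
      try
        h := GetClipboardData(format);
        if h = 0 then
          Exit;
        p := GlobalLock(h);
        if p = nil then
          Exit;
        try
          fs := TFileStream.Create(fileName, fmCreate);
          try
            fs.WriteBuffer(p^, GlobalSize(h)); // writes extra bytes if size > content
          finally
            fs.Free;
          end;
        finally
          GlobalUnlock(h);
        end;
      finally
        CloseClipboard;
      end;
    end;
    ```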


  4. Has anyone seen a case of GlobalSize() returning more than the requested size?
    It would be much more useful to get the requested size (<= allocated size) in order to determine the clipboard data size,
    but unfortunately I've found contradictory statements about GlobalSize().

     

    Pro allocated size:
    learn.microsoft.com/en-us/windows/win32/api/winbase/nf-winbase-globalsize
    stackoverflow.com/questions/22075754/how-do-i-determine-clipboard-data-size

     

    Pro requested size:
    devblogs.microsoft.com/oldnewthing/20120316-00/?p=8083
    www.labri.fr/perso/betrema/winnt/heapmm.html
    www.joelonsoftware.com/2001/11/20/a-hard-drill-makes-an-easy-battle/
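
    For anyone who wants to check on their own machine, a minimal self-test sketch (Delphi console app; assumes Winapi.Windows in the uses clause):

    ```pascal
    // Allocate odd sizes and report any mismatch between the
    // requested size and what GlobalSize() returns.
    procedure CheckGlobalSize;
    var
      h: HGLOBAL;
      requested: SIZE_T;
    begin
      for requested := 1 to 64 do
      begin
        h := GlobalAlloc(GMEM_MOVEABLE, requested);
        if h <> 0 then
        try
          if GlobalSize(h) <> requested then
            Writeln('requested ', requested, ' -> GlobalSize ', GlobalSize(h));
        finally
          GlobalFree(h);
        end;
      end;
    end;
    ```

    If nothing is printed, GlobalSize() returned the requested size for every allocation.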


  5. COLRv1 was just a piece of information shown in the file properties dialog.
    The glyph rendering looks vector-based and has color gradients, so SVG or COLRv1.

     

    Format selection is for lower-level handling, but I use only RenderTarget.DrawTextLayout() with D2D1_DRAW_TEXT_OPTIONS_ENABLE_COLOR_FONT.
    It would be useful here (also for end users) if DirectWrite offered a parameter to specify a preferred format.
    If possible I try to avoid lower-level handling, and even more the extra code it requires, judging by the preview.


  6. Answering myself: this is OK only for (TDirect2DCanvas.RenderTarget.dpi = 96), i.e. (pixels = DIPs).
    A dpi of 96 (regardless of screen dpi and font dpi) allows clients to pass pixel coordinates to all Direct2D functions that expect DIPs.
    This is GDI-compatible, so clients must provide final pixel values (dependent on the screen (or other) DPI).
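
    The pixel/DIP relation can be written down directly (a sketch; screenDpi is whatever the target monitor reports):

    ```pascal
    // DIPs are defined relative to 96 DPI:
    //   pixels = DIPs * dpi / 96   and   DIPs = pixels * 96 / dpi
    // With RenderTarget.dpi = 96 both factors are 1, so pixels = DIPs.
    function PixelsToDips(pixels, screenDpi: Integer): Single;
    begin
      Result := pixels * 96 / screenDpi;
    end;

    function DipsToPixels(dips: Single; screenDpi: Integer): Integer;
    begin
      Result := Round(dips * screenDpi / 96);
    end;
    ```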


  7. Each text line needs up to 2 seconds :), varying with whether DirectWrite has found a way to cache something.

    I have also found that DirectWrite text drawing to a bitmap DC is slower than DirectWrite text drawing to a window DC.

    But I'm still happy with the progressive display now. And users have the option to select a different font :)

     

    The challenge (I should have mentioned it earlier) was that the user clicks a button to generate a different image on display.
    Thus a waiting time between the button click and the resulting display is impossible to avoid, regardless of the internal concept.


  8. Thanks for all! I will use a bitmap and draw into it whenever I have a time slot.
    The main problem is a really slow color font (text drawing 100+ times slower than with other color fonts).
    I will also try distributing the work over several WM_PAINT messages (each displays the whole, but possibly unfinished, bitmap),
    will see... (it should look better than a freeze). As Remy said, the framework is not prepared for this,

    so I will extend the WMPaint method to trigger the next paint. Initial tests have shown that this works.

    procedure TmyControl.WMPaint(var Message: TWMPaint);
    begin
      wantMorePaintTime := False;
      inherited;
      if wantMorePaintTime then
        PostMessage(Handle, iwm_wantPaintTime, 0, 0); // handler calls Self.Invalidate;
    end;
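
    The companion handler could look like this (iwm_wantPaintTime is my own message id, e.g. WM_APP + 1; this is a sketch, not tested beyond the idea):

    ```pascal
    const
      iwm_wantPaintTime = WM_APP + 1; // private message id (assumption)

    // declared in the class as:
    //   procedure WMWantPaintTime(var Message: TMessage); message iwm_wantPaintTime;
    procedure TmyControl.WMWantPaintTime(var Message: TMessage);
    begin
      Invalidate; // schedules the next WM_PAINT, which continues the drawing
    end;
    ```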

     


  9. Sometimes my drawing in OnPaint (WM_PAINT) is very slow (a kind of UI freeze).
    Now I would prefer to exit the paint handler earlier (based on a time measurement (GetTickCount))

    and continue the drawing in subsequent OnPaint events (like progressive drawing).

    Is that possible under Windows? And how can I trigger the paint events?


  10. I found that TDirect2DCanvas.SetPen() doesn't assign (copy) the contents, but only stores the parameter pointer.

    A side effect is that callers cannot free or reuse the original variable. What is the reason behind this?
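
    A sketch of the pitfall as I observed it (canvas is a TDirect2DCanvas; this illustrates the reported behavior, not documented semantics):

    ```pascal
    var
      pen: TPen;
    begin
      pen := TPen.Create;
      try
        pen.Color := clRed;
        canvas.Pen := pen;   // setter stores the reference only (observed behavior)
      finally
        pen.Free;            // the canvas now holds a dangling reference
      end;
    end;
    ```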


  11. On mainstream computers, only the legacy FPU exists (co-exists) for higher precision. But the point is, speed per operation only matters if the number of operations is large. This is similar to the existence of int64 on 32-bit CPUs. And many math operations (including BLAS) have use cases for more than double precision.

     

    The conclusion remains: libraries can be used if Delphi does not support float80. Some compilers (gcc, icc, Fortran compilers) have this built in, and Delphi could follow. The main trend (leaving aside pure GPU solutions) is that floating-point operations use SIMD instructions for all precisions.
    Additionally, complex operations on float128 values can profit greatly from specialized algorithms (e.g. matrix multiplication via the Ozaki scheme).


  12. On 9/13/2022 at 1:33 PM, David Heffernan said:

    I believe that the bulk of numerical computing doesn't need more than double. Is that not the case?

    The bulk of business apps, probably, but academic fields should not be neglected.
    The general trend is that standard C runtimes (see libc) implement float80 and float128 to be cross-platform (hardware- and software-based).
    And the C standard recommends "long double" to be the largest of float128, float80 and float64.
    A useful overview of the current accuracy state is here. (GNU libc and the Intel Math Library (libmmd.dll) support float128.)


  13. Also, let's not forget the large number of existing functions with extended precision, most notably amath.pas.
    They will run as long as 32-bit Delphi apps are supported. Downgrading (to lower precision) would be wrong in terms of software functionality.


  14. 1 hour ago, David Heffernan said:

    Given that the suppliers of hardware have dropped support for extended I would suggest that you are out of date. 

    The question was about 32-bit Delphi. Modern Intel/AMD FPUs still have the 80-bit mode, and if the FPU were not in hardware,

    it would be provided by system emulation. Your earlier advice ("nobody should") did not consider the precision drop to 64-bit float and the developers who need more precision.

    That's why I mentioned the possible need for alternatives (large-precision libraries). A developer of mathematical software should know this ...


  15. 3 hours ago, David Heffernan said:

    True, but nobody should be using that type anyway. I speak as a developer of mathematical software.

    With 80-bit float you have smaller error and a larger numeric range. Your statement is helpful only if you have an alternative (library) for higher precision.
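
    A small illustration of the error difference (32-bit Delphi, where Extended is the 80-bit type; on Win64 Extended aliases Double and both lines would print TRUE):

    ```pascal
    var
      d: Double;
      e: Extended;
    begin
      d := 1.0;
      d := d + 1.1E-17;  // below Double's epsilon (~2.2E-16): rounded away
      e := 1.0;
      e := e + 1.1E-17;  // above Extended's epsilon (~1.1E-19): survives
      Writeln(d = 1.0);  // TRUE  (the addend was lost)
      Writeln(e = 1.0);  // FALSE on 32-bit (the addend is kept)
    end.
    ```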


  16. 6 hours ago, Rinzwind said:

    Why not use Free Pascal exclusively? Or is it lacking in some areas?

    If a FOSS project does not get the expected traction, despite decades of public existence and despite a capable language (Object Pascal), then it has some strange (or not professional-grade) priorities. It is not attractive for new/additional compiler developers.


  17. I have systems with identical software, always with the latest Windows 10.
    Slowness/hanging is noticeable only on systems with a single hard disk and no SSD.
    The hard disk can become fully occupied by system activity and thus block user processes, which is very annoying.
    System updates are configured to be postponed and MS Store app updates are set to manual,
    but this helps only partially. Switching off the internet connection is more effective.
