
Pawel Piotrowski


Posts posted by Pawel Piotrowski


  1. Unfortunately not.

    FMX applications are not accessible at all under Android/iOS. At least they weren't when I last tested.

    The screen reader support on Windows is also not perfect. The control under the mouse is often not read aloud, and sometimes moving the focus to a new control is not announced either.

     

    In VCL applications there is also the problem that none of the graphic controls like TLabel or TSpeedButton are "seen" by a screen reader. Which is no wonder, since they do not have an underlying window control with a proper handle.


  2. What was the problem with StackOverflow? Just curious.

     

    As for the invisible browser, I had a similar problem with the old TWebBrowser on a hidden TTabSheet on a visible TForm.
    I needed to show it at least once.

    I don't know the exact reason, but it seems the web browser needs some message to be sent to it before it will work properly.

    Try showing the form and then hiding it... if that works, then try to pinpoint the Windows message it needs to get going. Once you know it, you can emulate it.

    I know that is not a ready-made solution, but maybe it helps a bit.
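
    For illustration, here is a minimal sketch of that workaround. The form and method names are placeholders, not from your project:

    // Minimal sketch of the "show it once" workaround, assuming a form
    // (FormWithBrowser) that hosts the hidden TWebBrowser.
    procedure TMainForm.PrepareHiddenBrowser;
    begin
      FormWithBrowser.Show;          // let the browser receive its activation/paint messages
      Application.ProcessMessages;   // give it a chance to process them
      FormWithBrowser.Hide;          // hide it again; the browser should now be usable
    end;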

    • Like 1

  3. Usually I put as much as possible into SVN.

     

    We even have some projects where we put the executables, DLL files and all the files needed for a deployment into SVN.

     

    That is great because a tester can just update his working copy and test the newest build. Or he can go back and tell us when a bug was introduced.

    For the team it is also great. No need to sync images, DLLs, configs or other data through some other tool. Just update your working copy and you are good to continue your development.

     

    If you are worrying about the repository size on the server, remember that SVN stores deltas even for binary files. Maybe not as gracefully as for real text files, but disk space is much cheaper than developer time.

    • Like 2

  4. 17 hours ago, Mahdi Safsafi said:

    When you ask for memory, MM(FastMM) asks the OS for a large chunks and then it splits them and gives you a piece (based on the size you need). When the object is destroyed (free), the memory is returned to the MM. Now, based on the returned size, the MM may either choose to recycle the object location (if its small) or return the memory to the OS (a real-free-op).

    What you're doing in MultiplyStr is not just wasteful but extremely harmful ! For each iteration you're reallocating memory. Allocating a new block and copying the old block to the new one. It's very important to know that small block are implemented as a segregated list. i.e if you ask for a 32 bytes, MM on reality allocates an entire table i.e 32x32=1024 bytes and yields first block. In your example you said you used 1GB ! this is extremely bad because you're not economizing resources and you'll quickly run out of memory i.e another thread that asks for a large chunk.

    It's indeed a good practice to pre-allocate memory : 

    
    function MultiplyStr(aStr: string; aMultiplier: integer): string;
    var i: Integer;
    begin
      SetLength(Result, length(aStr) * aMultiplier);
      for i := 1 to aMultiplier do
        Result := Result + aStr;
    end;

     Please run the above and notice the memory and performance !!!

    Also a small remark ! aStr should be const !!!

     

    Actually, your code is wrong.

    Now you are doubling the memory usage.

    SetLength creates a big string, then in the loop, you add more to it...

     

    If you wish to pre-allocate the length of the string, then you need to do it this way:

     

      Function MultiplyStrWithPreallocate(aStr: String; aMultiplier: Integer): String;
      Var
        i, pos: Integer;
      Begin
        SetLength(Result, Length(aStr) * aMultiplier);
        pos := 1;
        For i := 1 To aMultiplier Do
        Begin
          move(aStr[1], Result[pos], Length(aStr) * sizeOf(Char));
          Inc(pos, Length(aStr));
        End;
      End;

     

    Here is what is going on behind the scenes when you call:

    Result := Result + aStr;

    The memory manager creates a new temporary string and assigns the content of "Result" and "aStr" to it. That new temporary string is then assigned to "Result", which in turn decreases the reference counter of the string that "Result" was referring to... in this case to 0, which causes the memory manager to release that string.

    So for a short period of time, both the original "Result" and the new "Result" + "aStr" are in memory.

    And for each loop iteration, you basically have a copy operation of the whole string.

     

    My optimization above using move addresses both of those concerns: there is just one memory allocation, and in each loop iteration only the characters of aStr are copied.

    • Like 1
    • Thanks 1

  5. If you are more comfortable using strings, then just use RawByteString.
    TBytes are fine, too. There will be no real performance drawback using them.

    RawByteString has a possible drawback, depending on how you use it. There are some functions that expect a string but will happily accept a RawByteString, doing an implicit conversion.
    And that conversion from RawByteString to string and then back to RawByteString can cost some performance...
    You will not have such a problem with TBytes.
    So if you go with RawByteString, keep a close eye on your compiler warnings 🙂
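
    For illustration, a minimal sketch of the kind of silent conversion I mean (UpperCase is just one example of a string-expecting RTL function):

    // needs System.SysUtils in the uses clause
    procedure RawByteStringPitfall;
    var
      raw: RawByteString;
    begin
      raw := 'abc';
      // UpperCase expects a (Unicode) string, so this line silently converts
      // RawByteString -> string and the result back to RawByteString.
      // The compiler flags it with "implicit string cast" warnings.
      raw := UpperCase(raw);
    end;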


  6. Maybe have a look at DEB, an Event Bus framework for Delphi from Daniele Spinetti.
    I think you will like it. It should help you achieve your decoupling without too much added complexity.

    Here is the blog entry about it:
    http://www.danielespinetti.it/2016/02/deb-event-bus-framework-for-delphi.html

    and here is the git repo:
    https://github.com/spinettaro/delphi-event-bus

     

    • Thanks 1

  7. I suppose I would write a PowerPoint automation. You can control Word and Excel from Delphi, so I suppose you can control PowerPoint as well.
    From there it is simple and fast:
    insert your text/music/images into your ppt template,
    then let PowerPoint export it as a video file.
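
    Something along those lines (a rough sketch via late-bound OLE automation, assuming PowerPoint is installed; file names are placeholders and CreateVideo needs PowerPoint 2010 or newer):

    uses System.Win.ComObj;    // for CreateOleObject

    procedure ExportPptAsVideo;
    var
      ppt, pres: OleVariant;
    begin
      ppt := CreateOleObject('PowerPoint.Application');
      try
        pres := ppt.Presentations.Open('C:\temp\template.pptx');
        // ... fill the placeholders with your text/images/music here ...
        pres.CreateVideo('C:\temp\output.mp4');
        // CreateVideo runs asynchronously; in real code poll pres.CreateVideoStatus
        // until the export has finished before closing the presentation.
        pres.Close;
      finally
        ppt.Quit;
      end;
    end;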

     


  8. Good or bad memory doesn't matter. You need to be able to navigate and understand the code even if you didn't write it yourself.

    I would guess your biggest problem lies in the hard-to-find coupling of controls with the actual code. In other words, bad code navigation.

    Code navigation is, for me personally, more important than separation. Well, I know, controversial. And don't get me wrong, I'm all for separation, but try to find a way that allows you to navigate your code nicely and still keep your separation.
    When you have a dozen projects, each with hundreds of units... well, you will be thankful to be able to navigate the code with ease.
    Do not try to follow the hype just because. There should be a reason to do it. Overcomplicating things doesn't solve anything.

    You wrote that your main form references the frame; maybe it shouldn't? Maybe it should reference the presentation layer instead. Maybe you should couple the frame with the presentation layer there, or at least have a call to the coupling there?

    I know the unpleasant WTF moment when you look at a form and have no idea where the code is that is supposed to be there... then you go on a search, wasting a lot of time... just because someone decided to implement a fully decoupled system just for the sake of it.

    I would suggest one of two solutions:
    1. Hide the frame from yourself. The main form (or better said, your entry point) should reference the presentation layer, where you can find the coupling of the controls and the frame that is used.

    or..

    2. Let the frame couple itself with the presentation layer. There is something beautiful about being able to click on a button in the form designer and see where the call goes.
    You need to do the coupling somewhere anyway.
    There is little reason to remove *all* the code from the frame. And if you do, ask yourself what the real benefit is for you. Are you solving a problem or are you introducing one?

     

     


  9. 1 hour ago, David Heffernan said:

    Stop even trying to justify this. The two pieces of code are not comparable. 

     

    You can see the single SetLength call. That's missing for the list. There's nothing more to say. 

    I agree. But others before me already pointed that out 🙂

    I was trying to explain (not justify) the differences between the two samples given, the first with 17% more memory and the second with 68% more.
    You can explain the +17% with the missing SetLength, but not the +68% more memory.
    More knowledge doesn't hurt, and if someone uses the task manager to compare memory usage, I suppose it is nice to know a bit about the memory manager itself, isn't it?

    • Like 1

  10. Like others said, TList has a capacity growing strategy.

     

    The second, and more important, thing is that the memory manager doesn't release the memory back to Windows.
    So what you might be getting is this:
    When you add items to the list and it tries to resize, it may happen that there is not enough room at the current location to resize in place, so the memory manager copies the whole array to a new location that can hold the new size.
    But the old memory space is still kept reserved by the memory manager.
    So when you use the task manager, Windows sees both.
    And BTW, the memory manager doesn't request the exact size either. Like TList, it has a growth strategy, too.
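
    If you want to take the list growth out of the picture, you can pre-set the capacity yourself. A minimal sketch:

    uses System.Generics.Collections;

    procedure FillList(aCount: Integer);
    var
      list: TList<Integer>;
      i: Integer;
    begin
      list := TList<Integer>.Create;
      try
        list.Capacity := aCount;   // one allocation up front, no reallocate-and-copy while adding
        for i := 1 to aCount do
          list.Add(i);
      finally
        list.Free;
      end;
    end;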

     

     

    • Like 1

  11. 3 hours ago, Fr0sT.Brutal said:

    Don't bother, you explained it pretty clear but 2 bytes could easily cross 32bit boundary couldn't they?

    Thanks 🙂

     

    The answer to your question is yes and no 😉

     

    No: with the default Delphi settings, and in a regular record or class, a word cannot cross a 32-bit boundary. The compiler ensures that for you by default.

     

    Yes: you can force it to cross a 32-bit boundary. Delphi gives you that option: with packed records, by changing the compiler settings, or with the $A alignment directive... and maybe something else 😉

    But that is usually a conscious choice.


  12. 9 minutes ago, Fr0sT.Brutal said:

    Yes, and I see that w2 field is not aligned. And it is 2 bytes long so, AFAIU, potentially prone to non-atomic change. 

    I might have unintentionally confused you... sorry for that.
    The column header in my test saying "aligned" should rather say "starts at an address divisible by 4". But that was too long for the header.

    I tried - and failed, it seems 😉 - to clarify that at the bottom of that same post.

    Even if a variable doesn't start aligned (its address is not divisible by 4), as long as it doesn't cross the 32-bit boundary, access to it is guaranteed to be atomic.
    Basically, such a word is atomic no matter whether the 2 bytes that get discarded are at the start of the fetched memory or at the end of it.


  13. 4 hours ago, Fr0sT.Brutal said:

    It's a rare case for variables that are accessed from several threads to be independent. Usually they are fields of a structure/object/class so nothing could guarantee they're aligned without explicit measures. So you'll have to either ensure alignment (by using paddings, dummy fields, $A directives etc) or just accept that accessing variables of size more than 1 byte is probably not atomic.

    The situation is not that grim and hopeless 🙂

    dummzeuch is correct.
    Delphi already performs the alignment, not only for global variables but also for fields inside a record or a class. See my post above for measurements of the sample record: none of those fields crosses a 32-bit boundary; Delphi injects padding bytes to ensure this.

     

    This is the default setting.
    So you only need to be aware of special cases, like packed records and such.


  14. 6 hours ago, FredS said:

    My point was about a single write single read thread.

    I know 🙂
    But the number of threads doesn't change whether the assumption that strings are thread safe is true or false 😉
    The high number of threads just helps to increase the probability of bad things happening 🙂
    And we want bad things to happen while we write and test the code, not after we ship the software.
    When working with threads, one of the many problems is... the code can work just fine even if there are problems in it. At least for a while, on the development machine. The problems start rolling in when you ship the software to a few hundred customers. Then you start to get strange bug reports from them. That is a very unlucky position to be in.

    • Like 1
    • Thanks 1

  15. I'm not sure that your test case not crashing is proof that strings are now thread safe.

    Remember how strings are designed:
    http://www.marcocantu.com/epascal/English/ch07str.htm

     

    So a string variable is just a pointer to the string content, and right before that content you will find the length and the reference count.

    In order to increase the reference count, the CPU needs to get the address of the content, then step back from it (the count is stored in front of the content), and only then does it have access to the reference count.
    Only then can it safely increase it.
    Those are multiple read/write operations right there. This is not atomic.

     

    Increase the number of threads that perform the writing and reading. Maybe then it will crash.


    I've prepared a small test app myself.
    It crashes almost instantaneously. Try it out.
    And even when it doesn't crash, it shows errors in the string length.
    But if you enable the critical section, everything is fine again.

     

     

    Unit Unit1;
    
    { .$DEFINE UseCS }
    
    Interface
    
    Uses
      Winapi.Windows, Winapi.Messages, System.SysUtils, System.Variants, System.Classes, Vcl.Graphics,
    {$IFDEF UseCS }
      syncObjs,
    {$ENDIF}
      Vcl.Controls, Vcl.Forms, Vcl.Dialogs, Vcl.StdCtrls, Vcl.ExtCtrls;
    
    Type
      TForm2 = Class(TForm)
        Timer1: TTimer;
        StaticText1: TStaticText;
        Procedure FormCreate(Sender: TObject);
        Procedure Timer1Timer(Sender: TObject);
        Procedure FormDestroy(Sender: TObject);
      Private
        fErrorCounter: integer;
        fTerminated: boolean;
    {$IFDEF UseCS}
        fCS: TCriticalSection;
    {$ENDIF}
        Procedure asyncWrite;
        Procedure asyncRead;
      Public
    
      End;
    
    Var
      Form2: TForm2;
      GlobalString: String;
      TempString: String;
    
    Implementation
    
    {$R *.dfm}
    
    
    Procedure TForm2.asyncRead;
    Var
      len: integer;
      x: integer;
      s: String;
    Begin
      Repeat
    {$IFDEF UseCS}
        fCS.Enter;
        Try
    {$ENDIF}
          s := GlobalString;
          len := length(s);
    {$IFDEF UseCS}
        Finally
          fCS.Leave;
        End;
    {$ENDIF}
        For x := 0 To 99999 Do
        Begin
    {$IFDEF UseCS}
          fCS.Enter;
          Try
    {$ENDIF}
            If len <> length(s) Then
            Begin
              inc(fErrorCounter);
              break;
            End;
    {$IFDEF UseCS}
          Finally
            fCS.Leave;
          End;
    {$ENDIF}
        End;
      Until fTerminated;
    
    End;
    
    Procedure TForm2.asyncWrite;
    Var
      x: integer;
    Begin
      Repeat
    
    {$IFDEF UseCS}
        fCS.Enter;
        Try
    {$ENDIF}
          If random(2) = 0 Then
            GlobalString := ''
          Else
            GlobalString := StringOfChar('a', random(124));
    {$IFDEF UseCS}
        Finally
          fCS.Leave;
        End;
    {$ENDIF}
      Until fTerminated;
    End;
    
    Procedure TForm2.FormCreate(Sender: TObject);
    Var
      x: integer;
    Begin
      randomize;
    
    {$IFDEF UseCS}
      fCS := TCriticalSection.create;
    {$ENDIF}
      For x := 0 To 9 Do
      Begin
        TThread.CreateAnonymousThread(asyncWrite).Start;
        TThread.CreateAnonymousThread(asyncRead).Start;
      End;
    
    End;
    
    Procedure TForm2.FormDestroy(Sender: TObject);
    Begin
      fTerminated := true;
    End;
    
    Procedure TForm2.Timer1Timer(Sender: TObject);
    Begin
      StaticText1.Caption := IntToStr(self.fErrorCounter);
    End;
    
    End.

     


  16. 40 minutes ago, ertank said:

    What about string variable pointing to "1/9" as value and that is not going to be 4 characters in total ever?

    For a Delphi string, it's never thread safe if you have one thread writing and another thread reading.
    What is safe is the reference counting of strings.
    Copy-On-Write of Delphi strings is not a thread safe operation. If you need multithreaded read/write access to the same string, you generally should use some synchronization, otherwise you are potentially in trouble.

     

    An example of what could happen without any lock:

    A string is being written: it should become bigger than it was, so new memory is allocated. But the pointer is not yet modified; it still points to the old string.

    At the same time, the reading thread gets the pointer and begins to read the old string.

    The context switches back to the writing thread. It changes the pointer, so now it is valid. The old string gets refcount 0 and is immediately freed.

    Context switch again: the reading thread continues to process the old string, but now it is accessing deallocated memory, which may easily result in an access violation.

     

    3 hours ago, Fr0sT.Brutal said:

    Hm, what if a variable isn't aligned? It produces single mov instruction anyway. Or do you mean that CPU will have to do several instructions to modify that variable?

    dummzeuch is correct. Usually you do not need to worry; Delphi plays nice and takes care of it for you.

     

    But if you want to know why non-aligned memory access is not atomic, here is why:

    The problem is not limited to CPU instructions. In fact it has more to do with the data bus.
    When you have a 32-bit wide data bus, a read from memory is aligned on that boundary.

    So if you were to perform a 32-bit read from address 0x02, then two memory cycles are required: a read from address 0x00 to get two of the bytes and a read from 0x04 to get the other two bytes.

     

    You see, the first read fetches all 4 bytes from address 0x00 and discards the first 2 bytes.
    The second read fetches all 4 bytes from 0x04 and similarly discards the last two bytes.
    After that, the remaining bytes are combined to give you the 32-bit data you requested.
     
    This is not guaranteed to be atomic. A different CPU core could get its chance, in between the above two reads, to change the memory at address 0x04 just before you read it.
     
    This is why you cannot assume atomicity with non-aligned variables.

     

    You might get lucky, because the CPU has multiple caches, and it might happen to be safe. But it is not guaranteed to be atomic.

     

    On a similar note, aligned memory is roughly twice as fast to read/write as an access that is split across a boundary, and this is why Delphi (and other compilers) align instructions and data in memory.

     

    3 hours ago, dummzeuch said:

    Most variables are automatically aligned on a 32 bit boundary. That's the compiler default. So unless you explicitly make something not aligned, e.g. see my example above, or you access some data structure created by foreign code, there won't be any problem.

     

    I've built a small test to see how the following record will be aligned, just for fun.
     

    TmyNiceRecord = Record
        i1: Integer;
        i64: int64;
        b1, b2, b3: byte;
        w1, w2: word;
        b4: byte;
        i2: Integer;
        b5: byte;
        b6: byte;
        b7: byte;
      End;


     
      Who can guess which fields are aligned properly? Which are guaranteed to be atomic?
      Here are the results:


     For a win32 build, with Delphi 10.3.2:
     

    Address*| Aligned | VarName | VarSize
    0       | Yes     | i1      | 4
    8       | Yes     | i64     | 8
    16      | Yes     | b1      | 1
    17      | No      | b2      | 1
    18      | No      | b3      | 1
    20      | Yes     | w1      | 2
    22      | No      | w2      | 2
    24      | Yes     | b4      | 1
    28      | Yes     | i2      | 4
    32      | Yes     | b5      | 1
    33      | No      | b6      | 1
    34      | No      | b7      | 1

     (* Address is the offset from the start of the record)
     
    For win64

    Address | Aligned | VarName | VarSize
    0       | Yes     | i1      | 4
    8       | Yes     | i64     | 8
    16      | Yes     | b1      | 1
    17      | No      | b2      | 1
    18      | No      | b3      | 1
    20      | Yes     | w1      | 2
    22      | No      | w2      | 2
    24      | Yes     | b4      | 1
    28      | Yes     | i2      | 4
    32      | Yes     | b5      | 1
    33      | No      | b6      | 1
    34      | No      | b7      | 1

    interesting, isn't it?

     

    So, is Delphi playing nice? Which fields can be assumed to be atomic?

    The answer is: all of them.

    Even w2, b2 and b3. Yes, they do not start at an aligned address, but they do not cross a 32-bit boundary either.

    This means the read/write is still atomic.
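
    The attached test app does the actual measuring; just to give an idea, here is a minimal (console) sketch of how such an offset can be computed:

    procedure ShowOffset;
    var
      rec: TmyNiceRecord;
      offset: NativeInt;
    begin
      // offset of a field = address of the field minus address of the record
      offset := NativeInt(@rec.w2) - NativeInt(@rec);
      Writeln('w2 offset: ', offset, '  starts aligned: ', (offset mod 4) = 0);
    end;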

     

    I hope that helps to better understand the topic.

    MemoryAlignedOrNot.zip

    • Like 1
    • Thanks 1

  17. 3 hours ago, dummzeuch said:

    Are you sure about that? What about a packed record which starts with a byte followed by an integer? Is access to that integer atomic? On all platforms? 

     

    Of course it won't be atomic!

     

    There is a very, very important rule besides the size: a variable must be aligned on a 32-bit boundary.

     

    On x86 systems, a 32-bit mov instruction is atomic if the memory operand is naturally aligned, but non-atomic otherwise. In other words, atomicity is only guaranteed when the 32-bit integer is located at an address which is an exact multiple of 4.
    That rule is true on all modern x86, x64, Itanium, SPARC, ARM and PowerPC processors: plain 32-bit integer assignment is atomic as long as the target variable is naturally aligned.

     

    Which the integer in your packed record will most likely not be - unless... it is part of another record that has 3 bytes worth of fields before your packed record 😉

    So to assume atomicity, you need to know where your variable is placed in memory...

     

    Your record may be safe anyway - by luck, not by design - because there is something like a CPU cache line.
    The CPU has caches, and a cache line is locked as a whole, so multiple cores cannot access it simultaneously...
    See here for more details, I'm not an expert on that: http://delphitools.info/2011/11/30/fixing-tcriticalsection/

     

    To sum up: use TInterlocked to access simple types if you are not 100% sure they will be naturally aligned.
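
    For example, a minimal sketch using TInterlocked from System.SyncObjs (the variable and routine names are made up):

    uses System.SyncObjs;

    var
      SharedValue: Int64;

    procedure WriteShared;
    begin
      TInterlocked.Exchange(SharedValue, 123);   // atomic write of the 64-bit value
    end;

    function ReadShared: Int64;
    begin
      Result := TInterlocked.Read(SharedValue);  // atomic read, even on 32-bit targets
    end;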

    • Like 1

  18. 2 hours ago, ertank said:

    Do I still need to do such protection for reading only? Thread will be the only process which is writing into a string variable. 

     

    First things first: what you describe is not read-only.
    When one thread writes and another thread reads, you need to protect the variable. Always.

    OK, almost always. There are types that are guaranteed to have atomic read/write operations (see the comments above), so they are protected by the CPU. Anyway, you need to be aware of this even if you do not need to protect the variable yourself. And you need to double-check the architecture of the device your code will run on. I would say all of them should have the same rules regarding atomic read/write operations... but...

    In the case of strings: protect them. The reference counting is thread safe, but the string itself is not.
    See here for a topic on why: http://codeverge.com/embarcadero.delphi.general/strings-thread-safety/1051533
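
    A minimal sketch of one way to protect a shared string (the class and field names are placeholders, locking via a critical section):

    uses System.SyncObjs;

    type
      TSafeString = class
      private
        fLock: TCriticalSection;
        fValue: string;
      public
        constructor Create;
        destructor Destroy; override;
        procedure SetValue(const aValue: string);
        function GetValue: string;
      end;

    constructor TSafeString.Create;
    begin
      inherited Create;
      fLock := TCriticalSection.Create;
    end;

    destructor TSafeString.Destroy;
    begin
      fLock.Free;
      inherited;
    end;

    procedure TSafeString.SetValue(const aValue: string);
    begin
      fLock.Enter;
      try
        fValue := aValue;
      finally
        fLock.Leave;
      end;
    end;

    function TSafeString.GetValue: string;
    begin
      fLock.Enter;
      try
        Result := fValue;   // the copy (and refcount bump) happens inside the lock
      finally
        fLock.Leave;
      end;
    end;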

     

    • Like 1

  19. My advice would be the same.
    Stay away from TThread.Synchronize calls if possible.

    Instead, either call TThread.Queue - that works a bit better -

    or, even better, have a shared variable into which the thread writes the current step.
    In the main GUI thread have a timer, read out that same variable there and update the label.
    And don't forget to protect that shared variable with a critical section or similar.
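
    Roughly like this (a minimal sketch; fStatus, fStatusLock and the timer handler are assumptions, not from the original question):

    // shared between the worker thread and the form:
    //   fStatus: string;  fStatusLock: TCriticalSection (from System.SyncObjs)

    // inside the worker thread, after each step:
    fStatusLock.Enter;
    try
      fStatus := Format('Step %d of %d', [step, totalSteps]);
    finally
      fStatusLock.Leave;
    end;

    // in the main thread, a TTimer with e.g. Interval = 200:
    procedure TMainForm.StatusTimerTimer(Sender: TObject);
    begin
      fStatusLock.Enter;
      try
        lblStatus.Caption := fStatus;
      finally
        fStatusLock.Leave;
      end;
    end;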

    • Like 1

  20. You can try these options:

    1) Don't compile the third-party components every time. Just compile them once when you install a new version, in both release and debug mode. Put the path to the release DCUs in the "Library path", the path to the debug DCUs in "Debug DCUs" and the path to the .pas files in the "Browsing path". This way you can still fully debug the third parties (by selecting "Use debug DCUs" in the compiler options) or treat them as a black box (by deselecting it). But the more important thing is that you will compile far fewer lines of code every time you do a build.

    2) Look out for circular dependencies between units.

    3) Move used units down from the interface uses section to the implementation uses section.

    4) Remove units that are not needed from the uses sections.

     You can also compile the project from the command line and redirect the output to a file. Then open that file, see which units take a long time to compile, and review their uses sections (both interface and implementation).
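
     For example, something like this from a command prompt (MyProject is a placeholder; -B forces a full build):

     dcc32 -B MyProject.dpr > build_log.txt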

     

    • Like 1

  21. Have a look at the TRttiEnumerationType class in System.Rtti (RTTI.pas).

    You can do it like this:

    type
      TmyEnum = (enum1, enum2);

    TRttiEnumerationType.GetName<TmyEnum>(enum1)     // returns 'enum1' as a string
    TRttiEnumerationType.GetValue<TmyEnum>('enum1')  // returns enum1
    

    Quite ugly... but there you go.
    Usually I write a record helper to simplify it a bit, like this:

    type
      TmyEnumHelper = record helper for TmyEnum
        function toStr: string;
        class function from(const aName: String): TmyEnum; static;
      end;

    function TmyEnumHelper.toStr: string;
    begin
      result := TRttiEnumerationType.GetName<TmyEnum>(self);
    end;

    class function TmyEnumHelper.from(const aName: String): TmyEnum;
    begin
      result := TRttiEnumerationType.GetValue<TmyEnum>(aName);
    end;
    

    This simplifies it a bit, and then you can use it like this:

    var
      e: TmyEnum;
    ...
    e := TmyEnum.from('enum1');

    // and later
    e.toStr;   // gives 'enum1'
    

    You can also just cast it to an integer value:

    var
      i: integer;
      e: TmyEnum;
    ...
    i := ord(enum1);
    e := TmyEnum(i);
    