Everything posted by Rollo62

  1. Wouldn't NativeInt be the right cast for a pointer? It matches the pointer bitness on both 32- and 64-bit machines.
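
    To illustrate, a minimal sketch; the procedure name is made up and it only shows the round trip:

        procedure PointerRoundTrip;
        var
          P: Pointer;
          N: NativeInt;  // 32 bit on 32-bit targets, 64 bit on 64-bit targets
        begin
          P := @N;                  // any pointer will do for the example
          N := NativeInt(P);        // cast without truncation on either platform
          Assert(Pointer(N) = P);   // round-trips back to the same address
        end;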
  2. Rollo62

    Drone control from mobile

    I'm afraid you had the ESP8266 in mind; as far as I know, ESP32 boards cost around $5. But you never know what quality to expect from different suppliers ...
  3. Hi there, I need to choose a basic type for caching and manipulating binary data, which is mostly represented as String but could also be pure Byte data. The problem is that I need to analyse, chop, copy, append and re-combine this buffer in several places, and in the end the data will be a string most of the time. The original source is TBytes, so my first consideration is to keep TBytes as the buffer data type. The original data mostly contains ANSI strings, but in some cases it may also contain binary (Byte) data, 0 ... 255.

    In short, the basic question is: with the original source data as TBytes, should I
    - keep TBytes for the buffer manipulations and convert parts to string later in the various places, or
    - immediately convert all TBytes to e.g. String and use string for manipulating the data, even if some of it may be binary?

    A. TBytes: From my gut feeling I would say that TBytes is probably not the most efficient data type for handling such data, since its dynamic handling is not supported very well by the compiler. A lot of pointer tricks and memory moves are needed to make it efficient.

    B. String: Strings, on the other hand, are very efficient and optimized, using all the tricks like copy-on-write to make them fast and easy. I use them in many places and they always behave very well and very efficiently. The drawback is that strings are Char-based, which should double the memory footprint compared to Byte. And what would be the right codepage for the encoding then?

    C. RawByteString: The alternative RawByteString is not recommended, only as a replacement for older AnsiStrings with codepage issues, so it clearly has a different use case.

    D. AnsiString (without a specific codepage): I could take AnsiString without a codepage as the base type, which would possibly reach the same efficiency as strings, but since AnsiString was deprecated and once removed from the modern platforms, this leaves a bad taste. It seems it only came back after massive complaints from the community.

    So my current decision tends more towards using pure String as the base type:

        type
          BufferType = String;
        var
          FBuffer : BufferType;
        ...
        //<== Single point of source data
        procedure SourceData( AData : TBytes );
        begin
          FBuffer := EncodeAsASCII( AData ); // use no specific codepage, or a DOS-like one, to simply use Byte (0 ... 255) as elements
          ...
          // FBuffer copy, move, indexof, concat, ...
          //<== Further processing on FBuffer with effective string routines
        end;

    Is that the right decision, ignoring the doubled footprint in favour of speed? So which option should I choose, A., B., C. or D., or have I maybe overlooked yet another possible option? I hope you can help me with that decision.
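
    Just to make the EncodeAsASCII idea from the snippet above concrete: a minimal sketch, assuming the goal is simply to map each byte 0 ... 255 onto one Char 1:1 (the implementation below is only an illustration, not a finished routine):

        // Sketch only: maps every byte 1:1 to a Char (U+0000 .. U+00FF),
        // so the byte values survive unchanged inside a normal String.
        function EncodeAsASCII(const AData: TBytes): string;
        var
          I: Integer;
        begin
          SetLength(Result, Length(AData));
          for I := 0 to High(AData) do
            Result[I + 1] := Char(AData[I]);  // strings are 1-based
        end;

    The reverse direction would take the low byte of each Char again; anything above #255 would of course be lost.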
  4. I agree on the ugliness, but what if type extension is too hard to get ... What else is on the roadmap?
  5. It would be great to support at least some kind of cast to the desired helper; that would solve some cases where several competing helpers are floating around.
  6. Rollo62

    ANN: FireDocking 1.0 beta1 has been released!

    Great that you support FireMonkey. It would be very helpful to see some visual impression of FireDocking in action.
  7. Rollo62

    Error with Transporter

    Have you upgraded the project from Rx10.3.3? Maybe the compiler option "Generate iOS universal binary" is still somewhere in the .dproj file, or other settings have gone wrong. If the project worked before, I usually recreate my .dproj files to get a clean .dproj setting, especially after larger version upgrades like 10.3.3 to 10.4.
  8. @TurboMagic Thanks, I was not aware of that feature. Found this here from Marco. But I'm working with TBytes as parameters, so I'm not sure whether that would also work. So far I'm using manual pointer operations. I have to check how that is done and look into its performance, maybe only next week.
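
    Just to show what I mean by manual pointer operations over TBytes, a rough sketch (the helper name IndexOfByte is made up):

        // Scans a TBytes buffer with a raw PByte pointer instead of array indexing.
        function IndexOfByte(const AData: TBytes; AValue: Byte): Integer;
        var
          P: PByte;
          I: Integer;
        begin
          P := PByte(AData);  // address of the first element (nil for an empty array)
          for I := 0 to High(AData) do
            if P[I] = AValue then  // PByte supports indexing (pointer math)
              Exit(I);
          Result := -1;
        end;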
  9. @Erik@Grijjy Great stuff, thanks for that. What is your next project, are you going to implement a similar structure for the JSON parser?
  10. Rollo62

    general question about embedding docs in an app

    @Der schöne Günther It depends on what you need. The HtmlComponents are a complete RichEdit-like editor and end up as a complete reporting solution, with all the possibilities it should have (scripting, interactive options). So it can create and display the HTML if you need that; e.g. for translations the app could be switched into a "translation" mode for special users. Anyway, for pure display a web browser is fine too, but David asked for "components" explicitly, so I think they are a good choice. On the downside, you can see what happens when TWebBrowser support is broken, as it was in some older versions and on other platforms, or now with the messy change to the Edge browser. Sometimes it's good to have your own viewer in the backpack.
  11. Rollo62

    general question about embedding docs in an app

    I would recommend those HtmlComponents; they can show HTML nearly as well as a web browser.
  12. Rollo62

    [Source code]

    Very good, the Python4Delphi community gains traction 👍
  13. I used the circular buffer (of bytes) in the first place because of that decoupling of write/read operations over a fixed time. The transmissions could be fragmented, meaning 1, 2 or 3 transmissions are needed to complete a frame. But in the beginning the frames came veeeeeery rarely, only on manual demand, from the first devices we supported. Later I needed to add new devices, which send data more frequently but still moderately; that's why I'm considering possible optimizations now. Maybe the next devices will have an even higher load, so this might get critical.

    The circular buffer I use has fixed memory pre-allocated, so there should be no overhead from growing or shrinking; that was the main advantage of the circular buffer and why I made that decision at the time. It still works fine and decouples read/write operations well, just buffering the slight processing deviations from other tasks to keep the average processing non-blocking; that was my goal. I never saw a buffer overrun, although that would be allowed too: simply grow once and stay at double the space. I could think of an optimized list to buffer the single fragments, but that would need to create/destroy memory each time. In the circular buffer I just need to move the pointers to the new memory position, which is very effective.

    The analysis of the data is done after the circular buffer (for decoupling). When reading from the circular buffer, the data fragments could be torn, like 1, 2-and-a-half ==> not fully received until 3 yet. So I have to wait for more data and retry. For this analysis I use an external TBytes buffer, which could be replaced by a linked list, etc. On deeper consideration, maybe the analysis could be done "inside" the circular buffer itself, saving the cost of copying memory. The problem is probably the "circular" design, which doesn't allow effective, linear memory searches.

    It could be reasonable to replace the circular buffer with a two-section linear buffer of TBytes, like a "framebuffer" where one section reads until a frame is ready, then switches to processing, while the second section reads until the next frame is finished (a rough code sketch of this idea follows below):

    First, partly received (a full frame consists of 1,2,3):
    (1) ==> "1,2"   --> analyse: incomplete
    (2) ==> ""

    Then section one runs full until 1,2,3, where the TBytes can always be quickly analysed from 0 ... last as one linear memory piece:
    (1) ==> "1,2,3" --> analyse: complete, switch to processing the buffer
    (2) ==> ""

    Or, when 1,2,3 is found and maybe the next fragment 4 is already in the first section, the overhead could be copied to the second section, so that section (1) can be processed while (2) becomes the new write buffer:
    (1) ==> "1,2,3,4" --> analyse: complete, copy the overhead to the 2nd buffer, switch to processing
    (2) ==> ""

    While new fragments "5" may arrive:
    (1) ==> "1,2,3" --> processing
    (2) ==> "4,5"   --> buffering

    The frames may use various data representations, from fixed length over SOT/EOT signatures to simply timeout based; that is the problem. There is no clear protocol to rely on, so I have to do a specific analysis every time data arrives (also extendable in the future). Anyway, the real problem seems to me to be the analysing, not the storage of the fragments itself. The analysis might fail (not fully received) and then has to wait for more data, so the analysing process needs to be repeated from time to time. Thanks for your input, it gives me some different views and thoughts; I think this is still not the final solution yet, but maybe closer.
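
    To make the two-section idea a bit more concrete, here is a rough, untested sketch; the names (TTwoSectionBuffer, Append, TryTakeFrame, TFrameProbe) are made up for illustration, and the actual frame detection is left as a callback:

        type
          // Returns the length of a complete frame at the start of Data, or -1 if incomplete.
          TFrameProbe = reference to function(const Data: TBytes): Integer;

          TTwoSectionBuffer = class
          private
            FActive: TBytes;   // section currently being filled and analysed
            FNext: TBytes;     // section that collects the overhang of the next frame
          public
            procedure Append(const AChunk: TBytes);
            function TryTakeFrame(const AProbe: TFrameProbe; out AFrame: TBytes): Boolean;
          end;

        procedure TTwoSectionBuffer.Append(const AChunk: TBytes);
        var
          OldLen: Integer;
        begin
          OldLen := Length(FActive);
          SetLength(FActive, OldLen + Length(AChunk));
          if Length(AChunk) > 0 then
            Move(AChunk[0], FActive[OldLen], Length(AChunk));
        end;

        function TTwoSectionBuffer.TryTakeFrame(const AProbe: TFrameProbe; out AFrame: TBytes): Boolean;
        var
          FrameLen, Rest: Integer;
        begin
          FrameLen := AProbe(FActive);
          Result := FrameLen >= 0;
          if not Result then
            Exit;  // not fully received yet, wait for more data and retry
          // hand out the complete frame ...
          AFrame := Copy(FActive, 0, FrameLen);
          // ... copy the overhang (e.g. fragment "4") into the second section ...
          Rest := Length(FActive) - FrameLen;
          SetLength(FNext, Rest);
          if Rest > 0 then
            Move(FActive[FrameLen], FNext[0], Rest);
          // ... and swap: the second section becomes the new write buffer.
          FActive := FNext;
          FNext := nil;
        end;

    Everything stays linear TBytes, so the analysis can run from 0 to the last index in one piece, and only the overhang is copied once per frame.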
  14. Rollo62

    [Android] How to capture characters with a bluetooth device

    I think the response is workable for you, no matter where the problem comes from. As long as you see a double zero at the end, you know it's the "0-version", otherwise the normal version. The only question is maybe: do all Android devices behave the same, or is this caused by a special driver of, e.g., LG devices? Have you checked with other phones too, and/or with other BT keyboards?
  15. It's not very critical, like a maximum of maybe 1 Kbps, but the single transmissions may be processed in several different places, like DB store, listview, chart, ... My tests so far didn't show much performance difference between TBytes and RawByteString, but that's hard to test, as I've tried to check the whole signal chain. Yes, a different design, like the linked list proposal from @FPiette, comes closer to my decision. Since that means a bigger rewrite than just a buffer change, I will have to do that later. So far it's working, and I can stay with TBytes, which also makes sense with the binary data. From a philosophical standpoint I could put it like this: if the data contains only pure ASCII, then RawByteString could make sense; if the data may contain binary besides ASCII, then maybe the whole data SHOULD BE considered binary, and TBytes is the right choice.
  16. Yes, it is TBytes, but not well optimized. That's why I'm looking for more suitable options. I just replaced TBytes with a custom type based on RawByteString, and all unit tests have been running fine so far. Maybe tomorrow I can get a first impression (and measurements) of the performance.
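
    For the record, the custom type is roughly along these lines; this is only a sketch with made-up names (TRawBuffer, BytesToRawBuffer), not the exact code:

        type
          TRawBuffer = type RawByteString;  // distinct type, so helpers can be attached later

        // Copies the raw bytes 1:1 into a RawByteString, without any codepage conversion.
        function BytesToRawBuffer(const AData: TBytes): TRawBuffer;
        begin
          SetLength(Result, Length(AData));
          if Length(AData) > 0 then
            Move(AData[0], Result[1], Length(AData));
          // optionally: SetCodePage(RawByteString(Result), $FFFF, False)
          // to mark it as "no particular codepage" so later concatenations don't convert
        end;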
  17. I will check; it also said that it should have no codepage unless you explicitly set one.
  18. Then you don't know me well, I make permanent changes to all my core code libraries on a regular basis. Only I have too little sleep sometimes. Even this change wouldn't be necessary, as it works right now and only degrades in performance under a few rare conditions. But I'm willing to prepare for future devices and to increase performance for my customers.
  19. Of course I know it, thanks for that nice piece of software. Yes, TCP/UDP has very similar issues. Additionally, in my configuration I cannot rely on delimiters; as I said, some protocols use SOT/EOT delimiters, others use a fixed-length method. If I only had to follow one protocol, that would be easy, but I need to follow several different protocols. Yes, I could use a special handler for each protocol near the source, that's true (see the sketch below). I had already considered using a "class wrapper" for such tasks, besides the simple data types. Originally I thought about a wrapper for the simple types in the first place, but in the end linked lists, FIFOs or other classes could be used as well. I have a working solution based on TBytes right now, and I'm looking for fast/easy optimizations, so maybe I need to move through more than one step: 1. RawByteString, 2. linked list. Since the second is a bigger effort to change at the moment, I will have to postpone that to a future change. But thanks for the suggestions anyway. I know that this is of course a very powerful method, transferring referenced types, which reduces memory copying a lot.
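
    To sketch what such a per-protocol handler could look like; the names (TFrameDetector, TSotEotDetector, TFixedLengthDetector) and the SOT/EOT byte values $02/$03 are assumptions, only meant as an illustration:

        type
          // One detector per protocol: reports the length of a complete frame
          // at the start of the buffer, or -1 if more data is needed.
          TFrameDetector = class abstract
            function FrameLength(const Data: TBytes): Integer; virtual; abstract;
          end;

          TSotEotDetector = class(TFrameDetector)
            function FrameLength(const Data: TBytes): Integer; override;
          end;

          TFixedLengthDetector = class(TFrameDetector)
            FrameSize: Integer;
            function FrameLength(const Data: TBytes): Integer; override;
          end;

        function TSotEotDetector.FrameLength(const Data: TBytes): Integer;
        var
          I: Integer;
        begin
          // assumes SOT = $02 at the start and EOT = $03 at the end of a frame
          Result := -1;
          if (Length(Data) = 0) or (Data[0] <> $02) then
            Exit;
          for I := 1 to High(Data) do
            if Data[I] = $03 then
              Exit(I + 1);  // frame spans SOT .. EOT inclusive
        end;

        function TFixedLengthDetector.FrameLength(const Data: TBytes): Integer;
        begin
          if Length(Data) >= FrameSize then
            Result := FrameSize
          else
            Result := -1;
        end;

    The analysis loop then just asks the detector that belongs to the current source whether a complete frame is already in the buffer.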
  20. I have my land line connection for a fast internet connection, TV, etc., and the land line phone is a kind of free by-product. If I don't want people to call me, I give them my land line number.
  21. Sorry for the confusion, I meant the original, old ASCII character set of usually printable characters >= 32 and < 127, plus controls like CR, LF, Tab, ... https://theasciicode.com.ar/ascii-printable-characters/space-ascii-code-32.html To me this is "ASCII 0 ... 127"; sorry, I have no better name for that set, maybe it's compatible with CP_437. This kind of character mostly does no harm, and in memory it is 1:1 byte compatible. I see such character sets mainly produced by embedded devices, and what I like most is that it is a human-readable "string".
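
    In code terms, the set I mean is roughly this (just a throwaway helper, the name is made up):

        // True for printable ASCII (32..126) plus the usual controls Tab, LF, CR.
        function IsTextualAscii(B: Byte): Boolean;
        begin
          Result := ((B >= 32) and (B < 127)) or (B in [9, 10, 13]);
        end;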
  22. Yes, maybe, but then the list items will include incomplete chunks, and I have to tear them apart and re-construct them later. When the source data arrives I don't have enough time to analyse it, just push and go. If I push to lists, then something like this may happen (see the sketch below):

    ListMessages:
    - 11111
    - 111       //<== OK, this I can combine easily = 11111111
    - 222
    - 222222
    - 2223333   //<== This I have to combine and tear apart, so that 3333 can be used later = 222222222222 + 3333
    ...
    - 33333333  //    together with the stored 3333 = 333333333333

    Yes, it's possible, but a little more tricky.
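
    What I mean by combining and tearing apart, as a rough sketch; ConsumeChunk, FindCompleteFrame and ProcessFrame are made-up names and the accumulator logic is simplified:

        var
          Pending: TBytes;  // holds the not-yet-complete tail between list items

        // Appends one list item to the pending data and extracts all complete frames.
        procedure ConsumeChunk(const AChunk: TBytes);
        var
          FrameLen: Integer;
          Frame: TBytes;
        begin
          Pending := Pending + AChunk;  // concat of dynamic arrays (Delphi XE7+)
          FrameLen := FindCompleteFrame(Pending);  // -1 while the frame is still incomplete
          while FrameLen >= 0 do
          begin
            Frame := Copy(Pending, 0, FrameLen);        // e.g. "222222222222"
            Pending := Copy(Pending, FrameLen, MaxInt); // keep the torn rest, e.g. "3333"
            ProcessFrame(Frame);
            FrameLen := FindCompleteFrame(Pending);
          end;
        end;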
  23. Right, but as I explained, I need to decouple source and processors with a circular buffer. The source can provide data quickly, from a thread, while the processors analyse and send the decoded data to different consumers. That's why I'm looking for an intermediate buffer solution to pre-process the data until a complete chunk can be identified. I have sources where a transmission is not complete in one step; only the next transmission(s) may complete one data chunk.
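
    Just as an illustration of the decoupling itself (not my actual circular buffer code): the hand-over from the source thread to the processing side could also be sketched with the stock TThreadedQueue, where the producer pushes raw chunks and the consumer pulls them and feeds the frame analysis; OnDataReceived, ProcessingLoop and ConsumeChunk are made-up names:

        uses
          System.SysUtils, System.SyncObjs, System.Generics.Collections;

        var
          Queue: TThreadedQueue<TBytes>;
          // somewhere at startup: Queue := TThreadedQueue<TBytes>.Create(1024);

        // Source thread: just push the received chunk and go.
        procedure OnDataReceived(const AChunk: TBytes);
        begin
          Queue.PushItem(AChunk);
        end;

        // Processing thread: pull chunks and feed them into the frame analysis.
        procedure ProcessingLoop;
        var
          Chunk: TBytes;
        begin
          while Queue.PopItem(Chunk) = wrSignaled do
            ConsumeChunk(Chunk);  // e.g. the reassembly sketch from above
        end;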
  24. Yes, from Marco's guide this is exactly what I'm looking for regarding RawByteString. If RawByteString is still modern, then I will choose it as the preferred solution. Since it has been in the Delphi environment for that long, it is very likely to stay there in the future too. I will try it and check how it behaves and compares to the pure TBytes solution. Thanks to all for putting your arguments into this interesting discussion.
  25. @Kryvich Thanks, but it's from 2008. Is it still 100% valid after all the FMX, mobile, ARC, ... additions?