Everything posted by Dalija Prasnikar
-
Why is public class method marked as used, while it's not actually used?
Dalija Prasnikar replied to Mike Torrettinni's topic in Algorithms, Data Structures and Class Design
If you have many DLLs, the bloat is duplicated in each of them. Also, code size is not something that should be judged only from a disk-space perspective. How you deliver that code also matters: if you need to push frequent updates to a remote site with a poor or expensive Internet connection, then every byte counts. Literally.
Simple inlined function question
Dalija Prasnikar replied to Mike Torrettinni's topic in Algorithms, Data Structures and Class Design
The DelphiCon schedule is totally unrelated to any release. It is just an online Delphi conference. Previously we had CodeRage in about the same timeframe.
Extending string - How to call the default record helper?
Dalija Prasnikar replied to Incus J's topic in RTL and Delphi Object Pascal
There is a feature request for expanding helper functionality: https://quality.embarcadero.com/browse/RSP-13340
I don't understand this CONST record argument behavior
Dalija Prasnikar replied to Mike Torrettinni's topic in Algorithms, Data Structures and Class Design
I agree with the specific intent part, but not necessarily with always measuring (though this has a tendency to push the discussion in the wrong direction). Measuring performance bottlenecks is important, but once you know that a particular construct can cause a bottleneck, you might want to avoid using it in performance-critical code, even though you will not measure exactly how much of a bottleneck that construct is in that particular code. In other words, life is too short to measure everything; sometimes you just use what you know is the fastest code (with all downsides considered). I use Items when I am iterating over small collections in non-critical code. Yes, exposing internals is not the best practice in general. In the case of TList it is a necessary sacrifice, because performance requires it. Why not something else? Well, you can use something else, but you are out of options where the core RTL/VCL/FMX frameworks are concerned. For instance, the speed optimization of the FMX framework described in https://dalijap.blogspot.com/2018/01/optimizing-arc-with-unsafe-references.html includes direct array access through the List property. With that optimization in just a few places I went from "it barely scrolls" to "fast scrolling" in mobile applications. Yes, ARC also had a huge impact in this case, but even without the ARC compiler you can still get better GUI performance in mobile apps with such optimization, and on low-end devices every millisecond counts.
I don't understand this CONST record argument behavior
Dalija Prasnikar replied to Mike Torrettinni's topic in Algorithms, Data Structures and Class Design
I think you are blowing this out of proportion. Understanding that access through the underlying array has no range check implemented is not nuclear physics. Nor is making sure that you don't screw up your indexes. If you expect some piece of code to need to be performant for any reason (called a lot, or operating on large data sets), you don't really have to measure every two lines of code to know how to make that part of the code run faster. Why? I know that TList is not the most brilliant piece of code out there, but it is not that bad either. And iterating through the underlying dynamic array (the List property) will not be any faster if you use some other collection.
I don't understand this CONST record argument behavior
Dalija Prasnikar replied to Mike Torrettinni's topic in Algorithms, Data Structures and Class Design
It won't fully help you with accessing the TList List property. Since List can have more slots than the actual number of items stored, you can access an index near the end of the array that will not trigger a compiler range error, yet you will still be accessing something outside the TList Items range.
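A small example of the gap (the capacity and values are made up):

```delphi
program ListCapacityDemo;

{$APPTYPE CONSOLE}

uses
  System.Generics.Collections;

var
  L: TList<Integer>;
begin
  L := TList<Integer>.Create;
  try
    L.Capacity := 16;   // the underlying dynamic array now has 16 slots
    L.Add(1);
    L.Add(2);           // Count = 2
    // Index 5 is still inside the dynamic array, so compiler range checking
    // has nothing to complain about - but this slot holds no list item
    // (it will simply read as 0 here).
    Writeln(L.List[5]);
    // L[5] (Items[5]) would raise an exception instead.
  finally
    L.Free;
  end;
end.
```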
I don't understand this CONST record argument behavior
Dalija Prasnikar replied to Mike Torrettinni's topic in Algorithms, Data Structures and Class Design
Access through Items (GetItem) implements a range check and throws an exception if the index is out of range. Using List gives you direct access to the underlying dynamic array (whose capacity may be larger than the number of items), and there is no range check in place. If the missing range check does not bother you - because you know that you are looping through the correct index range - then you can safely use List.
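A minimal sketch of the difference (the list content is made up):

```delphi
program ItemsVsListDemo;

{$APPTYPE CONSOLE}

uses
  System.Generics.Collections;

function SumChecked(Numbers: TList<Integer>): Integer;
var
  I: Integer;
begin
  Result := 0;
  // Items[] (the default property) goes through GetItem, which validates
  // the index and raises an exception when it is out of range.
  for I := 0 to Numbers.Count - 1 do
    Inc(Result, Numbers[I]);
end;

function SumDirect(Numbers: TList<Integer>): Integer;
var
  I: Integer;
begin
  Result := 0;
  // List is the underlying dynamic array; there is no range check, so the
  // loop bound must stay within Count (the array itself may be longer).
  for I := 0 to Numbers.Count - 1 do
    Inc(Result, Numbers.List[I]);
end;

var
  L: TList<Integer>;
begin
  L := TList<Integer>.Create;
  try
    L.AddRange([1, 2, 3, 4, 5]);
    Writeln(SumChecked(L), ' ', SumDirect(L));   // 15 15
  finally
    L.Free;
  end;
end.
```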
Best type for data buffer: TBytes, RawByteString, String, AnsiString, ...
Dalija Prasnikar replied to Rollo62's topic in Algorithms, Data Structures and Class Design
Some common RTL string manipulation functions have several overloads, including a RawByteString one. If you use such a function on an AnsiString, you will trigger an automatic compiler Unicode conversion; using RawByteString will just use the appropriate overload. There might be other differences - I haven't been fiddling with 8-bit strings (writing new code, beyond what I already have) for at least 5 years now - but none that I have found so far. I have been using RawByteString for mixed ASCII/UTF-8/binary data for 10 years now. No problems.
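For example (a small sketch, assuming the RawByteString overload of System.Pos):

```delphi
program RawPosDemo;

{$APPTYPE CONSOLE}

var
  Raw: RawByteString;
begin
  Raw := RawByteString('header'#13#10'payload');

  // System.Pos has a RawByteString overload, so this stays in 8-bit land -
  // no implicit conversion to UnicodeString and back.
  Writeln(Pos(RawByteString(#13#10), Raw));   // prints 7

  // Feeding an AnsiString to a string-only routine (e.g. SysUtils.StringReplace)
  // would instead convert to UTF-16 and back, usually flagged by the
  // W1057 "Implicit string cast" warning.
end.
```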
Best type for data buffer: TBytes, RawByteString, String, AnsiString, ...
Dalija Prasnikar replied to Rollo62's topic in Algorithms, Data Structures and Class Design
Even AnsiString is back, but you can safely use RawByteString and UTF8String. The original version of the NextGen compiler didn't have support for 8-bit strings. They were not brought back just because of community demands, but because UTF-8 is a commonly used Unicode encoding, especially on Linux. So RawByteString and UTF8String were reintroduced in NextGen as a step toward supporting Linux. Now that NextGen is gone, you can also use AnsiString, but that type is really legacy nowadays and it makes no sense to use it beyond legacy code.
Best type for data buffer: TBytes, RawByteString, String, AnsiString, ...
Dalija Prasnikar replied to Rollo62's topic in Algorithms, Data Structures and Class Design
TBytes is the preferred type for manipulating binary data. However, it lacks some features and behaviors that strings have: copy on write, easy concatenation, string manipulation routines (even if you have to copy-paste pieces of RTL code and adjust them for RawByteString, that is easier and faster than handling TBytes), and the most important one, debugging. If those features are needed, then RawByteString is the only other option you have left. Which code page to use is debatable - probably 437 (I have been using UTF-8, because the textual data I extract from such strings is always in UTF-8 format, so that makes handling easier). The most important thing is that you should avoid calling ANY function that does a Unicode transformation; usually that means some common Delphi string functions will be out of reach and you will need to write your own. Of course, the whole thing needs to be covered with tests to prevent accidental mistakes when you change pieces of code. With CP 437 you might get away with transformation problems, but any unnecessary conversion also means a performance loss. Now I will go sit in my corner waiting to be tarred and feathered.
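As a small illustration of the approach - just a sketch, and BytesToRaw is a hypothetical helper, not an RTL routine:

```delphi
// Wrap an arbitrary byte buffer in a RawByteString without converting
// anything, so it can be concatenated, searched and inspected in the
// debugger like any other string.
function BytesToRaw(const Buffer; Size: Integer): RawByteString;
begin
  SetLength(Result, Size);
  if Size > 0 then
    Move(Buffer, Result[1], Size);
  // Tag the payload with a code page (437 here, or CP_UTF8 if the textual
  // parts are UTF-8) without touching the bytes - Convert = False.
  SetCodePage(Result, 437, False);
end;
```
Concatenating two such strings, or calling Pos on them, then stays entirely in 8-bit land with no hidden Unicode conversion.
-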
merge --squash (if I am not mistaken - I've never used it, as I am always rebasing). Remotes never have feature branches; if they do, it is only on secondary remotes, not the main ones. So, yes, I want that history preserved, and rebase with a non-fast-forward merge does that. I usually don't commit experimental stuff in branches that will end up being merged. And if I do, it is because such commits contain approaches that are not applicable at the moment, but I want them preserved for the future, because requirements can change or some missing compiler/framework feature might be implemented. Sometimes some cruft will end up in the repo, but there is not enough of it to be a problem.
-
That part is purely optional; you can do as you please. But when you merge without rebasing, you also get the whole sausage, unless you squash. Yes, "never rebase a public branch" is clearer - that is what was meant by that sentence. It means never call rebase while you are on a public branch; in other words, when the active branch is a public branch.
-
I found this answer on Stack Overflow about merging a branch to master. Maybe it will help: https://stackoverflow.com/a/29048781/4267244
-
git - do you 'pull' before/after switching branches?
Dalija Prasnikar replied to David Schwartz's topic in General Help
That means going through the diff list and making sure that all changes are expected changes - changes I intended to make (which does not mean I am rereading every line of code). For instance, the first step is making sure that only files that had to change are changed: if I am making changes in form code, not touching the design, and a dfm (fmx) file pops up in the changed files, I will not commit those files and will revert the changes after a quick inspection. Next I go through the changes I expected and did make. If I was working on class xyz and something also changed in abc, well, that means something is wrong. Such things don't happen often, but sometimes you mess things up during debugging or leave unwanted artifacts. I don't commit every two lines of code, but logical tasks. If a task is larger, then it can be spread across several commits.
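If you do work from the command line, the same kind of review can be done with plain Git (a sketch; the file name is just a placeholder):

```
git status                    # which files changed at all?
git diff                      # review every unstaged change, hunk by hunk
git add -p                    # stage only the hunks that belong to the task
git checkout -- MainForm.dfm  # drop an accidental change to a form file
git commit
```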
git - do you 'pull' before/after switching branches?
Dalija Prasnikar replied to David Schwartz's topic in General Help
I forgot another extremely important thing. I never commit everything blindly; I always check every change before committing, to avoid accidental errors and changes in files that should not have been changed.
git - do you 'pull' before/after switching branches?
Dalija Prasnikar replied to David Schwartz's topic in General Help
If I am working on a repo in a team, then I pull often; otherwise no, because I know there is nothing to pull.

How I use Git (modified git flow):
- I almost always use a GUI client - SourceTree. It was the best client on OSX at the time and I got used to it, so I use it on Windows, too. It has its quirks (every client has), but I never shoot myself in the foot with it.
- master is always production ready.
- develop is the development branch.
- Both master and develop live on the remote (and they are the only ones that live on the remote).
- Each feature is a new branch, usually very short lived - never pushed to the remote, unless it lives too long and has to be shared for some reason, but for that I use a temporary remote.

Basic workflow (if working in a team):
1. Before creating a feature branch, pull from the remote (there will be no merges, because the repo is clean - also, I always pull with rebase).
2. Create the feature branch from develop.
3. a) Work on the branch, commit...
   b) If I think there is something new in the repo and I have committed all my feature branch changes, switch to develop and pull with rebase; same with master.
   c) If there were changes, rebase the feature branch on develop and check that everything works - then repeat from a).
4. When work on the feature branch is done, pull and rebase the feature on top of any changes, and check that it works (if checking takes long enough, and depending on potential conflicts with others, pull again, rebase... check...).
5. When all is completed, merge the feature branch to develop and push to the remote.

If you have problems with the workflow, you can set up a demo repo and use it to figure out the optimal workflow, then apply it to the real one. If you ever feel like you have messed something up locally, rename the repo folder (or otherwise back up the files you have been working on), clone from the remote, create a new feature branch and copy your changed files back. I said it before, but it is worth repeating: unless you are GIT DIE HARD, don't use the console.
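A rough command-line equivalent of those steps, for reference (just a sketch - SourceTree drives the same operations through its UI; branch names are placeholders):

```
git checkout develop
git pull --rebase                # update develop without creating a merge commit
git checkout -b feature/xyz      # feature branch off develop

# ... work, commit ...

git checkout develop
git pull --rebase                # pick up the team's changes
git checkout feature/xyz
git rebase develop               # replay the feature on top of current develop
# build and test here

git checkout develop
git merge --no-ff feature/xyz    # merge, keeping the feature visible in history
git push
```
-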
Nope. Memory management is a completely different castle. Custom managed records are about the ability to have automatic, custom initialization and/or finalization of records, enabling easier implementation of concepts like nullables. Without automatic initialization/finalization you can still use records, but there are trade-offs: you either need to use manual initialization/finalization, which is error prone, or you need to implement some additional safeguard mechanisms that are more complex and commonly slower than using a custom managed record.
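A minimal sketch of what that looks like in 10.4+ (TAutoHandle is made up for illustration):

```delphi
unit AutoHandleU;

interface

uses
  Winapi.Windows;

type
  // Custom managed record: the compiler calls Initialize when a variable of
  // this type comes into scope and Finalize when it goes out of scope, so
  // the handle can never be used uninitialized and never leaks.
  TAutoHandle = record
  private
    FHandle: THandle;
  public
    class operator Initialize(out Dest: TAutoHandle);
    class operator Finalize(var Dest: TAutoHandle);
    property Handle: THandle read FHandle write FHandle;
  end;

implementation

class operator TAutoHandle.Initialize(out Dest: TAutoHandle);
begin
  Dest.FHandle := 0;
end;

class operator TAutoHandle.Finalize(var Dest: TAutoHandle);
begin
  if Dest.FHandle <> 0 then
    CloseHandle(Dest.FHandle);
end;

end.
```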
-
I have to say I have trouble visualizing your actual problem with the Git workflow (or any workflow). Also, whatever problem you have, I don't think it is related to Git. If your problem is merges, then without Git you would still have the problem; you just would not know you have it until something did not work correctly.

Having everything in a single repo is a bit odd, but if the parts are really independent then you should not have any issues. Short-lived branches and features are actually better than long-term feature branches, as they create fewer conflicts - I am not talking about develop and master in that context, but about creating some feature branch that takes a long time to develop and that changes code other people might change in the meantime.

Your main problem seems to be having conflicts in short-term features that sound like they should be independent. If that is true, then your tickets are not actually independent, and working on multiple tickets means adding changes to the same files - and (even though Git is good at merging code) that is usually a recipe for disaster (or at least constant merge struggles). To fix that, you should split the offending parts of the code into smaller pieces (files), because that will decrease the chance of more people working on the same code at the same time. If that is not possible, then you need to introduce some other level of discipline. If there is some common file that needs to be updated, you need to coordinate that change between your developers. Probably the best approach is that such common files are not fixed inside a ticket feature branch, but as a separate feature, then merged back to the common base branch (develop) from which everyone can get the changes.

Next, when I say merge a branch, that is not just a straightforward merge. For every merge, you need to rebase on top of the branch you are going to merge to, make sure your code still works as intended, and then merge (if there are changes in that branch in the meantime, you need to rebase again). The same goes for getting changes: if you know there is some change in the base branch you need to take, rebase your feature branch on top of it to apply those changes.

For any more detailed advice, you would need to describe the problem more exactly - for instance, what is the exact scenario where you get a conflict (like ticket A changes files 1, 2 and 3 and ticket B changes files 1, 4 and 5) and why you cannot separate the changes to file 1, or similar.
-
Use of Ansistring in Unicode world?
Dalija Prasnikar replied to Mike Torrettinni's topic in General Help
Don't use AnsiString if at all possible. Only if you work exclusively with the 7-bit ASCII subset can you safely use AnsiString. The reason is that AnsiString - Unicode conversion is potentially lossy: you can lose original data. Using 8-bit strings instead of 16-bit strings can be faster under some circumstances, and they also use less memory. But if you have a use case where 8-bit string performance and memory consumption is needed, you should use UTF8String instead of AnsiString.
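A small example of the lossy round trip (the exact output depends on your system code page):

```delphi
program LossyDemo;

{$APPTYPE CONSOLE}

var
  S: string;
  A: AnsiString;
  U: UTF8String;
begin
  S := 'Unicode: 日本語';    // text the active ANSI code page may not cover

  A := AnsiString(S);        // characters outside the code page are replaced
  U := UTF8String(S);        // UTF-8 can represent any Unicode text

  Writeln(string(A) = S);    // typically FALSE - data was lost
  Writeln(string(U) = S);    // TRUE - the round trip is lossless
end.
```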
where can I get general git process questions answered?
Dalija Prasnikar replied to David Schwartz's topic in General Help
I don't know of any particular forums. But, from experience, a Git workflow can also be quite dependent on the tool set and the existing code base. For instance, some types of files can create huge merge conflicts, so a workflow that works with one tool set may hit you on the head with another. For example, Xcode storyboards are a nightmare to merge (you can even create unsolvable conflicts as a single developer), while handling Android Studio XML UI files is almost trivial. So discussing it here is as good a place as any other, because you are interested in managing the more abstract workflow, and once you have that figured out, finding the correct Git commands is not as hard.
where can I get general git process questions answered?
Dalija Prasnikar replied to David Schwartz's topic in General Help
Not really. This is mostly an opinion-based question, unless there is an exact, very narrow problem you need to solve.
Boolean short-circuit with function calls
Dalija Prasnikar replied to Mike Torrettinni's topic in Algorithms, Data Structures and Class Design
The only mention of nullables was in the context of custom managed records, which are a prerequisite for implementing nullables. From that perspective we kind of have nullable support right now in 10.4, but that is far from having full compiler support for nullable types. Nullable types involve way more than just defining a common RTL type (we don't even have that in 10.4), and one of the things that such support also implies (assumes) is a ternary operator. Additionally, the ternary operator has a much broader usage scope than nullables themselves and is thus a more interesting, worthwhile feature.
Boolean short-circuit with function calls
Dalija Prasnikar replied to Mike Torrettinni's topic in Algorithms, Data Structures and Class Design
Many features can be nicely added to Pascal and fit well: https://docs.elementscompiler.com/Oxygene/Delphi/NewFeatures/ The people you talk about are not really Pascal purists, rather people who can't be bothered to learn something new. Delphi never suffered from purists... even its predecessor Turbo Pascal was always innovative and expanded features on top of standard Pascal. We would not have Pascal with objects otherwise. Probably not. A ternary operator is easier to implement than whole nullable support. But like with many other compiler features, someone needs to implement all that - that is the weakest point and the core issue, not "purists" as such.
Delphi implementation of Aberth–Ehrlich method and precision problem
Dalija Prasnikar replied to at3s's topic in Algorithms, Data Structures and Class Design
A guy is beating a dead horse with a stick. Another guy comes along: "Hey, your horse is dead, you need a new one." "Nah... I am probably holding the stick wrong."
August 2020 GM Blog post
Dalija Prasnikar replied to Darian Miller's topic in Tips / Blogs / Tutorials / Videos
I never said I didn't ask any questions. Sometimes you need to ask, sometimes you don't. Sometimes you don't get the answer... or the answer is not exactly the one you expected (hoped for), but in such cases the forums would not have helped more either.