Dalija Prasnikar

Posts posted by Dalija Prasnikar


  1. 45 minutes ago, David Heffernan said:

    Which is why accessing via Items is to be preferred unless there are very well understood extenuating circumstances, for instance, measurable and significant performance gains. 

     

    I think you are blowing this out of proportion. Understanding that accessing the underlying array does not have range checking is not nuclear physics. Nor is making sure that you don't screw up your indices. If you expect some piece of code to be performance-critical for any reason (called a lot, or operating on large data sets), you don't really have to measure every two lines of code to know how to make that part run faster.

     

    45 minutes ago, David Heffernan said:

    But in such a case I'd probably look to use a different collection. 

    Why?

     

    I know that TList is not the most brilliant piece of code out there, but it is not that bad either. And iterating through the underlying dynamic array (the List property) will not be any faster if you use some other collection.


  2. 28 minutes ago, Mike Torrettinni said:

    What would be a situation where accessing through .List is not advisable?

    I assume that if I access TList, I control the content anyway, so .List should be safe in all cases, no?

    Access through Items (GetItem) performs a range check and raises an exception if the index is out of range. Using List gives you direct access to the underlying dynamic array (whose capacity may be larger than the number of items), and there is no range check in place.

     

    If the lack of range checking does not bother you - because you know that you are looping through the correct index range - then you can safely use List.
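
    A minimal sketch of the difference (the variable names are made up; only standard Generics.Collections is used):

      program ListAccessSketch;

      {$APPTYPE CONSOLE}

      uses
        System.Generics.Collections;

      var
        Numbers: TList<Integer>;
        I, Sum: Integer;
      begin
        Numbers := TList<Integer>.Create;
        try
          Numbers.AddRange([1, 2, 3]);

          // Items (the default property) goes through GetItem, which range checks
          // and raises an exception for an invalid index:
          // Writeln(Numbers[5]);

          // List exposes the underlying dynamic array directly - no range check,
          // and its length follows Capacity rather than Count, so the index must
          // stay within 0..Count - 1:
          Sum := 0;
          for I := 0 to Numbers.Count - 1 do
            Inc(Sum, Numbers.List[I]);
          Writeln(Sum);  // 6
        finally
          Numbers.Free;
        end;
      end.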

    • Thanks 1

  3. 18 minutes ago, Rollo62 said:

    I have not really checked in detail yet what the internal difference between RawByteString and AnsiString is;
    I assume it is mainly the codepage parts.

    Some common RTL string manipulation functions have several overloads - including a RawByteString one. If you use such a function on an AnsiString, you will trigger an automatic compiler Unicode conversion. Using RawByteString will just use the appropriate overload.

    There might be other differences; I haven't been fiddling around with 8-bit strings (writing new code beyond what I already have) for at least 5 years now.
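
    The codepage part is the visible difference: RawByteString keeps whatever 8-bit data you assign to it as-is, while AnsiString converts to its own codepage. A small sketch, using only standard System routines (the expected output values are assumptions based on typical settings):

      program CodePageSketch;

      {$APPTYPE CONSOLE}

      var
        Utf8: UTF8String;
        Raw: RawByteString;
        Ansi: AnsiString;
      begin
        Utf8 := UTF8String('Zdravo');
        Raw := Utf8;   // no conversion - the bytes and the UTF-8 codepage are kept
        Ansi := Utf8;  // converted to the system ANSI codepage
        Writeln(StringCodePage(Utf8));  // 65001
        Writeln(StringCodePage(Raw));   // 65001 - RawByteString adopts the source codepage
        Writeln(StringCodePage(Ansi));  // e.g. 1252, depending on the system locale
      end.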

     

    18 minutes ago, Rollo62 said:

    In case it works and solves my issues, I could live with that, but still there might be better solutions around.

    None that I have found so far. I have been using RawByteString for mixed ASCII/UTF-8/binary data for 10 years now. No problems.

    • Like 2

  4. 2 hours ago, Rollo62 said:

    but as AnsiString was deprecated and removed once from modern platforms, this leaves a bad taste.
    It seems that it came back only on massive complaints from the community.

    Even if AnsiString is back, you can safely use RawByteString and UTF8String.

     

    The original version of the NextGen compiler didn't have support for 8-bit strings. Support was not brought back just because of community demands, but because UTF-8 is a commonly used Unicode encoding, especially on Linux. So RawByteString and UTF8String were reintroduced in NextGen as a step toward supporting Linux.

     

    Now that NextGen is gone, you can also use AnsiString, but that type is really a legacy type nowadays and it makes no sense to use it beyond legacy code.

    • Thanks 1

  5. TBytes is the preferred type for manipulating binary data.

     

    However, it lacks some features and behaviors that string has: copy-on-write, easy concatenation, string manipulation routines (even if you have to copy-paste pieces of RTL code and adjust them for RawByteString, that is easier and faster than handling TBytes), and, most important of all, debugging support. If those features are needed, then RawByteString is the only other option you have left. Which codepage to use is debatable - probably 437 (I have been using UTF-8 because the textual data I am extracting from such strings is always in UTF-8 format, so this makes handling easier).

     

    The most important thing is that you should avoid calling ANY function that does a Unicode transformation. Usually that means some common Delphi string functions will be out of reach and you will need to write your own. Of course, the whole thing needs to be covered with tests to prevent accidental mistakes when you change pieces of code. With CP 437 you might get away with transformation problems, but any unnecessary conversion also means a performance loss.
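
    A hypothetical helper in that spirit (RawStartsWith is not an RTL routine, just an illustration of rolling your own): it compares raw bytes directly, so the data never passes through a UnicodeString.

      function RawStartsWith(const Prefix, S: RawByteString): Boolean;
      var
        I: Integer;
      begin
        // byte-by-byte comparison - no codepage lookup, no Unicode conversion
        Result := Length(Prefix) <= Length(S);
        if Result then
          for I := 1 to Length(Prefix) do
            if Prefix[I] <> S[I] then
              Exit(False);
      end;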

     

    Now I will go sit in my corner waiting to be tarred and feathered. 


  6. 10 hours ago, Anders Melander said:

    Squash? But that's a rebase thing, right? So how can you squash without rebase?

    merge --squash (if I am not mistaken; I have never used it since I am always rebasing)

    10 hours ago, Anders Melander said:

    Are you saying that you'd want the whole 1000 'FooBar' commits to appear in the 'master' commit history?

    I can see that if you don't have 'FooBar' pushed to the remote then the history will be lost, in which case it makes sense, but if you have pushed it, then there's no need to duplicate that history in 'master'.

    My remotes never have feature branches; if they do, those are only secondary remotes, not the main ones.

     

    So, yes, I want that history preserved. Rebasing and then merging without fast-forward does that. I usually don't commit experimental stuff in branches that will end up being merged. And if I do, it is because such commits contain approaches that are not applicable at the moment, but I want them preserved for the future because requirements can change or some missing compiler/framework features might be implemented.

     

    Sometimes some cruft will end up in the repo, but there is not enough of it to be a problem.


  7. 24 minutes ago, Anders Melander said:

    I'm not sure I agree with it.

    As I read it, and I may well be misunderstanding it, it assumes that when I merge my 1000 commit feature branch into master that I want all 1000 commits to appear in the master commit history. Well I definitely don't. The 1000 commits are the sausages being made and I only want the finished sausage in the master commit history.

     

    That part is purely optional. You can do as you please. But when you are merging without rebasing, you also get the whole sausage, unless you squash.

     

    24 minutes ago, Anders Melander said:

    There's also this:

    What it should have said is: never rebase a public branch.

    It's perfectly safe and normal to rebase a private branch on a public branch. Assuming on=onto. Maybe that's what he meant.

    Yes, "never rebase a public branch" is clearer. That is what was meant by that sentence: never call rebase when you are on a public branch - in other words, when the active branch is a public branch.

    • Like 1

  8. 2 hours ago, Rollo62 said:

    Just to clarify: what do you mean exactly by "check every change"?
     

    That means going through the diff list and making sure that all changes are expected changes - changes I intended to make (that does not mean I am rereading every line of code).

     

    For instance, the first step is making sure that only files that had to change are changed - if I am making changes in code and I am not touching the design, and a dfm (fmx) file pops up in the changed files, I will not commit those files and will revert the changes after a quick inspection. Next, I will go through the changes I expected and did make. If I was working on class xyz and something is also changed in abc, well, that means something is wrong. Such things don't happen often, but sometimes you mess things up during debugging or leave unwanted artifacts.

     

    2 hours ago, Rollo62 said:

    What level of granularity do you recommend for commits in your note above ?

     

    I don't commit every two lines of code, but logical tasks. If a task is larger, then it can be spread across several commits.

     

    • Like 5

  9. 12 hours ago, David Schwartz said:

    Here at work we're advised to do a 'git pull' frequently throughout the day, and especially before and after switching branches.

     

    I guess this is because we've got a lot of quick-turn work rather than long-term dev work.

     

    Does anybody else do that?

    If I am working on a repo in a team, then I pull often; otherwise, no - because I know there is nothing to pull :classic_biggrin:

     

    How I use Git (modified git flow)

    • I almost always use a GUI client - SourceTree - it was the best client on OSX at the time, and I got used to it, so I use it on Windows, too. It has its quirks (every client does), but I never shoot myself in the foot with it
    • master is always production ready
    • develop is the development branch
    • both master and develop live on the remote (and they are the only ones that do)
    • each feature is a new branch, usually very short-lived - never pushed to the remote, unless it lives too long and has to be shared for some reason, and for that I use a temporary remote

    Basic workflow (if working on team)

    • before creating a feature branch, pull from the remote (there will be no merges, because the repo is clean - also, I always pull with rebase)
    • create the feature branch from develop
    • a) work on the branch, commit...
    • b) if I think there is something new in the repo and I have committed all my feature branch changes, switch to develop and pull with rebase, same with master
    • c) if there were changes, rebase the feature branch on develop, check that everything works - repeat from a)
    • when work on the feature branch is done, pull and rebase the feature on top of any changes, check that it works (if checking takes long enough, and depending on potential conflicts with others, pull again, rebase... check...); when all is completed, merge the feature branch into develop and push to the remote

     

    If you have problems with the workflow, you can set up a demo repo, use it to figure out the optimal workflow, and then apply that to the real one.

     

    If you ever feel like you have messed something up locally, rename the repo folder (or otherwise back up the files you have been working on), clone from the remote, create a new feature branch, and copy your changed files back.

     

    I have said it before, but it is worth repeating: unless you are a GIT DIE HARD, don't use the console.

    • Like 1

  10. 11 hours ago, Joseph MItzen said:

    I imagine they did this to get around the memory management issues with the standard classes. 

    Nope. Memory management is a completely different castle.

     

    Custom managed records are about the ability to run automatic, custom initialization and/or finalization on records, enabling easier implementation of some concepts like nullables. Without automatic initialization/finalization you can still use records, but there are trade-offs: you either need to use manual initialization/finalization, which is error-prone, or you need to implement some additional safeguard mechanisms that are more complex and commonly slower than using a custom managed record.
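
    A minimal 10.4-style sketch of the automatic hooks (TBuffer is a made-up type):

      type
        TBuffer = record
          Data: Pointer;
          Size: Integer;
          class operator Initialize(out Dest: TBuffer);
          class operator Finalize(var Dest: TBuffer);
        end;

      class operator TBuffer.Initialize(out Dest: TBuffer);
      begin
        // runs automatically whenever a TBuffer variable comes into scope
        Dest.Data := nil;
        Dest.Size := 0;
      end;

      class operator TBuffer.Finalize(var Dest: TBuffer);
      begin
        // runs automatically when the variable goes out of scope
        FreeMem(Dest.Data);
      end;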

    • Like 1

  11. I have to say I have trouble visualizing your actual problem with the Git workflow (or any workflow). Also, whatever problem you do have, I don't think it is related to Git. If your problem is merges, then without Git you would still have the problem; you just would not know you have it until something didn't work correctly.

     

    Having everything in a single repo is a bit odd, but if the parts are really independent then you should not have any issues.

     

    Short-lived branches and features are actually better than long-term feature branches, as they create fewer conflicts - I am not talking about develop and master in that context, but about creating a feature branch that takes a long time to develop and that changes code other people might change in the meantime.

     

    Your main problem seems to be having conflicts in short-term features that sound like they should be independent. If that is true, then your tickets are not actually independent, and working on multiple tickets means adding changes to the same files. Even though Git is good at merging code, that is usually a recipe for disaster (or at least constant merge struggles).

     

    To fix that, you should split the offending parts of code into smaller pieces (files), because that will decrease the chance of more people working on the same code at the same time. If that is not possible, then you need to introduce some other level of discipline. If there is some common file that needs to be updated, you need to coordinate that change between your developers. Probably the best approach is that such common files are not fixed inside a ticket's feature branch, but as a separate feature, then merged back to the common base branch (develop) from which everyone can get the changes.

     

    Next, when I say merge a branch, that is not just a straightforward merge. For every merge, you need to rebase on top of the branch you are going to merge into, make sure your code still works as intended, and then merge (if there are changes in that branch in the meantime, you need to rebase again). The same goes for getting changes - if you know there is some change in the base branch you need to take, rebase your feature branch on top of it to apply those changes.

     

    For any more detailed advice, you would need to be more exact in describing the problem - for instance, what is the exact scenario where you have a conflict: ticket A changes files 1, 2 and 3, ticket B changes files 1, 4 and 5, and why you cannot separate the changes to file 1, or similar.


  12. Don't use AnsiString if at all possible. Only if you exclusively work with the 7-bit ASCII subset can you safely use AnsiString. The reason is that AnsiString - Unicode conversion is potentially lossy: you can lose original data.

     

    Using 8-bit strings instead of 16-bit strings can be faster under some circumstances, and they also use less memory. But if you have a use case where 8-bit string performance and memory consumption are needed, you should use UTF8String instead of AnsiString.
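
    A minimal sketch of the lossy round trip (the exact behavior depends on the system ANSI codepage; the comments assume a Western codepage such as 1252):

      program LossySketch;

      {$APPTYPE CONSOLE}

      var
        U: string;
        A: AnsiString;
        U8: UTF8String;
      begin
        U := 'Ж';              // a character outside most ANSI codepages
        A := AnsiString(U);    // lossy - stored as '?' when the codepage cannot represent it
        U8 := UTF8String(U);   // lossless - UTF-8 can encode any Unicode character
        Writeln(string(A));    // prints '?' - the original data is gone
        Writeln(string(U8));   // prints 'Ж'
      end.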

    • Like 2
    • Thanks 1

  13. 15 hours ago, David Schwartz said:

    I'm wondering if anybody here knows of any public discussion groups (of whatever kind) that focus on this kind of topic?

    I don't know of any particular forums.

     

    But, from experience, Git workflow can also be quite dependent on the tool set and the existing code base. Some types of files can create huge merge conflicts, so applying a workflow that works with one tool set may hit you on the head with another. For instance, Xcode storyboards are a nightmare to merge (you can even create unsolvable conflicts as a single developer), while handling Android Studio XML UI files is almost trivial.

     

    So discussing it here is as good a place as any, because you are interested in managing the more abstract workflow, and once you have that figured out, finding the correct Git commands is not as hard.


  14. 1 hour ago, David Schwartz said:

    Nullables are in the works for Delphi, according to one or two published roadmaps now. Trinary operators ... nowhere to be seen. (I guess "trinary" is the correct term, not "ternary".)

    The only mention of nullables was in the context of custom managed records, which are a prerequisite for implementing nullables.

     

    From that perspective we kind of have nullable support right now in 10.4. But that is far from having full compiler support for nullable types. Nullable types include way more than just defining a common RTL type (we don't even have that in 10.4), and one of the things such support also implies is a ternary operator. Additionally, a ternary operator has a much broader usage scope than nullables themselves and is thus a more interesting, worthwhile feature.
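
    A rough sketch of the kind of record that custom managed records make possible - TNullable<T> and its members are made up here, not an RTL type, and compiler support details vary between releases:

      type
        TNullable<T> = record
        private
          FHasValue: Boolean;
          FValue: T;
        public
          class operator Initialize(out Dest: TNullable<T>);
          class operator Implicit(const Value: T): TNullable<T>;
          function GetValueOrDefault(const Default: T): T;
          property HasValue: Boolean read FHasValue;
        end;

      class operator TNullable<T>.Initialize(out Dest: TNullable<T>);
      begin
        // automatic initialization makes every instance start out as "null"
        Dest.FHasValue := False;
      end;

      class operator TNullable<T>.Implicit(const Value: T): TNullable<T>;
      begin
        Result.FValue := Value;
        Result.FHasValue := True;
      end;

      function TNullable<T>.GetValueOrDefault(const Default: T): T;
      begin
        if FHasValue then
          Result := FValue
        else
          Result := Default;
      end;

    Without compiler-level nullable support you still write explicit HasValue checks or GetValueOrDefault calls; a ternary (or null-coalescing) operator is what would make such code read naturally.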

    • Like 2

  15. On 8/23/2020 at 3:58 AM, David Schwartz said:

    Interestingly, we have to suffer through this mess with lengthy if...then...else tests waiting for "nullable" values to be added to the language (after 25+ years) rather than stoop so low as to implement a simple construct that solves the problem very elegantly and simply, but is regarded as too offensive to "Pascal purists" who think it's sacrilegious to steal such a construct from c/c++ for ANY reason!

    Many features can be nicely added to Pascal and fit well.

     

    https://docs.elementscompiler.com/Oxygene/Delphi/NewFeatures/

     

    The people you talk about are not really Pascal purists, but rather people who can't be bothered to learn something new. Delphi never suffered from purists... even its predecessor, Turbo Pascal, was always innovative and expanded on standard Pascal. We would not have Pascal with objects otherwise.

    On 8/23/2020 at 3:58 AM, David Schwartz said:

    Yes, we'll see nullable values added to Delphi long before a trinary operator.

     

     

    Probably not. A ternary operator is easier to implement than full nullable support. But as with many other compiler features, someone needs to implement all that - that is the weakest point and the core issue, not "purists" as such.

    • Like 2

  16. 46 minutes ago, Cristian Peța said:

    If you are not in the first wagon then yes, others ask for you. :classic_laugh:

    I never said I didn't ask any questions. Sometimes you need to ask, sometimes you don't. Sometimes you don't get an answer... or the answer is not exactly the one you expected (hoped for), but in such cases forums would not help much either.


  17. 18 minutes ago, Lars Fosdal said:

    A forum is a place for reasoning and discussion.

    SO, not so much.

     

    What do you prefer?

     

    I guess most people go to SO to get an answer, provided one exists, and if not - use a forum.

    Edit: I gave up on SO for asking questions a long time ago.

    If you want to discuss, then a forum is the right place; if you have a specific question, then SO is the right place to go.

     

    I would not say it is about preference, but about what you need at a particular moment.

     

    Besides Delphi I use other tools for development, most notably Android Studio and Xcode. I have never needed forums to solve my problems... I could find all the answers on SO and in the official documentation.

     

    Hanging out with other developers and discussing all kinds of things... well, Delphi forums and communities were more than enough for me.

    • Like 4

  18. 35 minutes ago, brummgrammierer said:

    Because I didn't find any existing issue, I have now filed RSP-30365, which describes the problem of 'Preserve line ends' off still leaving the first and last line of a file with <LF> as the terminating character.

    I voted for it, but I would not hold my breath... it will either be closed as won't fix or stay open for a long time.

     
