
Dalija Prasnikar


Posts posted by Dalija Prasnikar


  1. 2 hours ago, Stefan Glienke said:

    But due to their currently super old LLVM version (hopefully they upgrade that once they are done on the C++ side which personally I could not care less about), they apparently had to turn off some important optimization steps which causes the binary produced by the Linux compiler to be approx as good as -O0 :classic_wacko:

    Imagine how slow compilation would be with those optimizations turned on.

     

    I hope that a newer LLVM could be faster, but I wouldn't get my hopes up https://discourse.llvm.org/t/if-llvm-is-so-slow-is-anything-being-done-about-it/75389


  2. 51 minutes ago, HaSo4 said:

    This is what I assume, but are there specific rules for specific permissions?

    Each permission affects different functionality, so it is only natural that there will be different rules, because the impact and the consequences of abuse can be vastly different. But guessing what the actual rules are, beyond what is written in the official documentation, would just be playing guessing games.


  3. 30 minutes ago, HaSo4 said:

    Shall I keep this as-is, if my app doesn't use it?

    I have found this article and also this, as recommendation to keep the manifest permissions clean.

     

    What is unclear to me, if maybe an unused (dangerous) permission may have any consequences for the app or the PlayStore.

    Are there any known policy links, that clarifies what may happen when those manifest permissions were not cleaned up?

    You should remove any permissions you don't need for application functionality. Applications are analyzed when submitted to the Play Store, and having more dangerous permissions declared can put your application under the magnifying glass: you may need to give additional explanations or comply with policies that applications without those permissions are not subject to. Some unneeded permissions can even cause your application to be rejected from the Play Store until you fix your submission.

     

    Official documentation is usually the best starting point for being up to date with requirements https://developer.android.com/guide/topics/permissions/overview and https://support.google.com/googleplay/android-developer/answer/9888170
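
     

    For example, a dangerous permission is just a single line in AndroidManifest.xml (in Delphi it can also be unchecked under Project > Options > Application > Uses Permissions). A minimal sketch, using location as an illustrative permission; removing the line removes the permission:

    <!-- Remove this line if the application never uses precise location -->
    <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION" />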

     

     


  4. 5 minutes ago, Lajos Juhász said:

    You can try with your XE3 serial number to activate.

    At that time there were no single license keys covering multiple versions. You would need a separate key for XE5.

     

    Serial keys should be listed at https://my.embarcadero.com/ But I don't know whether all keys are listed or just the ones that were activated at some point.

     

    Users that purchased XE3 and had an active subscription would have received an XE4 and very likely an XE5 license (XE5 was released within a year and a few days of XE3's release, so only those that bought XE3 immediately and did not prolong their subscription would not have received an XE5 license).


  5. 5 minutes ago, Sherlock said:

    ("well written code is self explanatory")

    The only self-explanatory code is trivial code. In other situations there is no such thing as self-explanatory code, especially if it also needs to be performant and thread-safe.

     

    I have written tons of self-explanatory code for which I now have no idea what I was smoking at the time. Sometimes, after a lot of digging, I can remember why it was written like that and why it should stay that way, but mostly not. So I am not sure whether it was just crappy code from the times when I didn't know better, or whether there are valid reasons behind the awkward design choices.

    • Like 9
    • Haha 2

  6. I cannot comment on whether the deadlock trigger is too sensitive, as I haven't explored that part of the code and its behavior much, nor on whether expanding the number of threads is the most appropriate solution to the deadlock. 

     

    However, you are using a separate pool for your parallel for loop, so there is no danger of it deadlocking. You just need to set UnlimitedWorkerThreadsWhenBlocked to False and that pool will be restricted to the number of threads you have specified.

     

    function TThreadUtils.GetLoadflowThreadPool: TThreadPool;
    begin
      // Double-checked locking: the fast path skips the lock once the
      // pool has been created.
      if not Assigned(LoadflowThreadPool) then
      begin
        aThreadLock.Enter;
        try
          if not Assigned(LoadflowThreadPool) then
          begin
            LoadflowThreadPool := TThreadPool.Create;
            // Pin both minimum and maximum to the same value so the pool
            // always runs with a fixed number of worker threads
            LoadflowThreadPool.SetMinWorkerThreads(TThreadPool.Default.MaxWorkerThreads div 2);
            LoadflowThreadPool.SetMaxWorkerThreads(TThreadPool.Default.MaxWorkerThreads div 2);
            // Keep the pool from spawning extra threads when tasks block
            LoadflowThreadPool.UnlimitedWorkerThreadsWhenBlocked := False;
          end;
        finally
          aThreadLock.Leave;
        end;
      end;
      Result := LoadflowThreadPool;
    end;

    Of course, this might not be enough if you have other pools running intensive tasks that can create an excessive number of threads, which can slow everything down.

     


  7. 24 minutes ago, mitch.terpak said:

    Thank you for your insights regarding property defaults and constructor behavior. While helpful, the core issue remains with the system library's thread pool expansion beyond the specified limit for no clear reason.

    The reason for expanding the number of threads is to avoid deadlocks with nested TParallel.For loops. 

     

    If you want the old behavior, then you should set UnlimitedWorkerThreadsWhenBlocked to False on each thread pool you are using, including the default one.
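
     

    A minimal sketch of what that looks like (MyPool is a hypothetical stand-in for any additional pool your code creates):

    uses
      System.Threading;

    // Restore the old behavior: pools will not spawn extra worker threads
    // when tasks block, at the price of possible deadlocks with nested
    // TParallel.For loops.
    TThreadPool.Default.UnlimitedWorkerThreadsWhenBlocked := False;
    MyPool.UnlimitedWorkerThreadsWhenBlocked := False;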

     

    For any other issues you mention, there is nothing we can do if you don't provide details about your code and the implementation of your parallel loop mechanisms. While it is always possible that there are other bugs in System.Threading, it is also quite possible that your code has bugs, too. Just because you get an access violation in Delphi RTL code does not mean the cause is not in your code.

     

    For debugging multithreaded applications, especially ones that spawn many threads, it is more suitable to use a logger and inspect code behavior through logs instead of using the debugger. Debuggers are usually too intrusive for multithreaded debugging involving more than a few background threads, as they severely impact timing. 

    • Like 1

  8. 18 minutes ago, caymon said:

    However, in this legal document: https://www.ideracorp.com/~/media/IderaInc/Files/Embarcadero/v12_0/Embarcadero RAD Studio Delphi CBuilder Software License and Support Agreement English

    in the chapter ''ADDITIONAL LICENSE TERMS APPLICABLE TO SOFTWARE LICENSED ON A SUBSCRIPTION BASIS'' we read:

    This is for the C++Builder Pro term license https://www.embarcadero.com/products/cbuilder/product-editions

     

    Delphi doesn't even have such option.

    • Thanks 1

  9. 9 hours ago, Vandrovnik said:

    I mean this one:

    https://quality.embarcadero.com/browse/RSP-43274 (Arithmetic operations on record fields return incorrect results in certain cases if the "Optimization" compiler option is enabled.)

    This looks like a showstopper bug for me, because it can generate wrong result anywhere in the application. Turning off optimizations will "solve" the problem, but the result will be that all my customers will write to me about how the new version of the app is slow.

    I somehow missed that one. Yes, it looks serious. I don't know what the status of this issue is.

     

    9 hours ago, Vandrovnik said:

    And as for distribution of the hotfix - is it really impossible to distribute the fix in year 2024 other than through a non-functioning Getit? Downloading files from my.embarcadero.com works normally, so why not put the patch there?

    I thought that my.embarcadero.com was also impacted by the outage. Anyway, I phrased my sentence a bit awkwardly, as if the outage were the only reason the fixes have not been released yet; a patch also needs to go through a testing phase before it is finally released.


  10. 25 minutes ago, Vandrovnik said:

    inability to fix the "integer division bug" that makes Delphi 12 unusable for Win32 applications 😞

    You mean the issue with division by the $FFFFFFFF constant? That is nowhere near a showstopper bug. As a workaround, it is possible to use a variable instead of the constant. And the issue is fixed, but at the moment the hotfix cannot be delivered due to the outage.
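
     

    A sketch of the workaround (Value and Result are hypothetical names; the point is routing the constant through a variable):

    var
      Divisor: Cardinal;
    begin
      // Instead of: Result := Value div $FFFFFFFF;
      // route the constant through a variable so the affected
      // constant-division optimization is not applied
      Divisor := $FFFFFFFF;
      Result := Value div Divisor;
    end;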


  11. 1 hour ago, Anders Melander said:

    Anyway, it's not my area of expertise but I would have thought that existing solutions to distributed chain-of-trust could be used. I don't really have time to think that through to an actual solution though, so I can't claim that what I ask (no SPOF) is doable at all.

    It is possible to have a dependency manager that does not have a single point of failure. 

     

    There are probably others, and this is not exactly a package manager as such, but rather a build tool; still, Gradle has a good dependency manager. https://docs.gradle.org/current/userguide/declaring_repositories.html

     

    The solution to a server being a point of failure is having multiple servers, and an extension of that is allowing a user-customizable list of servers. That way, if one server goes down, the others remain available, and the list can easily be extended with new servers if required. Once you have that, you can even run your own private server for distributing your own packages within the company.

    • Like 6

  12. 10 minutes ago, PeterPanettone said:

    To mitigate the impact of long-term server downtime, it's essential to have robust strategies in place. These strategies are not specific to any programming environment,  such as Delphi, but are general best practices in server management and IT infrastructure. Here are some key strategies:  

     

    1. Redundancy and Failover Systems: Implement redundant server systems to ensure that if one server goes down, others can take over its workload. This can be achieved through clustering or using load-balanced server setups.  

     

    2. Regular Backups: Ensure regular backups of all critical data. These backups should be stored off-site or in a cloud environment to prevent data loss in case of physical damage to the primary server location.  

     

    3. Disaster Recovery Plan: Develop a comprehensive disaster recovery plan that includes detailed steps for restoring services in the event of various types of failures.  Regularly test and update this plan.  

     

    4. Monitoring and Alerts: Use advanced monitoring tools to check the health of servers constantly. Set up alerts for any unusual activity or performance degradation so that issues can be addressed before they lead to long-term downtime.  

     

    5. Routine Maintenance: Regularly update and maintain the servers. This includes applying security patches, updating software, and checking hardware health.  

     

    6. Cloud-Based Solutions or Hybrid Approaches: Consider using cloud-based services or a hybrid approach, where critical infrastructure components are hosted on cloud services. This can offer higher reliability and easier scalability.  

     

    7. Third-Party Support Services: Engage with third-party support services for critical components. They can provide expertise and additional resources during major downtimes.  

     

    8. Diversification of Data Centers: If you're hosting your servers in data centers, don't put all your resources in one location. Utilize multiple geographically dispersed data centers to mitigate the risk of a single point of failure.  

     

    9. Testing and Simulation of Downtime: Periodically simulate downtime scenarios to test your strategies' effectiveness and your team's preparedness.

     

    10. Documentation and Training: Ensure that all procedures are well-documented, and train your staff to respond effectively to different downtime scenarios.

     

    Remember, the goal is to prevent downtime and minimize the impact when it does occur. Combining these strategies effectively can significantly reduce the risks and impacts of long-term server downtime.

    I don't think they or anyone needs advice from AI.

    • Like 3
    • Haha 3

  13. 24 minutes ago, Anders Melander said:

    Atlassian hasn't supported the version of Jira that Embarcadero uses since 2017...

    I meant that JIRA Server is no longer supported https://www.atlassian.com/migration/assess/journey-to-cloud and this has been known for some time.

     

    24 minutes ago, Anders Melander said:

    As I interpret that, they will not be migrating data from the old to the new system.

    Their internal JIRA will be migrated fully, so there will be no loss of issues in the internal system. As far as the public one is concerned, I don't know whether it will be migrated or not, and what the possible issues are. Evidently, it will not be migrated right away.


  14. Moving QP is not about saving money, nor is it directly related to the current outage. It is something that has been planned for quite some time (this has been disclosed to MVPs), because Atlassian no longer supports the on-premises JIRA which Embarcadero uses.

     

    https://blogs.embarcadero.com/embarcadero-quality-portal-migration/

     

    So they are logically moving their internal system to Atlassian Cloud, and the customer-facing front end will use Jira Service Management. It would be rather ridiculous to use full-fledged JIRA for customers and pay millions for features we couldn't use anyway. JSM seems like a good option in this case. And issues will be visible to everyone.

    • Like 1

  15. 2 hours ago, Anders Melander said:

    Instead they might be going with Jira Cloud internally and Jira Service Management (JSM, formerly Service Desk) externally. JSM is licensed per agent (internal user) with unlimited externals users (called "external customer" in JSM). 

    Your assumption is correct. And, like you said, self-hosted JIRA is no longer an option.

     

    2 hours ago, Anders Melander said:

    As far as I know JSM does not allow one external user to see the issues raised by other external users. I.e. it's a support system; There's no interaction between external users.

    So if JSM is the solution they are going for I don't see it as an improvement.

    AFAIK, there is an option to make all issues automatically visible to all customers, but it is not the default. So I don't think visibility should be a problem here.
