Posts posted by Anders Melander


  1. 15 minutes ago, Brandon Staggs said:

    If you know you are going for a monospaced layout, you do not really need to worry about the differences in fonts so much. Of course, there may be an outlier. Uniscribe will give you the logical cluster data you need to know which code points are part of a visual character.

    I think the result in the first post shows that you do have to worry about the font used.

    The clusters a shaper, such as the one in Uniscribe, produces are part of a run of characters with a single script and a single specific font. If a string of text contains different scripts, or characters which cannot be represented by the font, then the layer above the shaper will have to split it up into multiple runs and process each run individually. The reason is that different scripts have different shaping rules and the font (assuming it's an OpenType font) dictates many of the rules.

    It may very well be that Uniscribe hides most of these things, but in the end the result will still be that multiple fonts are used and that the glyphs shown, and their size/position, depend on the fonts.
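
    To illustrate, here's a minimal console sketch of that splitting step using ScriptItemize. Delphi doesn't ship a Uniscribe import unit, so the declarations below are my own stripped-down, hand-rolled translations containing only the fields needed here:

    program ItemizeDemo;

    {$APPTYPE CONSOLE}

    uses
      System.SysUtils, Winapi.Windows;

    type
      // Minimal, hand-rolled translations of the Uniscribe structures
      TScriptAnalysis = record
        Flags: Word; // Bit fields: eScript:10, fRTL:1, fLayoutRTL:1, etc.
        State: Word; // SCRIPT_STATE bit fields
      end;

      TScriptItem = record
        iCharPos: Integer;
        Analysis: TScriptAnalysis;
      end;
      PScriptItem = ^TScriptItem;

    function ScriptItemize(pwcInChars: PWideChar; cInChars, cMaxItems: Integer;
      psControl, psState: Pointer; pItems: PScriptItem; pcItems: PInteger): HRESULT;
      stdcall; external 'usp10.dll';

    procedure DumpRuns(const Text: string);
    var
      Items: array[0..16] of TScriptItem; // Room for the items plus the terminator
      Count, i: Integer;
    begin
      // Each resulting item spans characters of a single script; the layer
      // above the shaper processes (and shapes) each item individually.
      if Succeeded(ScriptItemize(PWideChar(Text), Length(Text), Length(Items) - 1,
        nil, nil, @Items[0], @Count)) then
        for i := 0 to Count - 1 do
          Writeln(Format('Run %d: chars %d..%d (script %d)',
            [i, Items[i].iCharPos, Items[i + 1].iCharPos - 1,
            Items[i].Analysis.Flags and $3FF]));
    end;

    begin
      DumpRuns('Latin mixed with ' + #$7F16#$7801); // Mixed Latin and Han text
    end.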


  2. 6 hours ago, Brandon Staggs said:

    One needs to lay out the text to determine which code points occupy the same character. There is no other way. Uniscribe on Windows contains functions for doing this kind of work (look at the functions designed to move the caret to the next character, etc). Core Text on Mac does it. I have not looked closely at the FMX text layout encapsulation, it may not expose enough data to determine this for outputting to a plain text file.

    You are talking about shaping, which is not relevant to the problem here. Shaping translates a stream of Unicode code points into glyph IDs, and the result depends 100% on the font used to do the shaping. That is the whole point of shaping. FWIW, I've written a shaper so I know a bit about these things...

     

    Anyway, I just realized that I forgot to explain why it is that...

    three characters occupy five columns

    ...even though the font is monospaced.

     

    The reason is that the font doesn't contain a mapping for the three Unicode codepoints (U+7F16, U+7801, U+FF1A). So instead of just giving up and displaying � or □, Windows (or whatever) searches its fallback font list for a font that 1) supports the Unicode script of the characters (Han, in this case) and 2) contains a mapping for the characters (i.e. can map codepoint to glyph ID), and then it uses that font to display the glyphs. Since the fallback font isn't monospaced, or at least isn't monospaced with the same character width, you get a different glyph width.
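
    You can observe this from Delphi with nothing but TCanvas, since the same fallback (GDI font linking) also kicks in when the VCL measures text. A minimal sketch, assuming a VCL form with an OnPaint handler; the actual numbers depend on which fallback font your system picks:

    procedure TForm1.FormPaint(Sender: TObject);
    var
      WidthAscii, WidthHan: Integer;
    begin
      Canvas.Font.Name := 'Courier New'; // Monospaced, but has no Han glyphs
      Canvas.Font.Size := 10;
      WidthAscii := Canvas.TextWidth('abc');
      WidthHan := Canvas.TextWidth(#$7F16#$7801#$FF1A); // The three codepoints above
      // The second measurement goes through a fallback font and typically
      // comes out wider than three cells, even though both strings are
      // three characters in a "monospaced" font.
      Canvas.TextOut(8, 8, Format('abc=%d px, Han=%d px', [WidthAscii, WidthHan]));
    end;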

    • Like 1

  3. I can't see how it can be done.

     

    Even if you roll your own, which you would have to, and even with a monospaced font, the width will depend on how the particular font composes Unicode code points into graphemes (individual graphical units) and then glyphs (the visual representation the font produces).


  4. You really shouldn't prevent the user from moving the focus; it's one of the most annoying ways to do validation.

     

    Instead, do your validation in the OK handler. If you use a TActionList you can also do the validation in the OnUpdate event handler and only enable the OK button when all validations pass.

    If you want to be really nice to your users, do the validation while they are entering data, but instead of blocking them just indicate errors with a color or a symbol.
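
    For illustration, a minimal sketch of the TActionList approach, assuming a dialog with two edits and an OK button whose Action property is set to actOK; IsValidEmail is a hypothetical validation function:

    procedure TMyDialog.actOKUpdate(Sender: TObject);
    begin
      // Called while the application is idle; the OK button stays
      // disabled until all validations pass.
      actOK.Enabled := (Trim(edtName.Text) <> '') and IsValidEmail(edtEmail.Text);
    end;

    procedure TMyDialog.actOKExecute(Sender: TObject);
    begin
      // No validation needed here; OnUpdate has already guaranteed it
      ModalResult := mrOk;
    end;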

    • Like 2

  5. 7 minutes ago, DelphiUdIT said:

    It may be that whoever created the automatic memory management also foresaw that this mechanism may not be infallible, and has given a tool to force "the hand".

    Nope.

    It's there so that a process which

    1. Knows that its access and allocation patterns will have caused the working set to grow.
    2. Knows that it will no longer need that working set for the foreseeable future.
    3. Has virtual memory backing store (i.e. page file).

    can trim the working set in order to lessen the memory pressure on the rest of the system. But you don't have virtual memory, so this doesn't apply to you.

     

    By trimming your working set, the only thing you have accomplished is to move pages to the working set cache. Since you don't have virtual memory, nobody else can use those pages anyway because their content can't be saved anywhere. So the only thing that can happen to them is that they will be paged back into your working set at some point when your process needs them. End result: nothing but overhead.
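
    For reference, the trim itself boils down to a single call; EmptyWorkingSet from Winapi.PsAPI is documented as equivalent to this:

    uses
      Winapi.Windows;

    procedure TrimWorkingSet;
    begin
      // Passing -1 for both sizes tells Windows to remove as many pages
      // as possible from the process' working set.
      SetProcessWorkingSetSize(GetCurrentProcess, SIZE_T(-1), SIZE_T(-1));
    end;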


  6. 5 hours ago, DelphiUdIT said:

    EmptyWorkSet doesn't discharge the memory pages onto disk; it releases the memory to the OS

    You still haven't understood how this works.

     

    The memory in your working set isn't "owned" by the process; it's owned by Windows, and the process only has it on loan. If Windows needs the memory for something else (e.g. another process) it will take it back without any action needed (or possible) from your process.

     

    The working set pages that you are trimming are unused by the process but were assigned to it at some point because it needed them. The reason that the OS hasn't removed them from your working set by itself is that there has been no need for it; nobody else has made a memory request that couldn't be satisfied elsewhere. The whole idea behind the working set model is to allow the system to grow and shrink the working sets based on demand.
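
    You can actually watch the system do this. A minimal sketch using GetProcessMemoryInfo; call it now and then while the process runs and you will see the working set grow and shrink on demand:

    uses
      System.SysUtils, Winapi.Windows, Winapi.PsAPI;

    procedure ReportWorkingSet;
    var
      Counters: TProcessMemoryCounters;
    begin
      Counters.cb := SizeOf(Counters);
      if GetProcessMemoryInfo(GetCurrentProcess, @Counters, SizeOf(Counters)) then
        Writeln(Format('Working set: %d KiB (peak: %d KiB)',
          [Counters.WorkingSetSize div 1024, Counters.PeakWorkingSetSize div 1024]));
    end;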

     

    Why not spend 30 minutes reading up on how it works? Just Google "windows working set".

     

    Or simply search for EmptyWorkingSet and see if you can find a single person who thinks it's a good solution.


  7. You are.

     

    The working set is the set of pages of a process's virtual memory that reside in physical memory. They are there because the process has a need for them to be there (e.g. it has referenced an address in virtual memory, causing a working set page to be mapped to that address).

    If something else in the system has a need for physical memory, and it's all in use (which, by design, it normally is - because why not), then the least recently used pages will be paged out and eventually written to the page file, so the physical pages can be mapped into the other process' working set.

    The above is just a simplification of the virtual memory management, but the point is that, unless what you are doing is really extreme, you don't have to think about it; it just magically works.

     

    I know everybody has to go through the phase of thinking that they can outsmart the OS virtual memory management by messing with the working set, but you really should leave it alone. The OS virtual memory system was designed 50 years ago by people who actually knew what they were doing.

     

    You might very well be having memory issues but look to the memory manager instead. The working set isn't the problem.

    • Like 2

  8. That's up to you. I just need the three parts separate: layout, structure, content.

    • Layout is provided by some kind of customizable templates.
    • Structure data is provided by the source code parser.
    • The user provides the content via some nice editors.

    I don't care if the content format is XML as I can easily migrate any existing data to another format.

    Here's a small example of some documentation built with DI: https://github.com/andersmelander/DWScriptStudio/tree/master/Documentation

    And another: https://developer.sigmaestimates.com/api-reference/

     

    Both of these were built by generating Pascal source code from DWScript metadata and then letting DI have a go at that source. Since the source is generated, the help content has to be external to the source. However, the big problem with the output generated by DI is that it is impossible to integrate it properly into an existing website, since it must be embedded in an iframe.

    For example, I have another site where the main content of the site is conceptual help. I would like to have the API documentation integrated into this, but because of the iframe it is impossible to link from the conceptual help directly into the API help and vice versa.


  9. The integration isn't really the important part, although it's nice to be able to navigate the source while the help editor/navigator follows along.

     

    Extracting comments from the source can already be done by the compiler so that isn't that interesting for me personally. Also, I find that, although it appears to make sense to document the API in the source, it ends up completely obfuscating the source.

    I prefer to have the help text separate from the source. DI has an option to either maintain in-source comments or store the data in external files. Like this:

    <?xml version="1.0"?>
    <doc>
      <members>
        <member name="AnsiCompareText(string,string)">
          <summary>Compares strings based on the current locale without case sensitivity.<br></br>
            <br></br>
            AnsiCompareText compares str1 to str2, without case sensitivity. The comparison operation is controlled by the current locale.<br></br>
          </summary>
          <param name="str1">The first string to be compared.</param>
          <param name="str2">The second string to be compared.</param>
          <returns>
            <list type="table">
              <listheader>
                <term>Condition</term>
                <description>Return value</description>
              </listheader>
              <item>
                <term>str1 = str2</term>
                <description>Zero.</description>
              </item>
              <item>
                <term>str1 &gt; str2</term>
                <description>Positive value.</description>
              </item>
              <item>
                <term>str1 &lt; str2</term>
                <description>Negative value.</description>
              </item>
            </list>
          </returns>
          <seealso cref="CompareText"></seealso>
          <seealso cref="AnsiCompareStr"></seealso>
          <seealso cref="CompareStr"></seealso>
        </member>
    ...

     

    If your project/document architecture allows for separating layout (the way it looks), structure (classes, members, functions, types, etc) and content (help text per structure item), then "all you need" is a parser and some UI to manage the parts. Easy-peasy, when can I have it 🙂
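
    For example, the "content" part could be resolved with something as simple as this hypothetical lookup against the external XML file shown above, using the stock Xml.XMLDoc wrapper (the file name is made up):

    uses
      Xml.XMLDoc, Xml.XMLIntf;

    // Find the documentation node for a single structure item, e.g.
    // FindMemberDoc('sysutils.xml', 'AnsiCompareText(string,string)')
    function FindMemberDoc(const FileName, MemberName: string): IXMLNode;
    var
      Members: IXMLNode;
      i: Integer;
    begin
      Result := nil;
      Members := LoadXMLDocument(FileName).DocumentElement.ChildNodes.FindNode('members');
      if Members <> nil then
        for i := 0 to Members.ChildNodes.Count - 1 do
          if Members.ChildNodes[i].Attributes['name'] = MemberName then
            Exit(Members.ChildNodes[i]);
    end;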

    • Like 1

  10. 1 minute ago, sp0987 said:

    Please find the .dproj .

    Assuming you have saved sysutils.pas to "C:\Testprojects\SampleApp\system", that part is okay.

    Now you just need to add the path of all the VCL/RTL units that use that unit, so you can avoid "F2051 Unit XXX was compiled with a different version of YYY" - or make sure that you don't modify the interface section. I'm guessing the compiler options used to compile your sysutils copy might also matter. Maybe someone else knows that.

     

    ...or solve the problem in some other way...


  11. 4 minutes ago, Brandon Staggs said:

    For example, the fundamentally flawed canvas locking scheme that precludes any serious multi-threaded development (that involves bitmaps) in FMX can only be fixed by editing source code.

    I've not done any serious FMX development so I can't speak to that, but given how much code I write and the number and size of projects I work on, I do think I have had more than my fair share of problems with almost all areas of the RTL and VCL over the years. Editing the source files has never been an option I would consider.


  12. 18 minutes ago, Brandon Staggs said:

    Unfortunately, since they broke the most truly useful feature of class helpers by removing the ability of the helper to access private members of the class, they don't work so great as an alternative to editing source files directly. Now you have to write a RTTI monstrosity to accomplish the same thing. 

    I can't recall when I last had the need for editing the source files. There are almost always other ways to solve a problem.


  13. While I have your attention 🙂

    Have you considered adding functionality, or creating a new product, that solves the same needs as Documentation Insight? That is, API documentation.

     

    I really like the way DI works, but it seems like it went into maintenance mode many years ago. Probably because its HTML editor is based on Internet Explorer. Also, I've been waiting a decade for template support, which was promised but never delivered.
