Posts posted by Gary Mugford


  1. 3 hours ago, Trevor S said:

    For Paradox, the dbiSaveChanges() BDE API function in the dbiprocs unit should be called after posting any records to the database.  The function takes the Table.Handle as the only parameter.  Calling this function ensures that the records are fully committed to the database file.  In my experience, it eliminated the Index out of date and Blob mismatch errors.

     

    If you are not using it already, it may be worth a try.

     

     

     

    Trevor, 

     

      Good points. That was the first procedure I ever wrote for a general library in Delphi 7. Every database gets set up and then connected to dbiSaveIt (my procedure's name in GProcs7, which gets added, along with GDefs and GBDE, to every project). It's autonomic at this point: create the table (I ALWAYS use Woll2Woll's InfoPower controls for this stuff and have for most of this century), go straight to AfterPost, and type dbi Ctrl-Space, which expands to dbiSaveIt(dataset.handle);
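
      For anyone reading along, here's the rough shape of that wrapper (a sketch only, NOT the actual GProcs7 code; DbiSaveChanges and hDBICur come from the BDE unit in Delphi 7, and Check from DBTables):

    uses BDE, DBTables;

    // Flush freshly posted records straight to disk, so the table file
    // and its indexes can't drift apart if a workstation dies mid-write.
    procedure dbiSaveIt(AHandle: hDBICur);
    begin
      Check(DbiSaveChanges(AHandle));
    end;

    // Typical call site, wired into every table's AfterPost event:
    procedure TFrmMain.TableAfterPost(DataSet: TDataSet);
    begin
      dbiSaveIt((DataSet as TBDEDataSet).Handle);
    end;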

     

      But I'm going to take a look tonight to see if the statement can be found as many times as there are database objects in my project. You never know, I COULD have missed things back in 2017. Not in the databases where the heavy memo work is, but maybe some of the newer stuff. 

     

      Thanks, GM


  2. Attila,

     

      There are 7M records with notes that range from a few characters to several megabytes in the MAIN database. To BREAK those notes into discrete records, I have to go through each one line by line and evaluate what piece of information is being described: date, time, user, process, and a variable number of change lines that roughly follow the pattern of field name, [before] the data the field held before, [now] the data the field holds now. Not every snippet turns into a record on the first try. I run two different alternative decoders on each exception and give up if none of the three work. The issue is that these notes date back to the 90's. Formats changed because I did NOT think clearly enough about what the long-term future, 30 years on, would need. Back then, auto-auditing was something I could brag about. Today ... not so much.
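
      To give the flavour of one decode pass (a sketch only, NOT the actual NoteBreaker code; the [before]/[now] markers are as described above):

    uses SysUtils;

    // Pull the field name, old value and new value out of one change
    // line of the form: FieldName [before] old value [now] new value
    function DecodeChangeLine(const ALine: string;
      out AField, ABefore, ANow: string): Boolean;
    var
      pBefore, pNow: Integer;
    begin
      pBefore := Pos('[before]', ALine);
      pNow := Pos('[now]', ALine);
      Result := (pBefore > 0) and (pNow > pBefore);
      if Result then begin
        AField := Trim(Copy(ALine, 1, pBefore - 1));
        ABefore := Trim(Copy(ALine, pBefore + Length('[before]'),
          pNow - (pBefore + Length('[before]'))));
        ANow := Trim(Copy(ALine, pNow + Length('[now]'), MaxInt));
      end;
    end;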

     

      I'm absolutely sure there are faster ways to do it. I could skip constant screen updates. I could skip testing for a stop button. I could write better AI. Well, *I* couldn't. But I'm pretty sure that my plodding code could be improved. That said, I'm in the realm of 'if something works, move on' at this point. The bigger picture is a replacement UI and business logic model.

      When this company started, they had three-week lead times for their work. Today, it's today, tomorrow at the latest. Sure, there are long-term contracts, but the customers no longer give three-week lead times even in those circumstances. Storage costs. Better YOU store it than I. Storage and transportation are now global, with labels to match (lots of fun doing Chinese and Russian labels for a customer company that thinks you can fill labels from edge to edge, using CAD to do so). It's a different world now for the company than it was when I wrote the first version of the program in DOS, and we never did a re-design after the one switchover to Windows and Lantastic. Since then, we've added features. And features. And features. THAT's how you get to 249 databases.

      The original program never had accounting modules, sales forecasting modules, worker performance analyses, etc. It was an inventory program with not much smarts. I tried to normalize as much as possible. I have a database of extended descriptions because ONE company wanted descriptions about three times the size I allotted to that field. So, for their 0.05 percent of the total parts, we have a database that holds those extended descriptions. When printing material for them, I check for the presence of an extended description and replace the normal one with it. Was it better to do that under normalization rules, or simply restructure the main database to have a description field of three times the characters? I went with normalization. In the rewrite, with space not a concern, the field is three times bigger. Times change.

      When Manifesting became a thing, I wrote a company-specific Current-Invoices-to-Manifest database. A one-time thing, I was told. THEN we were told we had to archive them. The CORRECT programming choice at that point would have been to add an Active field and filter. BUT I DIDN'T USE FILTERS at the time. I looked at everything through FULL-info lenses, so I just created an archive database and moved things back and forth on the container principle. Then OTHER companies wanted Manifesting (guess the word got around at trade shows in the hospitality suites). Suddenly, I had a proliferation of these new twin sets of databases (11 at last count). All will now reside in a single database, with CustomerID and Active filters to manage everything (see the sketch below). I've reduced the number of databases in NexusDB to 112. The NUMBER OF FIELDS in the main database has almost tripled. Advocates of normalization would look at my database design and try to have me fired on the spot. Sigh. 
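
      A rough sketch of that consolidated arrangement (table and field names invented for illustration; exact filter syntax depends on the engine):

    // One shared manifest table instead of per-customer twins; each
    // session narrows it to the active rows for the current customer.
    ManifestTable.Filter := '(CustomerID = ' + QuotedStr(CustID) + ') and (Active = True)';
    ManifestTable.Filtered := True;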

     

      We make do with workstations that are a tad long in the tooth, but still work. I'm GUESSTIMATING the breaking time on one of them as a function of being slower than my machine by a factor of three. Could I be wrong, based on the sample I used to do all the math and the assumptions contained therein? Sure. But I ran a 12-hour session on my computer and never got past the 10 percent mark. And that's with a LOT of the longest-term records to break. 

     

      I'm not saying you're wrong, Attila. But I THINK your math requires better hardware, and better programming by me, to be accurate.

     

      Thanks, GM

     

    NOTE: And for those who think I'm too self-effacing or fond of self-flagellation, I'm trying to be truthful about my limitations. At the same time, I'm a self-taught programmer who actually spent the first 15 years of my working life as a working sports reporter and broadcaster. I have one computer science credit from a major Canadian university, but I got that back in high school. I spent time at one school on a Journalism scholarship before dropping out of school to go 'pro', as the saying goes. I also attended college part-time for two years, to get a degree to satisfy the folks. I fell a couple of years short when the newspaper I worked at asked me to start coming in on Fridays too (with a raise), meaning I didn't have any day each week to write tests and meet profs at school. I did contribute to a textbook used extensively in the business curriculum at yet another of Canada's largest universities, despite my lack of academic credentials. I KNOW what I don't know, and that's a LOT. But I've done well despite my lack of an education. At least that's what my various employers think. They keep paying me. And have for more than three decades. Working man's code. Nothing better. But good enough.


  3. 2 hours ago, limelect said:

    @Gary Mugford At the time, BDE was very popular. I have many projects with it, without any problems. Only this one customer, who holds big amounts of records. Furthermore, I have a few tables without any problems. Only this one table with large records. So who would have known that large table data would make problems?

    Limelect,

     

      I wouldn't say we have MASSIVE amounts of data. It runs something a bit north of 1G in total. Almost all text. The problem is my original design. I used a lot of 1-character-length Memo fields as History fields (along with a random sprinkling of memo fields that do other things, spread through the almost 250 databases). All of it was designed originally under DOS, and I was fighting record size a fair bit in those days of limited memory and storage. Having M1s became habit, as I always tried to stuff a record into as efficient a size as possible.

     

      Now, as memory and disk space became something that could be squandered (despite Bill Gates' prognostication that we'd never need such capability), the option of keeping larger memo stubs within the Paradox record surfaced, but the gain seemed not worth the restructuring pain. And Paradox was an amazingly stable database backbone. It just didn't corrupt if you wrote to the disk ASAP on posting. I went DECADES with some clients with no issues REQUIRING REBUILDER. But BDE got pushed aside as various predecessors of Embarcadero tried to re-monetize default data systems. And here we are. The company is paying for a new database backbone. I'm implementing it. And I have a NoteBreaker app to turn all those audit history notes into discrete records; I ran a benchmark to find out how fast I could do it: THREE WEEKS with a standalone computer. More than seven million records. And when the switch time comes, I will have to run NoteBreaker again for the new notes created in the interim. 

     

      Sure wished I had looked into the future better than I did. Better than Gates did. But I didn't. 

     

      Warn your customers about BDE and Win 10. It won't end well. 

     

      Thanks, GM


  4. 2 hours ago, dummzeuch said:

    Apparently this program is not very important for the company, so maybe it should just be retired instead of being replaced.

    Or is somebody maybe having the wrong priorities here?

    dummzeuch, 

     

      The facts are that the company is running tight these days because of economic pressures introduced by the President of the United States on companies here in Canada in certain business sectors. There's simply no money in the budget, and I've seen the numbers, so I know he's telling the truth. Both the hardware guy and I are external contractors, and we've been doing this for this company now for just shy of 30 years. The reality is that we careen from one issue to the next WHILE I am writing the replacement software.

      A decision was made for a ground-up replacement to streamline the application and add new features that weren't needed when I first designed the DOS version of the software 35 years ago. I'm a tad slower in my old age, unfortunately. And I'm labouring with the switch to NexusDB AND the switch to Delphi XE7 at the same time. It's MY fault there is no 21st-century solution to this problem, and I can't expect others to spend time and money when we CAN MAKE DO, with interruptions on a still-manageable scale. I'd rather spend my own money and/or time than suggest solutions that I know will result in losing personnel.

      Yes, it needs retiring for the replacement I'm writing. No, it is NOT not important (yes, I'm aware of the double negative). And no, there are no wrong priorities here. Just an issue I seek help on. Which, apparently, is beyond the ability of anybody to help. It's not that aid hasn't been offered. It's just an untenable position. As I have known for a while. But knowing and having it confirmed are two different things. Sigh. As I said, MY FAULT. Nobody else's. 

     

      Really appreciate the time you've taken considering this. Big fan of your blog and your contributions in providing code for the community. 

     

    1 hour ago, Job espejel said:

    Hello

    Please use pxrest.exe (Table Restructure utility) and set the version to 7 and Block size to 32768 for each table, then use reBuilder

    Hope this helps

    Regards

    Job,

     

      When this problem first surfaced a year and a half ago, I started going through the BDE settings. According to my notes, we switched to a 16K block size about 15 years ago, when we switched to Microsoft Server, and to 32K sometime in mid-2011. We've run v7 since it became available. I've been a Paradox user since 1.11i. I also explored, on a week-by-week basis, changes in the Init settings, and have them as optimized as I believe I can. Now, is it possible my melange is actually a BAD mix?? Sure. I'd be a narcissist if I thought otherwise. I've had references to Opportunistic Locks beyond what's been mentioned here. So I've handed that off for consideration by my partner, and he'll look at it. Whether it is implemented, or even kept after implementing it, is beyond my area of responsibility. But I will look THROUGH the databases to see if any of them have somehow got the wrong block size. It never hurts to look at even the smallest chance. Thanks for suggesting it. 

     

      GM


  5. Limelect,

     

      Thanks for taking the time to reply. As I mentioned in the first post, they have an app that is a slightly modified version of PdxRbd (I zip up all the files BEFORE doing work on them, because not every rebuilder session goes smoothly). It's THAT app that is being used for repair. Prior to the Nov 2017 Windows 10 update, rebuilding sessions occurred an average of three times a year, dating back to 1999. Prior to that, data corruption in Paradox was a once-every-other-year issue, dating back to Paradox 1.11i's release here in Canada in the 80's. BDE was a great backbone that worked for longer than it should have. Had I been wiser, faster, smarter, this would not be an issue. But it is. 

     

      GM


  6. Back for some thanks. I appreciate that the group of you took the time to reach out to me. My abject apologies for the delay in getting back. I have excuses, but excuses are what they are. Attila, I will point out my first line, indicating I'm trying to do what you suggest. Unfortunately, the problematic tables are actually 249 problematic tables ... not counting local ones. That's why the forever-taking complete rewrite using NexusDB.

     

    Dany, I asked the hardware guy about Citrix and was told there's no money in the budget for the product AND HIS TIME implementing it. Sigh. 

     

    Now, as for the combined contributions from Uwe/dummzeuch: I had some success making each computer run the application in Windows 7 emulation mode. I'd say the incidence of issues dropped by something close to 60 percent. STILL A LOT. But a pleasant break from the nightmarish lack of uptime. 

     

    For whatever reason, Mondays and Fridays are the days with the worst incident rates. By a considerable margin, with Monday REALLY leading the pack. Yesterday, we had to interrupt work to rebuild FOUR times because of Index Out of Date errors. ONE workstation at the second building running Win10, doppelganged back to a machine running Win 7 at the main building, kept getting a message along the lines of 'Operating System Software does not work' (I say along the lines, because THREE different people gave me three variations of the message over the phone, with none of them being the same as any other). EurekaLog was running, but the error came early enough that it didn't get going and send me a detailed error report. Sigh. 

     

    I wonder if I'm at fault for yesterday's mess. Over the weekend, I put a new feature into one of the apps that uses the data. ONE of the side effects of running Rebuilder is that an auditing-log memo field in the database seems to get lopped off about two or three entries down, losing, in some cases, 19 years of change-by-change auditing details, almost 2M of text per record. So I wrote a new feature into an app to UNhack the note by calling up a backup image of the database and gluing the history in IT onto the audit note in the currently used version of the database. I wondered if I needed to revisit the shortcut to the program, telling it to use Win 7 emulation mode, because the program had been updated. Turns out the setting was still there, BUT maybe updates have SOMETHING to do with the frequency of Monday issues. I tend to keep my updates to the apps themselves to the weekends. Cause and effect??

     

    Which brings me to Uwe's suggestion. I assume we are talking about the Microsoft server for the change, and that the change would come at the cost of performance for EVERYTHING on the server. So I would have to weigh the downtime and data corruption we are currently experiencing against everything slowing down for NON-data-related work. Do I have that correct? I couldn't just tell it to use the setting for a particular folder. BUT I might be able to determine it on a file-by-file basis if I was willing to put in a weekend doing it for a couple of thousand files? (I would certainly be willing to do the latter, given the circumstances.) IF so, if it CAN be done file by file, how much beyond the reg setting indicated above is there to be done, if I may ask??
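
    For reference, the opportunistic-lock switches I keep seeing cited for BDE file-sharing trouble are the old SMB1-era ones (whether that's the same reg setting Uwe indicated, I can't say), and from what I read, SMB2/SMB3 servers ignore them entirely, so they only matter where SMB1 is actually in play:

    ; Server side -- honoured by SMB1 only; SMB2/3 leasing cannot be
    ; switched off with this key.
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters]
    "EnableOplocks"=dword:00000000

    ; Client side, per workstation -- again SMB1 only.
    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MRxSmb\Parameters]
    "OplocksDisabled"=dword:00000001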

     

    Thanks to everyone who took the time to offer help. I've turned ON the 'Notify me of replies' option this time. My fault completely. GM

     

     


  7. I asked this question in the ReportBuilder support forums, but found out our subscription had run out three weeks ago, and the Boss is out of the country, so there's no renewing it until early next month. That doesn't help me right now, so I've come here to ask the same question. That said, it occurs to me that this is maybe not a ReportBuilder problem, but a different one of my own making. If there are any ideas beyond an uninstall/re-install, I'm all ears and eyes. Thanks for trying to help me out if you can.

     

    My Win7 Pro x64 system that I use for program development crashed ten days ago, and it turns out my daily backups are ... useless for Drive C. All 32 generations of them that populated one complete external drive. Every restore landed me right back in the Boot Manager without any hope of booting. So, after a week of trying to avoid it, my HW guy restored back to the last FULL DISK IMAGE of C/D/E (logical volumes on an SSD) from Oct 2018. I got back the D and E drives using my backups, where my Delphi versions 7 and XE7 are installed, but I had to re-install several components because of a lack of synch with the older C drive image. One of those was ReportBuilder 19.03. It APPEARED that the install went correctly. No error messages. BUT ...

    When I opened the BIG project, I got an immediate "Metafile is Not Valid" message. I closed it and hoped it would right itself when I recompiled for the first time in a month. Turns out, no, it didn't go away in subsequent restarts. I went to a sub-form, made some changes in a Text Label, and recompiled. The problem was that I then got an error: "System Error. Code 5. Access is denied" at runtime, or "Raised Exception class EOSError with message 'System Error. Code: 5. Access is denied'. Process stopped" twice if in the IDE. I closed the blank preview and then clicked the print button a second time, and it all worked, both at runtime and in the IDE. Annoying, but the user understood the need to click through the error message, close the preview and then immediately print again to get the desired result. Still, not optimal.

    Having a workaround, I then opened the SMALL SIDE PROJECT and survived loading without messages. I then went into the report designer (ReportBuilder) to work on a report, and the JPEG graphic logo was considerably degraded. I tried replacing it with PNGs and even a BMP version. I also tried to resize the graphic in a graphic editor so that no stretching would be required. The pixel depth seemed to go crazy when I did that, taking a 3x1 graphic that I needed to fit into 1.55x0.5 and changing the pixel depth to EIGHT. I fooled around for a while, got a fairly sharp 1.55x0.57 PNG version, and plugged it in. Better, but not passable in a commercial setting. I looked at every report in the project. EVERY single one was degraded. Not all the graphics, just the logo, stretched (and sometimes as-is, in my experimentations).

    Running generates no errors. The Preview and the PDF output differ, but that's nothing new. The text is readable and without problems, despite many accent marks. But that logo ...

    Where do I go to fix this ... imbroglio ... that I find myself in? Am I facing a complete re-install of both versions? Is there some unit or test I can add that will lead me to the issue with graphic metafiles?

    Thanks in advance for any offered help. GM


  8. Before I start, I have to confess to being in the middle of rewriting the system's software using NexusDB. Progressing, but not fast enough.

     

    In the meantime, our decades-old app is encountering pervasive corruption issues with BDE using Paradox file formats, Windows Server 2013, and NOW mostly workstations running Windows 10 and kept up to date. The problem started with the Fall 2017 update to Windows. Ever since, we have had a regular series of Blob is Modified and Index out of Date errors. We cope by having all seats (approx. 50) get out of the app and associated programs. We run reBuilder, which is a modified version of PdxRbd. Then it's back in, and we wait for minutes, sometimes an hour, until the cycle repeats itself. Sometimes we get people jumping back in early during the rebuild, and THAT EATS THE DATABASE, with a note that there has been a sharing violation. Luckily, we do zip up a copy of the raw data beforehand and can recover in that circumstance. But it's a LITTLE stressful to see major databases disappear from one minute to the next. And when it happens when I'm not there, the place ACTUALLY STOPS WORKING with the computer. Everything goes manual, and that is NOT a good thing in today's world. 

     

    I have theorized that this issue is related to continuing problems with BDE locks and Windows locks going out of sync, if they were ever IN sync. That said, I'm no Windows tech expert. Might be something else. Probably is something else.

     

    What I am looking for is a setting that can make BDE work in the Windows 10 environment the way it STILL WORKS for Windows 7. I don't have the ability to retrograde the OSes on the workstations, although I spent a night writing up a proposal to do exactly that. The answer was no. Find another way. And I haven't. The non-answer has been to make the workers endure the stop-and-start operation that occasionally results in unrecoverable data loss (mostly lines out of history memo fields). And it's getting a LOT worse, and I'm still too far away with the NexusDB operation to look my fellow workers in the face.

     

    So, I'm reaching out to you DATABASE EXPERTS (give yourself a pat on the back just for coming here to help out people like me) to see if any of you can remember back to when you ran BDE. Or might STILL be running BDE in a small controlled environment. And can suggest a setting, a tool, a whatever. I HAVE started running a VirtualBox test, but the workstations all have minimal RAM, and having half of it taken up by a VM running a memory-starved Win7 environment to run the app turns out to be glacially slow. Although it DOES work. So far, we have two computers set up that way, and the hardware guy says he can average three more a week. Which means I would have possible solutions up and running some time in the late spring. At which point, I would hope my direct solution might be ready. So, as much hope as I had originally in the vbox concept, it's NOT the solution for the immediate crisis. 

     

    Regardless of whether you have an answer, I thank you for taking the time to read this. I hope you and yours have the Merriest of holiday seasons and a sane and safe Happy New Year! GM


  9. In the FormCreate procedure, I fill the IllChar array with all the control characters from zero to 31. It is this line: 

    for Kounter := low(IllChar) to high(IllChar) do IllChar[Kounter] := chr(Kounter);

    In the global vars, I had declared IllChar as an array [0..31] of string, so I THINK I got them all. Then, in the compare area, I loop through the array and check with the POS() function. Now, it's eminently possible that my stuffing line, which I assume fills it completely, fails for a particular code. I haven't EVER used this concept before, since I've always depended on Woll2Woll's masks to keep out the stuff that I want out. And I never checked after the fact. And I truly wonder if these data corruptions are causing the insertion of the characters after the fact. In fact, it's the ONLY explanation I have. They are happening. I've seen the EurekaLog errors, and I've seen the Rebuilder logs with the notations. But they are like ghosts. When I GET a chance to go looking for them, they're gone. 

     

    That said, your suggestion of parsing each PN character by character and testing each one against #32 is worth trying (a sketch of what I mean follows below). Approaching any problem that doesn't produce results from a different direction is always worth trying. Thanks for the suggestion. I'll report back here with the results.
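
    A minimal sketch of that per-character test (my naming; this isn't code from the thread, just the shape of what I'll try):

    // True if S holds any control character below the space character.
    // Checking S[i] directly sidesteps the 32 Pos() calls per record.
    function HasControlChars(const S: string): Boolean;
    var
      i: Integer;
    begin
      Result := False;
      for i := 1 to Length(S) do
        if S[i] < #32 then begin
          Result := True;
          Exit;
        end;
    end;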


  10. Sue,

     

      Thanks for your response. I've been a fieldByName guy forever. So I've never thought much about how it retrieves (and potentially changes) the string from the A15 field in the Paradox database. (And yes, PN is a constant equal to 'PartNum'.) I DO know that the stated problem child has an unusual PartNumber. I remember, eons ago, putting in a mask for just alphanumerics and a dash. But that changed 'bout five years ago, when a client more or less demanded this plus-sign part number scheme because they were building out a temporary set of items to fulfill some legal judgement against them ... not having something or other to meet some local code of some sort. It was meant to go away. But they decided after the fact NOT to let it go away. Ergo, a change to the mask to allow the plus sign ended up being permanent. I checked that. So I'm more or less of the opinion that it's something evil sneaking into the data. I DID have a problem with one place where the minions were managing to get carriage-return characters into a list, rather than just filling in the blank with one long comma-separated list, which is what should have been entered. (The miracles of copy and paste.) So I had to add a Strip(DeptNums, crLF) into my posting check (see the sketch below). I really only found it when exporting to XLS, where some DeptNums were double height in the spreadsheet. Otherwise it was colourless and odourless. 
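
      For the flavour of it, here's the shape of that Strip-style helper (not the real GProcs routine; assume crLF is a string constant holding #13#10):

    // Return S with every character that appears in Chars removed.
    function Strip(const S, Chars: string): string;
    var
      i: Integer;
    begin
      Result := '';
      for i := 1 to Length(S) do
        if Pos(S[i], Chars) = 0 then
          Result := Result + S[i];
    end;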

     

      The other place where evidence rears its head is in Rebuilder, which is a customized version of RK Solutions' Pdxrbld. Here's a slightly redacted example for one table in a recent rebuild:

       Table EXAMPLE        - #records: 90802
       Index EXAMPLE.PX     - Index version does not match table version
       Index EXAMPLE.XG0    - Secondary Index onF1 is out of date
       Index EXAMPLE.XG1    - Secondary Index onF2 is out of date
       Index EXAMPLE.XG2    - Secondary Index onF3 is out of date
       Index EXAMPLE.XG3    - Secondary Index onF4 is out of date
       Table EXAMPLE.DB     - Alpha Field SVException in record 68422 contains low ascii chars
       Table EXAMPLE.DB     - Alpha Field SVException in record 69147 contains low ascii chars
       Table EXAMPLE.DB     - Alpha Field SVException in record 70147 contains low ascii chars
       Table EXAMPLE.DB     - Forward block pointer in block 2893 is out of range
       Table EXAMPLE        - Errors fixed (#records: 90803)
       Table EXAMPLE.DB     - successfuly packed

       As you can see, there are three records that trigger a low-ascii-chars warning. However, by the time I'm alerted to this, the rebuild has changed the database, and I'm stuck without a reproducible error to work against. I DO have our Rebuilder zip up the raw data, but it goes to a dated name that gets overwritten all day if there are multiple rebuilds. EVERY TIME I ask for them to look before doing it again, I get blank looks. We hire the cheapest, not the best. (Including, probably, the programmer.) 

     

      When I saw this example, I did change my code to test all string fields in each record. Came up clean as a toothpick in ready-to-eat cake. A bit frustrating, but c'est la vie. I have to get this all transferred into NexusDB, and then this problem goes away. Maybe I shouldn't be wasting my time and everybody else's on an issue that sort of fixes itself when something else breaks and Rebuilder gets run. Sure is curious, though.

     

      Thanks for the DOCUMENTATION code you sent me. I didn't QUITE get it all straight, but I figured out an alternative once I realized I could get scripts out of the SERVER's SQL context menu, whereas I had been looking at the SQL found in MANAGER. It was quite the DUH moment. But taking a bit of your code, that creation code, and my own stuff, I have something that equates to the documentor I needed from BDE. I kicked it all over to the Atomic Scribbler novel-writing software and made it into a Documentation Writer too. So thanks again for help here AND in the past.


  11. It might be relevant to remember that the authors of the wizard are Chinese, coming from a different reading environment than western European. Traditionally, they read top to bottom, right to left. So, as Uwe says, they would think of row-oriented reading as vertical, since you read all of row 1, then all of row 2, etc. And that's vertical from their perspective. 


  12. Down the rabbit hole again. I'm hoping the smart minds here will continue to try and help me. I am battling to complete a conversion of a BDE project, but it still has to run in the meantime. Written in D7. It's been going for close to 30 years, dating back to the venerable DOS Paradox 1.11i. Since Microsoft's Fall 2017 update to Windows 10, our data integrity has been under daily attack. Running a rebuilder session once a day is a good day; seven times in a day is ... all too often. MOSTLY, the problems have been Blob Modified and Index out of Date errors. In both cases, I believe the timing between Win10, SOME Win7 workstations, BDE and the server is just off. Guessing the locking protocols are broken badly. But we persevere. Today, a NEW problem. 

     

    ONE user, running the app in a Win7 VirtualBox on Win10 (one of my workarounds to try and minimize the BDE issues), got an error that clearly says: "Invalid characters. Field: PartNum". I wondered if it might be a picture-mask issue, but no, this was an existing part, and once into the system, part numbers are inviolate and can't be edited at all. So I presume our persistent integrity issues were at play, and somehow a low-ascii character had been inserted/appended during a rebuild session. Indeed, in rebuilding logs, I have seen references to up to four records needing repair for low ascii. So I built a small utility to check for these characters. And found nothing. The error message, very reproducible, and my blank results are ... not mutually compatible. So I must be doing something wrong in my search for elusive control characters. The array is 0 to 31. Here's the code:

    procedure TFrmMain.FormCreate(Sender: TObject);
    begin
      t.DatabaseName := 'GM';
      t.TableName := 'PARTNUMBERS.DB';
      t.Active := True;
      TotalRex := t.RecordCount;
      // Fill IllChar with the 32 control characters #0..#31.
      for Kounter := Low(IllChar) to High(IllChar) do
        IllChar[Kounter] := Chr(Kounter);
    end;

    procedure TFrmMain.BtnStartClick(Sender: TObject);
    var
      mNote: string;
    begin
      CurrRex := 0;
      with t do begin
        // DisableControls;
        First;
        while not Eof do begin
          Inc(CurrRex);
          g.Position := Trunc(100 * CurrRex / TotalRex);
          Application.ProcessMessages;
          ThePN := FieldByName(PN).AsString;
          mNote := '';
          // Note each control character (by ordinal) found in the part number.
          for Kounter := Low(IllChar) to High(IllChar) do
            if Pos(IllChar[Kounter], ThePN) > 0 then begin
              if mNote = '' then mNote := PN + ' ... ';
              mNote := mNote + ' ' + IntToStr(Kounter);
            end;
          if mNote <> '' then m.Lines.Add(mNote);
          Next;
        end;
        // EnableControls;
      end;
      ShowMessage('Finished');
    end;

    Is it possible that the D7 POS function won't find control characters in strings? Or a subset of the control characters that includes the ACTUAL problem characters? Did I goof up the note creation that adds the memo lines? I slowed the thing down by using a gauge and commenting out the disableControls/enableControls.

     

    Ideas appreciated. Thanks in advance, GM


  13. Oh, and bonus: after using the explaining vars to ... slow down the app??? ... compiling didn't throw one of the F2039 errors during about 10 more compiles. Any changes I made before compiling were mostly a LOT less than 2 minutes apart. So, wins all round. Here's the code I used to either slow things down or actually cure the issue:

    P1Value := fieldByName('Parm1').AsFloat;
    P2Val := fieldByName('Parm2').AsFloat;
    // Assert check to see if each value is actually within range ...
    Assert((P1Value >= 0.00002) and (P1Value <= 20), 'Bad P1Value: ' + fieldByName('Parm1').AsString);
    Assert((P2Val >= 200) and (P2Val <= 500), 'Bad P2Val: ' + fieldByName('Parm2').AsString);
    VPP := P1Value * P2Val / 3;

    When I look at it, it makes no sense to me that this fixes the AV. I admit, repeatedly calling the AsFloat function for the same field contents in a single line is not good. But the assertions NEVER triggered (as I expected), and changing the VPP assignment to use the explaining variables rather than the function calls works, although I don't know why. It worked the original way to PRODUCE the data. But it triggered that exiting AV. I guess we call it a win for assertions and refactoring, however small.

     

    Thanks Kryvich and the rest of you. GM


  14. I'd already ditched the assignment of a default value of zero. It's an old habit of mine to set some default for a variable before starting some formula work.

    3 minutes ago, Kryvich said:

    Assert((fieldByName('Parm1').AsFloat >= 0.00002) and (fieldByName('Parm1').AsFloat <= 20),
      'Bad Parm1 value: ' + fieldByName('Parm1').AsString);
    Assert((fieldByName('ParmBV').AsFloat >= 200) and (fieldByName('ParmBV').AsFloat <= 500),
      'Bad ParmBV value: ' + fieldByName('ParmBV').AsString);

    So adding the assertions didn't change things, other than showing that no records failed the assertion. BUT adding explaining vars for the arrays with the lookup function that determines the index DID. I've subsequently run every filter, date range, data sort and flag for showing zero-value records that there is. All sequentially within the same run. NO AV.

     

    THANK YOU ALL. Your patience has been rewarded. I have my app, and a meeting with the boss in a half-hour for the next task.


  15. Attila,

     

      As I've said, I'm hamstrung by employer instructions NOT to post code. As it is, I'm skating on thin ice. I appreciate your frustration. I mirror it.

     

    To everybody,

     

      There are some new pieces of information. The Range-Check did, in fact, reveal an error that contributed to an AV UPON exit of the program without doing anything. Turns out one of the minions had entered a Dept name WITH A RETURN attached ... obviously through copying and pasting ... and THAT was being read in the form creation, and it was kicking out the AV on ending, sending me on a wild goose chase for the AV there. So I fixed the code NOT to just override bad departments and keep on going through a Try ... finally block that I had never checked for failure, because it couldn't fail ... until it did. I now get an error message if another minion decides creativity can be fun. When I run the program now, I can exit without failure, having fixed the record with the bad dept in it. Happiness?

     

      No. I STILL had an AV if I ran the analysis BEFORE I SHUT DOWN. I looked through that button's code for places where things went south. I looked at the assignment of string grid cells, with a FloatToStr conversion on the run. I commented it out. No fix. I commented out the whole string-grid-cell assignment code. No fix. I commented out each little part until I came to:

        VPP := 0;
        VPP := fieldByName('Parm1').AsFloat * fieldByName('ParmBV').AsFloat / 3;

      Comment out the second line and everything runs without an AV. I changed EACH part to be 1, and EACH time I ran the program and ran the analysis, I got an AV. VPP is a declared Extended variable. The range of values for Parm1 can be in the 0.00002 to 20 area. The second field runs 200-500. Each can go to five decimal places. And the last part is a static integer. The results are maxed out at about 30K. That's well within the scope of Extended, is it not? Even the temporary 100K, which is borderline possible, is within scope, isn't it? I DO convert the result to a string later, JUST with the FloatToStr function, for sticking into a string grid.

     

      EVEN with the closing AV, the RESULTS of this line leads to correct results in the string grid. I've manually checked EVERY SINGLE cell.

     

      The more I know, the less I understand. And it doesn't help that I have to wait two minutes between compiles or I get the dreaded fatal error F2039 Could not create output file.

     

      So, I HAD two AVs although only one would show at Program exit. Now I'm down to one. And it's with basic math. That actually works.

     

      Dare I ask for ideas in a thread I've tried to shut down twice? Sure, why not?

     

      Thanks to everybody for their patience and willingness to help. GM


  16. 2 minutes ago, Kryvich said:

    First of all, you need to set your build configuration to Debug (Projects view | Build Configurations - double click Debug), and enable debugging options (menu Project | Options | Building | Delphi Compiler | Compiling):

    • Optimization: False
    • Stack Frames: True
    • Debugging: all settings to True
    • Runtime errors: all 3 settings to True

    Perhaps after that this error will appear earlier than when you close the application.

     

    Usually you do not need to initialize arrays - they are initialized with zeros automatically. How is the arDeptsTotal array declared? Is it 0-based or 1-based? 

     

    Based on the code snippets you provided, the TList<TYourData> class might be more appropriate for you. But the good old array should work too.

    Kryvich,

     

      I did set Optimization to false and turned on range-checking due to earlier contributions. I will further augment my debugging environment as you have suggested. Thank you.

     

      My arrays are one-based for convenience's sake. Each array's high bound is set at the count of those depts (or tasks, in the other pairing). I have snooped around lists a bit, going back to Orpheus days. The modern-day implementation seems much more to the point. But, as you say, the good old array does get the work done. Identifying a list element's index through its string value would be nice, though.
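
      For what it's worth, a minimal sketch of that lookup-by-string idea in XE7 (names here are invented, not from my project), using TDictionary from System.Generics.Collections in place of the paired arrays:

    uses System.Generics.Collections;

    procedure AccumulateDept(Totals: TDictionary<string, Extended>;
      const TheDept: string; LineVal: Extended);
    var
      Total: Extended;
    begin
      // Look the department up by its code directly; no index
      // bookkeeping, no parallel totals array to keep in step.
      if not Totals.TryGetValue(TheDept, Total) then
        Total := 0.0;
      Totals.AddOrSetValue(TheDept, Total + LineVal);
    end;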

     

      GM


  17. Mr. Heffernan,

     

      Offers to help should always be welcome, and they are here. Debugging this is obviously beyond my skill set at this point. I'm 'changing things around' hoping to discover where my AV is ACTUALLY percolating, only to flare up at the end of the program. I'm doing it one change at a time, looking for the Eureka moment. I've started in places that look different to me from my decades of using D7. XE7 is a new world. Even the declaration of the one constant array was subject to internet research, since I wanted to use variables as the end/high element in the var declarations. I was hoping, for example, to declare the variable arrays as arDepts: Array [1..DeptCount] of string, but static array bounds have to be compile-time constants, so that was out. It was there, as a side process, that I found a different way of declaring the constant array. Interesting and easier to read. BUT I wasn't going to be able to get arDepts declared as tightly as I wanted. Ergo, make it 100, and that covers expansion for the rest of my life in the number of departments. It's currently TWO for this project. Eight if it goes company-wide.

     

      Now, posting function by function here would resolve the curiosity AND satisfy the appreciated desire to help that folks here have. It would also be against the explicit instructions I got from my employer NOT to do that. Genericizing the material would suffice to some extent, but again, the Boss said no. He needs me to produce tools in the fast-changing environment that we are currently in. My time is HIS time, to do with as he sees fit. Producing a 'simple' case would presume I know generally where things are tripping me up. It's at the end of the program. Beyond that ... nothing? Creating a stripped-out version of the program would require replacing 3rd-party components that I can't publish (with GExperts), eliminating items that would identify my employer, and doing it all while MAINTAINING where the bug's born. I apologize, but I can't do that. Puttering around with MY spare time seems my best compromise, because wasting YOUR time and the time of the others is disrespectful.

     

      I hope you understand. Thanks, GM


  18. One more thing ... I left StackOverflow because help seemed to come at a very high price there. I've lurked since. My improvement as a programmer has stagnated because of that. MY FAULT. I should have been less sensitive, better at learning on my own. I found Praxis here and read through the threads, some of which were contentious. But I really appreciated the tone of those dialogs. Help seemed the raison d'être of the site. And my request for help here has been answered with that same tone of voice. Asking to end the thread seems disrespectful towards the efforts already put forth. Please understand that that is not my purpose. I simply think that the process, moving forward, would waste more of the collective time of the people trying to help. I do not wish to do that, nor insult anybody. THANK YOU for trying to help. I'm simply ill-equipped to aid in the helping of myself. GM


  19. I appreciate each and every suggestion above. The problem with the program is that it's proprietary work, and the sum total of the code is several pages. That said, I am looking very hard at my use of the arrays. Not surprisingly, the two var arrays are linked, i.e. one holds a Dept code, the other the total for that Dept. BUT I am not NEARLY using the full array. In fact, each run-through of the program captures a count of the Depts, and I use that to totalize up the Depts. BUT there are only a handful of Depts currently involved. I chose 100 for the limit to protect my backside against successful expansion of this project.

     

    I am NOT initializing the arrays with a loop to '' and 0.00 respectively. I don't see a need. I never go PAST DeptCount, so I'm not accessing unassigned memory. At least, I'm PRETTY SURE that's the case. I AM force-capturing the INDEX for the particular pairs. See this:

    function TFrmMain.ReturnDeptIndex(ADept: string): integer;
    var
      idx: integer;
    begin
      Result := 0; // 0 means 'not found' -- note the arrays are 1-based
      for idx := 1 to DeptCount do
        if ADept = arDepts[idx] then begin
          Result := idx;
          Break;
        end;
    end;

    I assumed there was a direct way to get the index in XE7 but couldn't find one. Thus the kludge above. It's pretty representative of my lack of elegance when it comes to programming. That said, I don't see anywhere this can induce an error by accessing memory it shouldn't. Slow, but successful. Besides, EVEN IF I DO NOT CLICK THE BUTTON that does the analysis, even just starting the program and immediately exiting throws the AV.

     

    At the top of the function, I include these lines:

    var
      tmpIdx: integer;
      kounter: integer;
    begin
      for kounter := 1 to DeptCount do arDeptsTotal[Kounter] := 0.00;
      ...

    The function is later called via these lines:

    tmpIdx := ReturnDeptIndex(TheDept);
    // NB: if TheDept is ever NOT found, tmpIdx is 0, and indexing a
    // [1..100] array at 0 writes outside it unless range checking traps it.
    arDeptsTotal[tmpIdx] := arDeptsTotal[tmpIdx] + LineVal;

    Again, I'm missing what could be wrong with those lines.

     

    I do create a fair number of SQL queries on the fly. There's a bunch of filterable fields, a date range, a results-sort picker, and a checkbox flag to include zero-value lines. It's more "get it to work" code than incredibly featured code. But it runs on for a fair bit. Plus, I couldn't get my boss to give me permission to spend the time and effort to create the stripped-down version that all of you are (reasonably) requesting.

     

    At this very moment, the decision has been made to start using the alpha, AV and all. It's an analysis tool rather than a data manipulator, so the boss is happy to have something that gives him the desired info. I'm still changing things around, literally looking at every procedure and function and seeing if I can re-write them and maybe expose the AV by tearing up its hidey-hole. But you can't be expected to do my work for me. So I am respectfully going to ask that we call a halt to this thread, since I'm wasting your time. I appreciate all who contributed. You've given me directions to go on debugging this. And I will explore each suggestion.

     

    Thanks, GM

     


  20. Cristian,

     

      I was aware that the code did not represent the full code. Several pages' worth. The AV occurs on program exit. From looking at the error log, I came to the conclusion my mistake was in not clearing out the memory assigned to the arrays. I presented my declaration of the arrays in case I was doing that wrong. As for NOT DOING the things I tried, I am ruefully aware that what I was doing was not working.

     

      All that said, it appears something else, other than array finalization, is creating the AV. More than possible. As a self-taught programmer who learned mostly through examples, it is very possible, bordering on probable, that I've done something careless, and that's the source of the AV, error log notwithstanding. In my original post, I did point out that the faulting line is frequently the NEXT THING, rather than just what passed. It does appear that is the case here, and I've wasted everybody's time focusing in on the arrays.

     

      Thanks for moving me in that direction. GM


  21. Uwe,

     

    That had been my supposition in the past. But I get an Access Violation error on program exit, and when I look at the stack that trips the error (admittedly a presumption, since many errors are actually on the invisible next line, so to speak), I see that the program goes into the system for Halt(), FinalizeUnits, my main form's Finalization, _FinalizeArray and lastly _UStrArrayClr. It trips whether I do anything with the arrays or not. Looking at the assembler version of _UStrArrayClr, there's a MOV line that trips an exception, followed by a DEC and a JL instruction, and then the ominous warning that the next line is an inaccessible location.

     

    Sorry to bring such a newbie question to the forum ... which I've found entertaining, be it at a higher level than I am at. Thanks for helping, GM


  22. I have a small project in XE7, which is newish to me, where I have to get rid of an Access Violation error on exiting the application. From the error log, I know something is amiss vis-a-vis the arrays I'm trying to clean up.

     

    I have one global constant array of strings, two global static arrays of extended, and two global static arrays of strings.

     

    I have tried the SetLength method for cleaning up the arrays, the Finalize method, FreeAndNil, Free, and setting the arrays to nil. My attempts sometimes survive compiling, sometimes don't. Researching the issue on the internet mostly turns up dynamic array clearing. Not much about static arrays, and not a solution that I could find. I've tried placing the clearing attempts in OnCloseQuery and, disastrously, in OnClose. At this point, I'm stumped.

     

    Is there a definitive way to do this? GM

     

    My declaration code in the main form looks like this:

     

    var
      ...
      ar1: Array [1..100] of string;
      ar1Total: Array [1..100] of extended;
      ar2: Array [1..100] of string;
      ar2Total: Array [1..100] of extended;

    const
      TonerColors: TArray<string> = ['Cyan', 'Magenta', 'Yellow', 'Black'];

     
