Gary Mugford

Blast from the past: BDE and Win10 W/S


Before I start, I have to confess that I am in the middle of trying to rewrite the systems software using NexusDB. Progressing, but not fast enough.

 

In the meantime, our decades-old app is encountering pervasive corruption issues with BDE using Paradox file formats, Windows Server 2013, and now mostly workstations running Windows 10 and kept up to date. The problem started with the Fall 2017 update to Windows. Ever since, we have had a regular series of Blob is Modified and Index Out of Date errors. We cope by having all seats (approx. 50) get out of the app and associated programs. We run reBuilder, which is a modified version of PdxRbd. Then it's back in, and we wait minutes, sometimes an hour, before the cycle repeats itself. Sometimes we get people jumping back in early during the rebuild and THAT EATS THE DATABASE, with a note that there has been a sharing violation. Luckily, we do zip up a copy of the raw data beforehand and can recover in that circumstance. But it's a LITTLE stressful to see major databases disappear from one minute to the next. And when it happens when I'm not there, the place ACTUALLY STOPS WORKING on the computer. Everything goes manual, and that is NOT a good thing in today's world.

 

I have theorized that this issue is related to continuing problems with BDE locks and Windows locks going out of sync, if they were ever IN sync. That said, I'm no Windows tech expert. Might be something else. Probably is something else.

 

What I am looking for is a setting that can make BDE work in the Windows 10 environment the way it STILL WORKS under Windows 7. I don't have the ability to roll back the OS on the workstations, although I spent a night writing up a proposal to do exactly that. The answer was no. Find another way. And I haven't. The non-answer has been to make the workers endure the stop-and-start operation that occasionally results in unrecoverable data loss (mostly lines out of history memo fields). And it's getting a LOT worse, and I'm still too far away with the NexusDB operation to look my fellow workers in the face.

 

So, I'm reaching out to you DATABASE EXPERTS (give yourselves a pat on the back just for coming here to help out people like me) to see if any of you can remember back to when you ran BDE. Or might STILL be running BDE in a small controlled environment. And can suggest a setting, a tool, a whatever. I HAVE started running a VirtualBox test, but the workstations all have minimal RAM, and having half of it taken up by a VM running a memory-starved Win7 environment just to run the app turns out to be glacially slow. Although it DOES work. So far, we have two computers set up that way, and the hardware guy says he can average three more a week. Which means I would have possible solutions up and running some time in the late spring. At which point, I would hope my direct solution might be ready. So, as much hope as I had originally in the VirtualBox concept, it's NOT the solution for the immediate crisis.

 

Regardless of whether you have an answer, I thank you for taking the time to read this. I hope you and yours have the Merriest of holiday seasons and a sane and safe Happy New Year! GM

Guest

Long time since BDE. Sounds like you could try TS/Citrix for the 50 seats in order to minimize corruption. The clients would be "closer" to the pdx tables and could maybe exit gracefully when connection problems arise.

 

/D


It is indeed quite a while since I had to cope with BDE problems, so my suggestion might be pretty outdated now. Nevertheless you can give it a try as it is only a small change in the registry.

 

What helped us in the past was disabling opportunistic locking at the server side (where the database files reside). For this you have to change the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\EnableOplocks from 1 to 0 (as 1 is the default value, it might even be absent). Note that you have to restart Windows to make this change effective.
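In case it helps to script the change rather than click through regedit, something like this on the file server (from an elevated prompt, reboot afterwards) should set it:

    reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v EnableOplocks /t REG_DWORD /d 0 /f

(As far as I remember, this classic value only governs the old SMB1-style oplocks, so it is worth checking how the newer SMB versions on your server handle leasing as well.)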

 

A drawback of this setting is a possible overall network performance drop for that server system.

 

 


3 hours ago, Uwe Raabe said:

What helped us in the past was disabling opportunistic locking at the server side (where the database files reside). For this you have to change the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\EnableOplocks from 1 to 0 (as 1 is the default value, it might even be absent). Note that you have to restart Windows to make this change effective.

If I remember correctly, it is possible to set this on a per file type basis, but I am not sure whether that was on Windows or Linux/Samba.
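If it was Samba, I believe it was the veto oplock files share parameter. A rough sketch of what that might look like in smb.conf for Paradox files (share name, path and file masks are only placeholders):

    [paradoxdata]
        path = /srv/paradox
        veto oplock files = /*.db/*.mb/*.px/*.x??/*.y??/*.val/*.lck/
        # or, more bluntly, disable it for the whole share: oplocks = no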


I would locate the problematic tables and outsource them into a decent DBMS and rewrite the affected parts using 2 concurrent connections to get some peace for the first time. Then I'd not touch it for the next decade.


Back for some thanks. Appreciate that the group of you took the time to reach out to me. My abject apologies for the delay in getting back. I have excuses, but excuses are what they are. Attila, I will point you to my first line, indicating I'm trying to do what you suggest. Unfortunately, the problematic tables are actually 249 problematic tables ... not counting local ones. That's why the forever-taking complete rewrite using NexusDB.

 

Dany, I asked the hardware guy about citrix and was told there's no money in the budget for the product AND HIS TIME implementing it. Sigh. 

 

Now, as for the combined contributions from Uwe/dummzeuch: I had some success making each computer run the application in Windows 7 compatibility mode. I'd say the incidence of issues dropped by something close to 60 percent. STILL A LOT. But a pleasant break from the nightmarish lack of uptime.

 

For whatever reason, Mondays and Fridays are the days with the worst incident rates. By a considerable margin, with Monday REALLY leading the pack. Yesterday, we had to interrupt work to rebuild FOUR times because of Index Out of Date errors. ONE workstation at the second building running Win10, doppelganged back to a machine running Win 7 at the main building, kept getting an error along the lines of 'Operating System Software does not work' (I say along the lines, because THREE different people gave me three variations of the message over the phone, with none of them being the same as any other). EurekaLog was running, but the error came early enough that it didn't get going and send me a detailed error report. Sigh.

 

I wonder if I'm at fault for yesterday's mess. Over the weekend, I put a new feature into one of the apps that uses the data. ONE of the side effects of running Rebuilder is that we seem to have the auditing log memo field in the database get lopped off about two or three entries down, losing, in some cases, 19 years of change-by-change auditing details, almost 2M of text per record in some cases. So, I wrote a new feature into an app to UNhack the note by calling up a backup image of the database and gluing the history in IT onto the audit note in the currently used version of the database. I wonder if I needed to revisit the shortcut to the program, telling it to use Win 7 compatibility mode, because the program had been updated. Turns out, the setting was still there, BUT maybe updates have SOMETHING to do with the frequency of Monday issues. I tend to keep my updates to the apps themselves to the weekends. Cause and effect??

 

Which brings me to Uwe's suggestion. I assume we are talking about the Microsoft server for the change, and that the change would come at the cost of performance for EVERYTHING on the server. So I would have to weigh the downtime and data corruption we are currently experiencing against everything slowing down for NON-data-related work. Do I have that correct? I couldn't just tell it to use the setting for a particular folder. BUT I might be able to determine it on a file-by-file basis if I was willing to put in a weekend doing it for a couple of thousand files? (I would certainly be willing to do the latter, given the circumstances). IF so, if it CAN be done file by file, how much beyond the reg setting indicated above is there to be done, if I may ask??

 

Thanks to everyone who took the time to offer help. I've turned ON the Notify me of replies option this time. My fault completely. GM

 

 


I still have a customer who uses BDE.

At one time I included the PdxRbd source in the program. I do not know if that is possible in your case.

However, from my 20 years of experience with this "out of sync" problem, it seems to happen when they shut down the computer without closing the program. Nowadays it is OK; still, it might happen once every half a year or so, and then I run PdxRbd.

At the time, after a long investigation, there was no real solution to that problem.

Furthermore, I did NOT GIVE PERMISSION to update from Windows 7 to 10!!!

 


Limelect,

 

  Thanks for taking the time to reply. As I mentioned in the first post, we have an app that is a slightly modified version of PdxRbd (I zip up all the files BEFORE doing work on them, because not every rebuilder session goes smoothly). It's THAT app that is being used for repair. Prior to the Nov 2017 Windows 10 update, rebuilding sessions occurred an average of three times a year dating back to 1999. Prior to that, data corruption in Paradox was a once-every-other-year issue dating back to Paradox 1.11i's release here in Canada in the 80's. BDE was a great backbone that worked for longer than it should have. Had I been wiser, faster, smarter, this would not be an issue. But it is.

 

  GM

2 hours ago, Gary Mugford said:

I asked the hardware guy about citrix and was told there's no money in the budget for the product AND HIS TIME implementing it.

Apparently this program is not very important for the company, so maybe it should just be retired instead of being replaced.

Or is somebody maybe having the wrong priorities here?


@Gary Mugford At one time BDE was very popular. I have many projects with it without any problems. Only this one customer, who holds big amounts of records. Furthermore, I have a few tables without any problems; only this one table with large records.

So who would have known that large table data would make problems?


Hello

Please use pxrest.exe (Table Restructure utility) and set the version to 7 and Block size to 32768 for each table, then use reBuilder
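For reference, and assuming a stock BDE install, the matching driver defaults live in the BDE Administrator under Configuration > Drivers > Native > PARADOX, so that newly created tables pick them up as well:

    LEVEL        7
    BLOCK SIZE   32768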

Hope this helps

Regards


2 hours ago, dummzeuch said:

Apparently this program is not very important for the company, so maybe it should just be retired instead of being replaced.

Or is somebody maybe having the wrong priorities here?

dummzeuch, 

 

  The facts are that the company is running tight these days because of economic pressures introduced by the President of the United States on companies here in Canada in certain business sectors. There's simply no money in the budget, and I've seen the numbers, so I know he's telling the truth. Both the hardware guy and I are external contractors, and we've been doing this for this company now for just shy of 30 years. The reality is that we careen from one issue to the next WHILE I am writing the replacement software.

  A decision was made for a ground-up replacement to streamline the application and add new features that weren't needed when I first designed the DOS version of the software 35 years ago. I'm a tad slower in my old age, unfortunately. And I'm labouring with the switch to NexusDB AND the switch to Delphi XE7 at the same time. It's MY fault there is no 21st-century solution to this problem, and I can't expect others to spend time and money when we CAN MAKE DO, with interruptions on a still manageable scale. I'd rather spend my own money and/or time than suggest solutions that I know will result in losing personnel.

  Yes, it needs retiring for the replacement I'm writing. No, it is NOT not important (yes, I'm aware of the double negative). And no, there are no wrong priorities here. Just an issue I seek help on. Which, apparently, is beyond the ability of anybody to help. It's not that aid hasn't been offered. It's just an untenable position. As I have known for a while. But knowing and having it confirmed are two different things. Sigh. As I said, MY FAULT. Nobody else's.

 

  Really appreciate the time you've taken considering this. Big fan of your blog and your contributions in providing code for the community. 

 

1 hour ago, Job espejel said:

Hello

Please use pxrest.exe (Table Restructure utility) and set the version to 7 and Block size to 32768 for each table, then use reBuilder

Hope this helps

Regards

Job,

 

  When this problem first surfaced a year and a half ago, I started going through the BDE settings. According to my notes, we switched to a 16K block size about 15 years ago when we switched to Microsoft Server, and to 32K sometime in mid-2011. We've run v7 since it became available. I've been a Paradox user since 1.11i. I also explored, on a week-by-week basis, changes in the Init settings and have them as optimized as I believe I can. Now, is it possible my melange is actually a BAD mix?? Sure. I'd be a narcissist if I thought otherwise. I've had references to Opportunistic Locks beyond what's here, so I've handed that off for consideration by my partner and he'll look at it. Whether it is implemented, or even kept after implementing it, is beyond my area of responsibility. But I will look THROUGH the databases to see if any of them have somehow got the wrong block size. It never hurts to look at even the smallest chance. Thanks for suggesting it.

 

  GM

2 hours ago, limelect said:

@Gary Mugford At one time BDE was very popular. I have many projects with it without any problems. Only this one customer, who holds big amounts of records. Furthermore, I have a few tables without any problems; only this one table with large records.

So who would have known that large table data would make problems?

Limelect,

 

  I wouldn't say we have MASSIVE amounts of data. It runs something a bit north of 1G in total. Almost all text. The problem is my original design. I used a lot of 1-character-length Memo fields as History fields (along with a random sprinkling of memo fields that do other things, spread through the almost 250 databases). All of it was designed originally under DOS, and I was fighting record size a fair bit in those days of limited memory and storage. Having M1's became a habit, as I always tried to stuff a record into the most efficient size possible.

 

  Now, as memory and disk space became something that could be squandered (despite Bill Gates' prognostication we'd never need such capability), the option of keeping larger memo stubs within the Paradox table itself surfaced, but the gain seemed not worth the pain in terms of restructuring. And Paradox was an amazingly stable database backbone. It just didn't corrupt if you wrote to the disk ASAP on posting. I went DECADES with some clients with no issues REQUIRING REBUILDER. But BDE got pushed aside as the various progenitors of Embarcadero tried to re-monetize default data systems. And here we are. The company is paying for a new database backbone. I'm implementing it. And I have a NoteBreaker app to turn all those audit history notes into discrete records; I ran a benchmark to find out how fast I could do it: THREE WEEKS on a standalone computer. More than seven million records. And when the switch time comes, I will have to run NoteBreaker again for the new notes created in the interim.

 

  Sure wished I had looked into the future better than I did. Better than Gates did. But I didn't. 

 

  Warn your customers about BDE and Win 10. It won't end well. 

 

  Thanks, GM


I don't get it. 7M records or 1G of data in 3 weeks is ~4 records/sec or ~551 bytes/sec. Btw. are there really 250 "databases"?


Attila,

 

  There are 7M records with notes that range from a few characters to several megabytes in the MAIN database. To BREAK those notes into discrete records, I have to go through each one line by line and evaluate what piece of information is being described: date, time, user, process, and a variable number of lines of changes that roughly have the field name, [before] the data the field held before, [now] the data the field has now. Not every snippet turns into a record on the first try. I run two different alternative decoders on each exception and give up if none of the three work. The issue is that these notes date back to the 90's. Formats changed because I did NOT think clearly enough about the long-term future 30 years later. Back then, auto-auditing was something I could brag about. Today ... not so much.
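  To make the shape of that decoding concrete, here is a minimal sketch in the spirit of NoteBreaker. The exact line format, the record layout and the SplitChangeLine name are all just assumptions for illustration; the real decoders obviously handle far more variations.

    uses
      SysUtils;

    type
      TAuditChange = record
        FieldName: string;
        OldValue: string;
        NewValue: string;
      end;

    // Try to split one change line of the (assumed) form
    //   SomeField [before] old text [now] new text
    // Returns False when the line does not match, so the caller can
    // hand it to an alternative decoder instead of giving up right away.
    function SplitChangeLine(const Line: string; out Change: TAuditChange): Boolean;
    var
      pBefore, pNow: Integer;
    begin
      Result := False;
      pBefore := Pos('[before]', Line);
      pNow := Pos('[now]', Line);
      if (pBefore = 0) or (pNow = 0) or (pNow < pBefore) then
        Exit;
      Change.FieldName := Trim(Copy(Line, 1, pBefore - 1));
      Change.OldValue := Trim(Copy(Line, pBefore + Length('[before]'),
        pNow - (pBefore + Length('[before]'))));
      Change.NewValue := Trim(Copy(Line, pNow + Length('[now]'), MaxInt));
      Result := True;
    end;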

 

  I'm absolutely sure there are faster ways to do it. I could skip constant screen updates. I could skip testing for a stop button. I could write better AI. Well, *I* couldn't. But I'm pretty sure that my plodding code could be improved. That said, I'm in the realm of 'if something works, move on' at this point. The bigger picture is a replacement UI and business logic model.

  When this company started, they had three-week lead times for their work. Today, it's today, tomorrow at the latest. Sure, there are long-term contracts, but the customers no longer give three-week lead times even in those circumstances. Storage costs. Better YOU store it than I. Storage and transportation is now global, with labels to match (lots of fun doing Chinese and Russian labels for a customer company that thinks you can fill labels from edge to edge, using CAD to do so). It's a different world now for the company than it was when I wrote the first version of the program in DOS, and we never did a re-design after the one switchover to Windows and Lantastic. Since then, we've added features. And features. And features. THAT's how you get to 249 databases. The original program never had accounting modules, sales forecasting modules, worker performance analyses, etc. It was an inventory program with not much smarts.

  I tried to normalize as much as possible. I have a database of extended descriptions because ONE company wanted descriptions about three times the size I allotted to that field. So, for their 0.05 percent of the total parts, we have a database that includes those extended descriptions. When printing material for them, I check for the presence of an extended description and replace the normal one with it. Was it better to do that under normalization rules, or simply restructure the main database to have a description field of three times the characters? I went with normalization. In the rewrite, with space not a concern, the field is three times bigger. Times change.

  When Manifesting became a thing, I wrote a company-specific Current Invoices to Manifest database. A one-time thing, I was told. THEN we were told we had to archive them. The CORRECT programming choice at that point would have been to add an Active field and filter. BUT I DIDN'T USE FILTERS at the time. I looked at everything through FULL info lenses, so I just created an archive database and moved things back and forth on the container principle. Then, OTHER companies wanted Manifesting (guess the word got around at trade shows in the hospitality suites). Suddenly, I had a proliferation of these new twin sets of databases (11 at last count). All will now reside in a single database with CustomerID and Active filters to manage everything in one place. I've reduced the number of databases in NexusDB to 112. The NUMBER OF FIELDS in the main database has almost tripled. Advocates of normalization would look at my database design and try to have me fired on the spot. Sigh.

 

  We make do with workstations that are a tad long in the tooth but still work. I'm GUESSTIMATING the breaking time on one of them as a function of it being slower than my machine by a factor of three. Could I be wrong, based on the sample I used to do all the math and the assumptions contained therein? Sure. But I ran a 12-hour session on my computer and never got past the 10 percent mark. And that's with a LOT of the longest-term records to break.

 

  I'm not saying you're wrong, Attila. But I THINK your math requires better hardware and better programming by me to be accurate.

 

  Thanks, GM

 

NOTE: And for those that think I'm too self-effacing or fond of self-flagellation, I'm trying to be truthful about my limitations. At the same time, I'm a self-taught programmer who actually spent the first 15 years of my working life as a working sports reporter and broadcaster. I have one computer sciences credit from a major Canadian university, but I got that back in high school. I spent time at one school on a Journalism scholarship before dropping out of school to go 'pro' as the saying goes. I also briefly attended college for two years part-time in order to get a degree to satisfy the folks. Fell a couple of years short when the newspaper I worked at asked me to start coming in on Fridays too (with a raise), meaning I didn't have any day each week to write tests and meet profs at school. I did contribute to a textbook used extensively in the business curriculum at yet another of Canada's largest universities, despite my lack of academic credentials. I KNOW what I don't know and that's a LOT. But I've done well despite my lack of an education. At least that's what my various employers think. They keep paying me. And have for more than three decades. Working man's code. Nothing better. But good enough.


For Paradox, the dbiSaveChanges() BDE API function in the dbiprocs unit should be called after posting any records to the database.  The function takes the Table.Handle as the only parameter.  Calling this function ensures that the records are fully committed to the database file.  In my experience, it eliminated the Index out of date and Blob mismatch errors.

 

If you are not using it already, it may be worth a try.
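A bare-bones sketch of the wiring, in case it helps anyone searching later. Form, table and handler names are placeholders, and this is only the relevant fragment of a form/datamodule unit; in the Delphi versions I used, DbiSaveChanges and Check come from the BDE and DBTables units.

    uses
      DB, DBTables, BDE;

    procedure TMainForm.tblCustomerAfterPost(DataSet: TDataSet);
    begin
      // Push the freshly posted record out of the BDE buffers and into the
      // physical Paradox files immediately, instead of waiting for the cache.
      Check(DbiSaveChanges((DataSet as TTable).Handle));
    end;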

 

 

 

3 hours ago, Trevor S said:

For Paradox, the dbiSaveChanges() BDE API function in the dbiprocs unit should be called after posting any records to the database.  The function takes the Table.Handle as the only parameter.  Calling this function ensures that the records are fully committed to the database file.  In my experience, it eliminated the Index out of date and Blob mismatch errors.

 

If you are not using it already, it may be worth a try.

 

 

 

Trevor, 

 

  Good points. That was the first procedure in the first general library I ever wrote in Delphi 7. And every database is set up and then connected to dbiSaveIt (my procedure's name in GProcs7, which gets added, along with GDefs and GBDE, to every project). It's second nature at this point to create the table (I ALWAYS use woll2woll's InfoPower controls for this stuff and have for most of this century), then immediately go to AfterPost and type 'dbi' Ctrl-Space, which expands to dbiSaveIt(dataset.handle);

 

  But I'm going to take a look tonight to see if the statement can be found the same number of times as there are database objects in my projects. You never know; I COULD have missed things back in 2017. Not with the databases where the heavy memo work is, but maybe some of the newer stuff.

 

  Thanks, GM


I didn't read carefully, but I have some remarks (my experience with BDE goes back years and of course I've had similar problems):

Besides all the technical stuff, some parameter tuning and the like, I didn't read much about your activities in searching for "the root cause" or anything alike.

Well, it's BDE and we all know its weaknesses. But you wrote something about peaks on Fridays and Mondays!

Let's take this as an example: what happens on Fridays and Mondays? Did you try to find out?

Do people really turn off the PC without leaving the form, without closing your app, ..?

Perhaps without leaving the form, or the dataset, in dsEdit .. ?

Is there any logging in your app? Could you find some correlations between certain error rates (Friday, Monday, ..) and certain user behavior, app states or form states?

Or:

Is there some simple correlation between working with "THEBIGTABLE" with "THE BIG CONTENT" and the error rates?

What about users who don't "produce" or "experience" errors? What about user roles, or the smooth-running features versus the bad ones?

 

P.S: You are doing a migration to Nexus?

Is it possible to publish a little example of code, showing changes in related records, like, create/ edit/ sync master and detail record?

Is it correct that this software is not something internal but used by and sold to customers?



Jobo;

 

  Appreciate your extensive inquiry. As for the preponderance of issues on Mondays and Fridays, I have looked into usage patterns. The main app, some 2M lines of code worth, is NOT the only program accessing the data. There are 15 other apps, SOME written in Delphi XE7 with the 32-bit personality, the rest in Delphi 7. I tried to spot the possibility of an XE7 app being used on the target days. No such pattern from the 39 users who use the product during the day and the 9 that use it at night. The programs will refuse to exit if still in edit mode. Now, I can't prevent exiting via Task Manager, but that's not a step I believe any users were using as an out. And I do have EurekaLog running on all the apps. Looking at the errors, they mainly pertain to two: Index is Out of Date and Blob has been Modified. The latter is almost always, I believe, a conflict between two users making changes to the same record, with both producing an audit change to add to the top of the History memo field. The former?? No idea, other than possibly cached updates via Windows not keeping up with the BDE changes, which I write to disk IMMEDIATELY on posting via the AfterPost routines.

  Now, there are TWO other smaller-percentage issues. Windows updates somehow affect the Win.INI file, a leftover remnant that BDE REQUIRES in order to know which idapi.cfg file to use (we use the centrally located one on the server). So, we get an error about initializing the BDE. Users know to run BDEadmin and 're-load' the central CFG file. Annoying, but somewhat predictable. The other issue is an error on exiting, for some users some of the time, citing a Variant or safe array is locked. THIS error is attributable to a library that I used to create the records for the auditing. The third-party library has a flaw in cleaning up after itself. It's STILL there because the program that some of the users use was frozen in time after a crash made the source code needed to change it inaccessible (actually multiple failures due to natural disaster ... but again, that's excuse making). In their daily use, there are certain sequences that trigger use of the library, and it's predictable that one or two of those sequences will mean the user MUST stay long enough for that error to appear on shutdown, so that they can exit out of the error message. This lets the program exit finally. BUT the user in question has to make a bus that leaves right around our shift end. Sometimes, she doesn't stay long enough. And the backup that night fails because she is STILL in the program, occupying BDE. SyncBack just ignores files in use. Sigh. Now, I've looked, and the last time we ran into the variant error and backup failure was in November of last year. The time before that was in June. Three times in all last year. Not really a contributor to the REGULAR WEEKLY ISSUES, but not nothing either.

 

  We DID run a correlation analysis on the Blob errors. We found one machine that was involved a high percentage of the time. That machine was rebuilt. Blob is Modified errors fell by a third but did NOT go away. On the other hand, Index Out of Date errors zoomed.

 

  The newer XE7 apps do time out after two hours of non-activity. I don't have logs of in-and-out access, but there IS that feature in the new program. The new program is completely written off-site (I'm a contractor, not an employee, with other projects and customers. But when you work for a guy for 30 years, when the guy pays you even when you are out sick OR NOT PRODUCING UP TO PAR, you identify WITH THE COMPANY). Each project is highly, highly customized and not for sale elsewhere.

  I DID look at selling the software in question to another company in the same sector that a Bridge partner was starting up, but even a first glance showed that it would be too much effort for the discount price my pal was offering. The companies might have been in the same sector, but they differed greatly in process, and even the terminology was different. My customer, who was aware of the side sale and was willing to allow it for a percentage of the new cash flow, employs a veritable mini United Nations, and terminology became colloquial fairly quickly. It wasn't like I could just slip in a different language set. So no, there are no retail sales of any of my software. It's systems software for THAT company. The one exception is a POS system for comic book shops that has been sold, but by the company I wrote the software for originally, with both of us splitting the take. I maintain that software for shops from Ontario to Nova Scotia, but it's only about ten in all. That's my one foray into retail sales. (I'm not a sales kind of guy.)

 

  As for the code ... it's probably not helpful in the current context. A snippet out of millions of lines doesn't say much. In terms of what the new program does, it doesn't use Paradox/BDE-like functions and procedures at all. It's a ground-up rewrite that minimizes the UI in terms of data shown, so that we eliminate the forest/trees issue of too much data and let each user home in on just the screens of data needed to do their job. They don't start from the lobby in this software, they go right to their office, if that metaphor helps.

 

  I've detailed that this issue ONLY STARTED with the Windows November 2017 update. Had there been hiccups before that? Sure. Especially the Blob is Modified errors. But in general, rebuilding was a once-or-twice-a-year issue for decades, IF THAT. Then it became a problem needing a solution. Being a one-man shop, I knew the job of replacing the software was not an overnight job. I WAS hoping things would work until I could get it done. And I was on pace to do just that despite some health issues. Nov '17 took that margin away. I've TRIED to find ways to eliminate the problems and failed, just like apparently everybody else. I then had to minimize the issues. And failed again. After some help elsewhere, I WAS able to slow down the rate somewhat. Still not good enough. So I brought the issue here and have had some good suggestions. None of them exactly resolved the issue, but the opportunistic locks change will be tried on the weekend (the HW guy checked and the registry entry is NOT on the server, so it will have to be created). Maybe that helps. Multiple users have suggested it. Maybe it only cuts it down a bit. But I take bits these days.

 

  Hope that clears things up a bit for you. I've certainly not been clear enough because you were confused. I'm guilty of running on and on and on and on. Thankfully, folks like you still want to help.

 

  Appreciate your interest and willingness to help. GM

4 hours ago, jobo said:

I didn't read carefully, but I have some remarks (my experience with BDE goes back years and of course I've had similar problems):

Besides all the technical stuff, some parameter tuning and the like, I didn't read much about your activities in searching for "the root cause" or anything alike.

Well, it's BDE and we all know its weaknesses. But you wrote something about peaks on Fridays and Mondays!

Let's take this as an example: what happens on Fridays and Mondays? Did you try to find out?

Do people really turn off the PC without leaving the form, without closing your app, ..?

Perhaps without leaving the form, or the dataset, in dsEdit .. ?

Is there any logging in your app? Could you find some correlations between certain error rates (Friday, Monday, ..) and certain user behavior, app states or form states?

Or:

Is there some simple correlation between working with "THEBIGTABLE" with "THE BIG CONTENT" and the error rates?

What about users who don't "produce" or "experience" errors? What about user roles, or the smooth-running features versus the bad ones?

 

P.S: You are doing a migration to Nexus?

Is it possible to publish a little example of code, showing changes in related records, like, create/ edit/ sync master and detail record?

Is it correct that this software is not something internal but used by and sold to customers?

Jobo,

 

  One other factor I should stress: there isn't a budget to run solutions against. The company employs a fair number of people, none of whom are getting rich, including the owner. We scrimp and save and get by. We have some pressures currently due to political shenanigans, but so does everybody in the sector here in Canada. Moaning about the issue doesn't help. We use eight-year-old workstations that weren't new when we got them. We improvise to network in a second building via doppelganger machines and the internet. The HW guy is brilliant at making things work with nothing more than the will to do so. We use older Delphi and don't keep our licence up to date because we simply can't afford to. The cost of keeping Delphi up to date is an extra part-time worker to help with the workload that keeps all of our workers working almost to their maximum. I'm PROUD to be associated with them. They MOSTLY work to the work's end, not to the time clock. And they put up with me ... and that ain't easy. They are my friends.

 

  GM

Guest
10 hours ago, Gary Mugford said:

I'm PROUD to be associated with them. They MOSTLY work to the work's end, not to the time clock. And they put up with me ... and that ain't easy. They are my friends.

This is something I have strived for for 30 years. I have held clients for 25 years, and am still holding them. Making clients pay up to limit technical debt is very, very difficult because they do not see what they get for the bucks. I have tried variations of agreements, but it all comes down to patience and belief (belief in the client to have similar values, or something else that suits you). The ones who invested in me are very pleased when they look at their competitors' choices. I am typing this just to say that even though you are in dire straits at the moment, you should be proud, and you have my respect for not simply culling your business in those straits.


Dany,

 

  Thanks for your kind words of solidarity. Honestly, the biggest burden is borne by my partner in crime, the Hardware Guy. What he does scrounging here and there for stuff at prices the company can afford is sometimes spectacular. And when not, merely great. Me? There was a time when I averaged two updates every three days for four years. I was young and full of energy. And I loved what I was doing. Now, the first two of those things are no longer true. But the third one is. I do get frustrated at times, but then I remember a guy who paid me while I was in hospital after heart surgery. Paid me when I didn't produce a single lick of code. Paid me when I was producing the odd clunker code that cost him money (accepting the blame for incorrect formulas I'd been given). I'd no sooner give up on him than I would family. He and his minions ARE family. (And no, not everyone. There are a few I'd gladly have exported to a war zone.) I accept (because I've seen the books) that money is tight. It's been that way for years, because every time the times are good, workers get raises. And when belt-tightening is called for, everybody shares the pain, a bit. Even me.

 

  So, I make do. Dreaming of that which can't be had is the way to madness.

 

  GM


Well, heart surgery, that reminds me of friends who got too involved...

 

Order:

BDE and 250 databases means: 250 folders with Paradox files?

"Nexus" means: not in use, development in progress?

DX7 means? .. still 250 pdx folders, or some SQL server in place?

and for the apps not in DX7, just D7 .. ?

So which database formats are actually in use (in production)?

Just to be clear, you are using BDE to access solely pdx files/tables?

"Each project" means: is this kind of generic software, multi-tenant capable, or just the "same" (yes, it's highly customized) software running in several instances?

 

my points:

I just asked the "customer" question, to get a picture of the nature of the software and the options that remain / are suitable in experimenting / trouble shooting.

"code" would simply add some real facts. 2 Million lines of code are not easy to summarize, but there should be some ~15 % pattern, showing principles. Perhaps some anonymous code.

You are writing a lot, but i don't get the picture, I'm afraid. Maybe it costs quite an effort, but code would be helpful at this point. Or what are you expecting to be the "right moment" in a programming forum to show code?

Maybe you're afraid of the criticism that's coming. But finally, what is there to lose?

Questioning the code, the logs, the user habits, spotting the root cause, the evil table or something like that has the possibility of discovering a misunderstanding in the picture even by chance.

Identifying a bad pattern and eliminating it - one by one - could be more helpful than its mathematical share. For instance, the concurrent history access you mentioned is something I would try to avoid at all costs; here is one idea:

- doing history / logging stuff should never harm the business case

- that counts in itself and

- it counts all the more in concurrent business cases

- obvious examples:

-logging the error "no more table space left" into a table, wouldn't be a great benefit.

-logging "the whole complicated process" into a table, which gets finally affected by rollback in case of an error, preserves nothing at all when it's most needed.

so

> do the history log just by adding (insert) a single record instead of changing the same (big) one

> even better, do the history stuff outside the system responsible for the business case itself

> use a plain text log file with a common, well known format for analytic or other use.

> well, what about "writeln(...);"?
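To make the last two points concrete, a very rough sketch; the file name, the line format, everything here is just an illustration of the principle, not production code:

    uses
      SysUtils, Classes;

    procedure AppendAuditLine(const LogFile, UserName, TableName, FieldName,
      OldValue, NewValue: string);
    var
      Stream: TFileStream;
      Line: AnsiString;
      Mode: Word;
    begin
      // One tab-separated line per change; appending never rewrites old data.
      Line := AnsiString(Format('%s'#9'%s'#9'%s'#9'%s'#9'%s'#9'%s'#13#10,
        [FormatDateTime('yyyy-mm-dd hh:nn:ss', Now),
         UserName, TableName, FieldName, OldValue, NewValue]));
      if FileExists(LogFile) then
        Mode := fmOpenWrite or fmShareDenyNone
      else
        Mode := fmCreate;
      try
        Stream := TFileStream.Create(LogFile, Mode);
        try
          Stream.Seek(0, soFromEnd);                 // always append at the end
          Stream.WriteBuffer(Line[1], Length(Line));
        finally
          Stream.Free;
        end;
      except
        // Logging must never harm the business case, so failures are ignored.
        // A real multi-user setup would retry, or write per-user/per-day files.
      end;
    end;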

 

This example might not be applicable in your case, but .. you know..

 

So finally, you might be lucky, and it's not about irreversible changes by MS updates - which might have been the straw that broke the camel's back - but about tweaking a few frequently used (code) scenarios to remove serious weight from the camel's back.


Jobo,

 

  Once more into the breach (and thanks for the warning about being too involved ... I'm a Type A personality, it's the only way I know)

 

  The organization of this software, for THIS client, built to THEIR specs ...

  All applications (the main one plus, IIRC, 16 applets) go into T:\DataApp. The apps are, with three exceptions, written in Delphi 7. The other three are written in Delphi XE7, the latest licence I own.

  All config files for BDE and the network file for BDE go into T:\Idapi.

  All databases go into T:\DataApp\Data. That's ALL 249 Paradox databases with their attendant files, ballpark 2000 files in all when you add up the DBs, the MBs, the X# and Y# index files, etc. There's an ASC file and a REP file in there somewhere dating back to 1995.

  The REORGANIZED new version of the software will still see the applications in DataApp. The config files and such will be set up on the server, because NexusDB, the new database backbone, is a true database server rather than a file-based desktop database like Paradox. The databases will be going into T:\DataApp\NxData.

 

  This is NOT generic software. Far from it. It uses their business model to do THEIR business the way THEY do it. Terminology is extremely individual (BOM is Recipe, for example). This is NOT any sort of modification of an existing software base or anything else. It was built to be the most rudimentary of inventory tracking back in the day when the office was two people and the CEO (freshly elevated to the position by his retiring father, after being everything from janitor to salesman under his Dad). One computer for the accountant slash office manager. The other computer for the head of purchasing, who WAS keeping her purchases straight on a bank calendar (TRUE STORY). That was in the mid-80's and I was working with Paradox DOS.

  The company had one customer, a large American firm whose name you've heard. Twice in the next three years, the company was named International Supplier of the Year by that customer. Two more sugar-daddy clients had been added, and the company wasn't in a strip mall any more; it was in its own building, co-owned with the bank. And there were now THREE, count 'em, THREE computers in use. At that point, the Boss went to a trade show and was given a spiffy CD with a presentation of all-in-one software for their specific niche. I took a look at it overnight and went in and told him he had to fire me. That I couldn't reproduce the interface in less than a year, and I wouldn't even start to guess at the business rules. I was just one guy; this software was produced by a team of 10. I was just being honest with him. We shook hands and parted friends. I went on to write some warranty-entering software for a company in a different sector.

  Eight months later, I got a call. The ladies in the office had rebelled. The original sugar-daddy customer had rebelled too, warning that one more consecutive negative performance evaluation would end the business relationship. And I was asked to come in on a Friday afternoon and discuss the problems the all-in-one software was creating for the company. I was asked to create 'something' for further discussion Monday AM. Which I did, an OBVIOUSLY ALPHA product. Which they started using after lunch and never stopped using in the (I looked it up) 34 years since. Much against my protests, they've NEVER run in parallel at any time. I've never made them regret that trust, except for one time when miscommunications caused my forecasting software to go awry and result in under AND over commitments to raw material, netting a loss for the company for the three days before it was discovered.

 

  Do I write software to academic AND real-world best standards? No. Not even close. I'm not elegant. I just plug away to get the end result. Might not be the fastest. In fact, it isn't even close to the fastest. I'm a nibbler. A tester of a small change, and once that works, on to the next change. It's not the fastest developmental strategy. I use direct manipulation of table data with table objects (mostly woll2woll stuff). And remember, I'm working with a file-based desktop database, not a database server product. Heck, Delphi 7 doesn't even have a decent implementation of SQL built in. I designed against normalization rules I gleaned from reading back in the 80's. Did I understand the full set of normal forms? Nope. But I was aware of every byte of memory each record took, and I used normalization rules as they made sense to me. It made MORE sense to me to have a small database of extra data than to create a field in the main database for something of less than 1 percent utility. And I didn't want the overhead of that wasted byte or two. These self-directed rules have never changed, even if worrying about a byte in a database design has become silly.

 

  And I have always, always felt obligated to have auditing EVEN WHEN NO AUDITING EXISTED. There was no auditing of a Paradox database in those days unless you wrote your own. And I stuffed it into a memo field at the end of the actual table because that made sense to me at the time. There's a central auditing history database in the new design. But it's STILL my design, NOT THE AUDITING that is built into NexusDB to let a database server roll back changes. MY AUDITING WASN'T DESIGNED FOR THAT. What it WAS designed for was that small two-person office where changes needed to be tracked, so that we didn't get stuck on the problem of finding out WHO had done it and WHY SHE HAD DONE IT. (True story: the inspiration was a Bil Keane Family Circus cartoon about the little kid being asked who'd done something and the kids automatically naming the ghostly Ida Know and Not Me characters.) In MY OFFICES (all of my projects), we didn't waste time on figuring out who had done whatever; we moved directly to the why. Blame evasion was cancerous in offices, with workers automatically resorting to saying 'Not me' when they didn't remember, or did remember and wanted to avoid censure. That crap went out the door at MY OFFICES. Just cut to the chase of finding out why something had been changed and making sure it didn't happen again if there was some problem. I have always worked on the ground level and I know just how petty office politics can be.

 

  Now, with due respect, since we haven't been on the same page due to the lack of clarity in my writing, the code would be meaningless to you. What code snippet do you want? The code that writes to the history field? Taken completely out of context, it would be meaningless. It works. Has worked. Right up until November 2017. STILL WORKS. I have ABSOLUTELY NO IDEA why it RANDOMLY helps generate a Blob is Modified error in something like a tenth of a case per hundred events, about ten times more than it did in the decades before. And it doesn't have ANYTHING to do with Index Out of Date errors that I can see. I have never indexed a memo field and have operated under the impression it was impossible anyway. Those Index Out of Date errors ONLY GENERATE when somebody signs in and the program burps immediately, before doing anything. The indexes went out of date sometime earlier, and the users who are already in DON'T EVEN KNOW IT. It's only on program startup that it's detected. So, what part of my program, if it is indeed my program, is guilty of being the cause?

 

  Is my code error-free? Of course not. I'm not stupid enough to think that. But when code works mostly problem-free for a LOOOOOONG time and then doesn't work, I did what you wanted me to do: a lot of navel gazing. And for a long time, I thought I was at fault. Until one user detected the near statistical correspondence with Windows updates, starting with the November 2017 update. Turns out, that was the case for most of the first half of 2018. Since then, the problems now seem to occur on Mondays (after HW Guy does some updates over the weekends, which he manually rolls out to control the process) and Fridays. Today is Friday. Today will not be a good day. Or maybe it will be. We've made the Opportunistic Locks change, and I've looked and found six Paradox Level 5 databases provided to us by a customer to schedule their production and deliveries. They've been upped to Level 7, and I will be on the lookout for new schedules arriving from that customer in the future. EVERY LITTLE BIT HELPS. I've also checked the AfterPost event for all the databases in all the table objects (over 400 in total) in all 16-17 apps. All WERE properly set to save immediately.

 

  Jobo, I know you are trying to help, and questioning and answering is how truths can emerge. The problem here, by consensus, is that I should NOT be using BDE, and the sooner I can do my rewrite and start employing NexusDB as the database backbone, the sooner I can stuff this whole issue back into Al Capone's vault. Thanks for being generous with your time. It just hasn't been the case that your experiences and mine hew close enough to give us a common platform to talk from.

 

  Thanks, GM

On 12/22/2018 at 6:40 AM, Uwe Raabe said:

It is indeed quite a while since I had to cope with BDE problems, so my suggestion might be pretty outdated now. Nevertheless you can give it a try as it is only a small change in the registry.

 

What helped us in the past was disabling opportunistic locking at the server side (where the database files reside). For this you have to change the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\EnableOplocks from 1 to 0 (as 1 is the default value, it might even be absent). Note that you have to restart Windows to make this change effective.

 

A drawback of this setting is a possible overall network performance drop for that server system.

 

 

The testing of the registry ADDITION (it was absent) has begun. I will report the (happy?????) results. Thanks, GM

