Everything posted by jobo

  1. jobo

    error on Interbase 2020 server

    Also just guessing (I don't use IB): try an alias other than "inc"; maybe it's a reserved word. And, less of a guess but not tested either, try more explicit grouping here:

        from (select i.goods_id,
                     sum(i.qty) as inc,
                     cast(0 as float) as sale,
                     cast(0 as float) as writeoff
              from income i
              where cast(i.recdate as date) <= :d
              group by i.goods_id
              union all
              select s.goods_id,
                     cast(0 as float),
                     sum(s.qty) as sale,
                     cast(0 as float)
              from sales s
              where cast(s.recdate as date) <= :d
              group by s.goods_id
              union all
              select w.goods_id,
                     cast(0 as float),
                     cast(0 as float),
                     sum(w.qty) as writeoff
              from writeoff w
              where cast(w.recdate as date) <= :d
              group by w.goods_id) dt1

    The "safe" rule for grouping is: group everything that is not aggregated. You are only grouping one column out of the three unaggregated ones. The exact rules for grouping depend on the database and even on the data model (most of all the PK definition). At the moment I only know MySQL and MariaDB to be very problematic when it comes to grouping, which leads to wrong results or misleading error messages. Finally, run the union part stand-alone to dig down to the error.
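    For the last point, running a branch on its own could look like this (taking the third branch from above as the candidate):

        select w.goods_id,
               cast(0 as float),
               cast(0 as float),
               sum(w.qty) as writeoff
        from writeoff w
        where cast(w.recdate as date) <= :d
        group by w.goods_id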
  2. jobo

    FireDAC Postgres ByteA field

    bytea is not a standard blob; it is more like a string, encoded to cover every case of special chars or nulls. There are also large objects in pg, but that type has even less support. Try using ftStream instead of ftBlob, and look up the special flags for pg blob handling in the Embarcadero docs. If your data is "close to text", which could be the case reading "pages", you might also try another column type like text or even json/jsonb, as sketched below.
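    For that last idea, a minimal sketch of such a type change in PostgreSQL; table and column names are hypothetical, and the USING expression assumes the bytea content really is UTF-8 text:

        alter table pages
            alter column content type text
            using convert_from(content, 'UTF8');  -- decode bytea into text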
  3. jobo

    FireDAC Postgres ByteA field

    Could you please add the column definition from this table (DDL) and the query statement?
  4. jobo

    Blast from the past: BDE and Win10 W/S

    Well, I've never been there, neither Canada nor Baden-Baden, just Karlsruhe; it's a nice place. Nobody is forced to spend time adding posts to forum threads. Even if it's not helpful to you, it might be to somebody else. I'm keeping my fingers crossed for you!
  5. jobo

    Blast from the past: BDE and Win10 W/S

    Didn't know about type A personalities; here in Germany we only have "Alpha Männchen" (alpha males) or, a little nicer, "Silberrücken" (silverbacks) - different story. It seems to be medical terminology, invented for the main risk group for heart disease. And please don't get mad at me for touching this, but you don't have to justify yourself for anything. You asked for help, I'm asking for facts (or source code); I won't accuse anybody. Me giving an example of the potential risk of (unnecessarily) provoking resource conflicts during logging is not about embarrassing you.

    Ok, reading all these path settings, here's my most concrete and urgent note, although this is probably a well-known issue of BDE: be aware of 150% correct, or more specifically identical, path settings for each and every BDE configuration accessing the same database! Now, how could they differ? Well, the UNC notation suggested later on might describe the same location as any share definition. In fact, even UNC alone gives 2 variants when mixing up IP and DNS usage (e.g. \\192.168.0.5\data versus \\fileserver\data), and of course you can have an "endless" number of share definitions pointing to the same file. This is poison to BDE.

    And another hint: instead* of doing the big rework, you could do little changes, improvements, tuning. Having and hoping for the all-new fancy app is nice, but it doesn't solve existing problems. It's a question of priorities, budgets and so on. Of course, that's at your discretion.

    * probably in parallel
  6. jobo

    Blast from the past: BDE and Win10 W/S

    Well, heart surgery, that reminds me of friends who got too involved...

    In order:
    - BDE and 250 databases means: 250 folders with Paradox files?
    - "Nexus" means: not in use, development in progress?
    - DX7 means? .. still 250 pdx folders, or some SQL server in place? And not talking about DX7: talking about just D7 .. ?
    - So which database formats are actually in use (production)?
    - Just to be clear: you are using BDE to access solely pdx files/tables?
    - "Each project" means: this is kind of generic software, multitenant capable, or just running the "same" (yes, it's highly customized) software in several instances?

    My points: I just asked the "customer" question to get a picture of the nature of the software and of the options that remain / are suitable for experimenting / troubleshooting. "Code" would simply add some real facts. 2 million lines of code are not easy to summarize, but there should be some ~15% pattern showing the principles. Perhaps some anonymized code. You are writing a lot, but I don't get the picture, I'm afraid. Maybe it costs quite an effort, but code would be helpful at this point. Or what are you expecting to be the "right moment" in a programming forum to show code? Maybe you're afraid of the criticism that's coming. But finally, what is there to lose? Questioning the code, the logs, the user habits, spotting the root cause, the evil table or something like that has the possibility of discovering a misunderstanding in the picture, even by chance. Identifying a bad pattern and eliminating it - one by one - could be more helpful than its mathematical share.

    For instance, the concurrent history access you mentioned is something I would try to avoid at all costs. Here is one idea:
    - doing history / logging stuff should never harm the business case
    - that counts in itself, and
    - it counts all the more in concurrent business cases
    - obvious examples: logging the error "no more table space left" into a table wouldn't be a great benefit; logging "the whole complicated process" into a table which finally gets affected by a rollback in case of an error preserves nothing at all when it's most needed.

    So:
    > do the history log just by adding (insert) a single record instead of changing the same (big) one (see the sketch below)
    > even better, do the history stuff outside the system responsible for the business case itself
    > use a plain text log file with a common, well-known format for analytic or other use
    > well, what about "writeln(...);"?

    Maybe this example is not applicable in your case, but .. you know ..

    So finally, you might be lucky and it's not about irreversible changes by MS updates - which might have been the straw that broke the camel's back - but about tweaking a few (often used) code scenarios and removing serious weight from the camel's back.
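    A minimal sketch of that insert-only history log (all names hypothetical):

        -- one INSERT per event instead of updating the same (big) record,
        -- so logging never competes for the row locks of the business data
        create table history_log (
            logged_at  timestamp    not null,
            username   varchar(64),
            event      varchar(255)
        );

        insert into history_log (logged_at, username, event)
        values (current_timestamp, :username, 'order confirmed');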
  7. jobo

    Blast from the past: BDE and Win10 W/S

    I didn't read carefully, but I have some remarks (my experience with BDE goes back years and of course I've had similar problems): besides all the technical stuff, some parameter tuning and so on, I didn't read much about your activities in searching for "the root cause" or anything alike. Well, it's BDE and we all know its weaknesses. But you wrote something about peaks on Fridays and Mondays! Let's take this as an example: what happens on Fridays and Mondays? Did you try to find out? Do people really turn off the PC without leaving the form, without closing your app, ..? Perhaps without leaving the form, with the dataset still in 'dsEdit' ..? Is there any logging in your app? Could you find some correlations between certain error rates (Friday, Monday, ..) and certain user behavior or app states or form states? Or: is there some simple correlation between working with "THEBIGTABLE" with "THE BIG CONTENT" and the error rates? What about users who don't "produce" or "experience" errors? What about user roles, or the smoothly running features versus the bad ones?

    P.S.: You are doing a migration to Nexus? Is it possible to publish a little example of code showing changes in related records, like create / edit / sync master and detail records? Is it correct that this software is not something internal, but used by and sold to customers?
  8. The "returning" mentioned here is part of the SQL command; it returns values of auto IDs and other expressions. Maybe this is not available in Pervasive. Here is the Firebird documentation explaining it: http://www.firebirdsql.org/refdocs/langrefupd21-insert.html This is of course a solution coming from the database side. FireDAC offers client-side solutions.
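     From the linked Firebird documentation, the clause looks roughly like this (table and column names hypothetical):

        insert into customers (name)
        values ('Smith')
        returning id;  -- the generated id comes back in the same round trip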
  9. jobo

    Best site/source for SQL Server questions?

    I'm not an MS SQL specialist. In fact I don't like it. So I have no specific points here. From your description it sounds like everything is mixed up a little. I don't know if this is just an impression from the description or if it describes the present situation. The problem I mentioned above occurs when selling limited real-world articles (not things like software license keys). And selling things which are already sold is really a problem. But this is different from optimizing picking paths, keeping pallets available etc. So without specific problems in question I can just give some final hints:

    - clean up and separate issues (maybe even separate systems)
    - do application and processing optimization on every single step (having a few lazy spots is sufficient to break the system*); in fact SQL tuning (single statements) could speed up things by factors you can't afford in hardware
    - tuning could be achieved by:
      - denormalization for speed (space/precalculation is the price for speed, speed avoids growing locks) - see the sketch after this list
      - normalization for speed (find old flaws which call for double work and endanger data accuracy within the time-critical process)
      - index optimization (mostly indexing the proper columns)
      - using the most advanced features of the system (which could be an SQL feature like CTEs, or also a really cool new index technology - I'm not up to date with SQL Server here)
      - avoiding local (client-side) processing in favour of SPs (the famous "if loop" and so on)
      - avoiding costly deduction of states; use modelled, explicit, simple states
      - considering additional technologies to notify clients on state changes etc. (decreasing stock count, ..)
      - considering specialized use cases to ease the situation, including data model changes if necessary (see also: denormalization)

    * Finally, healing a few smaller flaws could add up to more than the sum of the single effects.
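    A hedged sketch of the "denormalization for speed" point (all names hypothetical): keep a precalculated stock count on the article row instead of summing the whole movement table inside the time-critical transaction.

        -- precalculated column; must be kept in sync by trigger or application code
        alter table article add stock_cached integer;

        update article
           set stock_cached = (select coalesce(sum(m.qty), 0)
                                 from stock_movement m
                                where m.article_id = article.id);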
  10. jobo

    Best site/source for SQL Server questions?

    From my point of view, this is the best way to make use of database transaction control when you are on the client side. The server's transaction handling is the server-side implementation to guarantee a complete, error-free transaction, considering each and every rule known to the server. So-called client-side "transaction handling" tries to achieve the same without being the server. Of course you can do this, but it f.. ups the server principle and comes at a price: it uses more resources, and by doing so it leads more quickly to the situation you'd like to avoid - timeouts, lock escalation (mssql), locked tables, deadlocks.

    So the first thing to do is to make the process as lean as possible. Prepare everything possible before the final transaction. Use highly specific SPs if possible (the RDBMS offers strong SP features), which could mean, for example: avoid parameter-driven "smartness" of the SP and choose the best possible SP (and processing path) from the client. If necessary, model the "processing path" more explicitly in your database, which could be a table for "reservations", or distinguish between a view mode (of the data to be processed) and an "I'm really buying now" mode (see the sketch below). Leave the developer comfort zone and stop using client transactions; instead make tiny, meaningful steps in backend (transactions) and frontend, and try to handle the resource problems that come up gracefully.
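    A minimal sketch of such an explicit "reservation" step (all names hypothetical):

        -- a reservation row separates "just viewing" from "really buying now";
        -- the final transaction only converts reservations into order items
        -- instead of re-checking and locking the whole stock
        create table reservation (
            article_id  integer     not null,
            session_id  varchar(64) not null,
            qty         integer     not null,
            reserved_at timestamp   not null
        );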
  11. jobo

    Best site/source for SQL Server questions?

    Some thoughts and questions on this:
    - the only requests like this one that I know come from MS SQL "users", so some systems seem to be picky about row-level locking, others not
    - the SQL handling problem itself is bound very closely to the real-world problem of demanding a physically limited thing
    - row-level locking in MS SQL Server under load could cause "lock escalation", aka a much more general locking such as page locking
    - maybe one has to think about some denormalization, which of course has to be accompanied by adequate constraints, keys, triggers, etc.
    - handling the "grabbing" might be easier, quicker and more robust if handled by a single, atomic grab stored procedure (considering the denormalization) - see the sketch below
    - which version is in use?
    - could you elaborate on "..the actual grabbing is totally void of transactions.." please?
    - or describe the workflow a little?
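    A minimal T-SQL sketch of such an atomic grab procedure (all names hypothetical):

        -- check and decrement in one statement, so no article is grabbed twice
        create procedure GrabArticle
            @ArticleId int,
            @Qty       int
        as
        begin
            set nocount on;
            update Stock
               set Qty = Qty - @Qty
             where ArticleId = @ArticleId
               and Qty >= @Qty;
            return @@rowcount;   -- 1 = grabbed, 0 = not enough stock left
        end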
  12. jobo

    General DB access question -- paging query results

    Adding a paging mechanism by using restrictions in the SQL where clause certainly doesn't use extra indexing. The base query itself may use indexes, if they are well designed and fitting. I think Lars was just surprised that components today really do that ~"offset trick", aka make use of more modern SQL features.
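    That "offset trick" in standard SQL looks roughly like this (supported e.g. by PostgreSQL, Firebird 3+ and SQL Server 2012+; names and parameters hypothetical):

        select id, name
          from customer
         order by id                     -- paging needs a stable sort order
        offset :skipped_rows rows
         fetch next :page_size rows only;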
  13. Ok, sorry, I read somewhere about a general discussion, and up to now nobody but you knew about the record counts. Of course, a few thousand records can't be a problem in themselves. So I already asked the magic question, and you have to pick your poison. Use some of the magic switches already mentioned. Do intelligent initialization instead of swallowing the whole database into memory; that includes restricting the data right at the beginning. At the moment the processing seems to be the complete opposite of a "scaling" design. Does abstraction make problems simpler and lead to simple solutions? I don't think so: divide and conquer! Have a look at the details! All the best for 2019 to all forum visitors!
  14. Well, I'm afraid you basically have two big problems from a technical perspective. Taking your explanation from the other thread https://en.delphipraxis.net/topic/476-any-suggestions-on-how-to-lazy-load-a-blobclob-from-oracle-db/ into account, these problems result from an incomplete or imperfect "migration" from BDE. Wherever a dataset is opened by "select * from anytable;", it misses the required precision in horizontal and vertical data limitation:
    a) * doesn't cope with specific field handling requirements like BLOB data
    b) a missing filter (where clause) forces the components used to fetch all data from the remote server (ignoring bandwidth shared with others)
    (see the sketch at the end of this post)

    Now the question is, how do you imagine a cure? (I'm looking cross-eyed at the other thread again.) Are you looking for a kind of magic switch? Are you doomed to an unintelligent solution? (Because of management, resources, ...) When I wrote my idea of a "workaround" I wasn't really aware that you are selecting thousands or millions of such BLOBed records unfiltered. (Which is, from my point of view, a no-go even without BLOB fields involved.)

    Okay, the good news is: there are magic switches, limiting the number of fetched records. The bad news: talking about big and complex queries, you will sooner or later face the problem that a big and complex query itself takes its time to be evaluated even on the server side. Any sort operation, for example, is where the trouble starts. And as explained in the other thread, the paging mechanism you are looking for is implemented with sort, limit and offset clauses. So if you ask the server to run a complex query against millions of rows, also returning millions of rows without grace, then the magic switch asking for just a few of the records will not help in every situation.

    Let me put it this way: even a "modern" paging mechanism doesn't help if the data design doesn't scale. You will not find an app which offers high-speed data retrieval without some kind of windowing or boxing or filtering of your data prior to showing it. And a normal use case in fact offers a lot of such limits; it's "natural". A clerk works on one order after the other, not with thousands or millions. The same goes for the customer hotline: they ask for my number each time, and yes, after that we are talking about my order and everything I ordered, but this isn't about millions of order items.

    And I'm sorry, but your requirements (from the other thread) sound a little like "wash me, but don't wet me". In any case, you can make use of a powerful database and some good components, which you seem to have available already, to get these problems under control (even with an older version of Delphi). At least that was what I did when I was actively developing Delphi apps, starting with BDE, moving to ADO/OLEDB and ODBC, using the mechanisms discussed here. Sure my suggestion would help, if you implement it.
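    A minimal sketch of both limitations (column and parameter names hypothetical):

        select order_id, customer_id, order_date   -- a) explicit columns, no BLOBs
          from orders
         where customer_id = :customer_id          -- b) a real filter, so the server
           and order_date >= :from_date;           --    doesn't ship all rows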
  15. Is this yours: http://forums.allroundautomations.com/list.html ? If so, it would have been nice to mention this here. Anyway, my idea is: create a view which returns all fields 1:1, except your CLOB column, which is truncated to just a few characters. (Maybe an individual truncation comment could be added as well, if truncation really happened.) Now change your app to show the view instead of the table, and add an event for editing & showing the complete CLOB. This is no paging, but a quite minimal change to the app (see the sketch below). Of course it depends on the capabilities of your components; I never used them. But at least the tools from AA are well known to me and they are quite fast. I guess they use their own data access components, and this workaround would be a quick fix.
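    A hedged sketch of such a view in Oracle (table and column names hypothetical):

        -- only the first 200 characters of the CLOB are exposed to the grid
        create or replace view docs_preview as
        select id,
               title,
               dbms_lob.substr(body, 200, 1) as body_preview,
               case when dbms_lob.getlength(body) > 200
                    then ' (truncated)' end as truncation_note
        from docs;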