-
Delphi MT940 implementation (reader, writer)
jeroenp replied to Stéphane Wierzbicki's topic in Algorithms, Data Structures and Class Design
The C# library Raptorious.Finance.Swift.Mt940 was ported to .NET Core 5 years ago at https://github.com/mjebrahimi/SharpMt940Lib.Core/tree/master. I have used the Raptorious one to successfully parse ABN AMRO MT940 files and convert them to CSV at https://github.com/jpluimers/MT940-to-CSV. It should be relatively straightforward to port them to Delphi. --jeroen
-
Point them to the Problems in different writing systems section of the Wikipedia Mojibake page. Did you really solve the problems, or work around them with the table-based approach? Because if you did the latter, you are bound to be incomplete. That's a thing many web tools do. You should reproduce this and report it as an issue in the TMS WEB Core bug category. The classic WordPress editor suffers from the same issue (and a whole lot more issues: search my blog for more), but they marked it as "legacy" while forcing the very a11y*-unfriendly (and pretentiously named) Gutenberg editor (which has other issues, but I digress) into people's faces. The WordPress classic editor, trying to be smart, does that too, especially when you switch between preview and HTML text modes a few times. TMS WEB Core might fall into a similar trap. Be sure to report it as a bug to the TMS people. --jeroen
* a11y: accessibility
-
That would be my first try too. It could just as well be that the on-line PDF-to-text exporter makes an odd encoding error (it wouldn't be the first tool or site doing strange encoding stuff, hence the series of blog posts at https://wiert.me/category/mojibake/ ), which is why I mentioned ftfy: it's a great tool for helping to figure out encoding issues. Looking at https://ftfy.vercel.app/?s=… (and hoping this forum does not mangle that URL), two encode/decode steps are required to fix it, so it does not look like a plain "read using UTF8" solution:
s = s.encode('latin-1')
s = s.decode('utf-8')
s = s.encode('sloppy-windows-1252')
s = s.decode('utf-8')
-
Run these odd-looking Mojibake character sequences through the ftfy ("fixes text for you") analyser, which lists the encoding/decoding steps needed to get from the odd-looking text to proper text, then repeat these encoding steps in Delphi code (using for instance the TEncoding class). This is way better than using a conversion table, because that table will likely be incomplete. It also solves your problem where your Delphi source code apparently got mangled, undoing your table-based conversion workaround. That code mangling can have lots of causes, including hard-to-reproduce bugs in the Delphi IDE itself or in plugins used by the IDE. BTW: if you install poppler (for instance through Chocolatey), the included pdftotext console executable can extract text from PDF files for you.
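To give an idea of what repeating those ftfy steps with TEncoding could look like, here is a minimal sketch. It is my own illustration, not code from either library: the function name is mine, and it assumes that the standard Windows-1252 code page is close enough to ftfy's "sloppy-windows-1252" for the characters involved.

uses
  System.SysUtils;

// Minimal sketch: redo the ftfy steps latin-1 -> utf-8 -> windows-1252 -> utf-8.
// Assumption: plain code page 1252 is close enough to "sloppy-windows-1252" here.
function FixDoubleMojibake(const S: string): string;
var
  Latin1, Win1252: TEncoding;
  Bytes: TBytes;
begin
  Latin1 := TEncoding.GetEncoding(28591); // ISO-8859-1 (Latin-1)
  Win1252 := TEncoding.GetEncoding(1252); // Windows-1252
  try
    // s.encode('latin-1') followed by s.decode('utf-8')
    Bytes := Latin1.GetBytes(S);
    Result := TEncoding.UTF8.GetString(Bytes);
    // s.encode('sloppy-windows-1252') followed by s.decode('utf-8')
    Bytes := Win1252.GetBytes(Result);
    Result := TEncoding.UTF8.GetString(Bytes);
  finally
    Win1252.Free;
    Latin1.Free;
  end;
end;

Instances returned by TEncoding.GetEncoding must be freed; the TEncoding.UTF8 singleton must not.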
-
That is part of what I mentioned in "Failing: [Archive] Internal error - RAD Studio Code Examples (Alexandria: CodeExamples)". For now it looks like en/de/fr/ja work for RADStudio and Libraries, but not for Code Examples.
-
There is some progress: response times have become faster (no more waiting 10 seconds for the error 500 response) and some bits of the Alexandria wiki are working.
Working:
- [Wayback/Archive] RAD Studio (Alexandria: RADStudio)
- [Wayback/Archive] RAD Studio API Documentation (Alexandria: Libraries)
Failing:
- [Archive] Internal error - RAD Studio Code Examples (Alexandria: CodeExamples)
But hey: getting 2 out of some 42 wikis for the English language working in some 38 hours is still slow (in the meantime I figured out there are 3 wikis for each Delphi version; the actual number is times 4, as there is English, French, German and Japanese).
From [Wayback/Archive] Docwiki https - Embarcadero Monitoring, Response Time Last 2 days:
- Avg. response time: 4952.09ms
- Max. response time: 11117.00ms
- Min. response time: 214.00ms
-
That chance is slim, because the DNS records for docwiki.embarcadero.com (via NsLookup.io) are:
- A record (IPv4 address): 204.61.221.12 (QUASAR DATA CENTER, LTD.)
- Location: Houston, Texas, United States of America
- AS: AS46785
- AS name: QUASAR DATA CENTER, LTD.
The blog post I wrote thanks to loads of input in this thread: "The Delphi documentation site docwiki.embarcadero.com has been down/up oscillating for 4 days is now down for almost a day" « The Wiert Corner – irregular stream of stuff
-
Whoa, that at first very much confused me into thinking there was a data integrity error, but then, after realising that a Google search for lib_sydney_en_l10n_cache didn't return results, I reproduced it with a different one, taking some 10 seconds for the request to even get displayed:
Error: 1146 Table 'wikidb.rad_xe8_en_l10n_cache' doesn't exist (10.50.1.120)
Besides the very long response time (how slow can a database lookup be?), look at the table names:
lib_sydney_en_l10n_cache
rad_xe8_en_l10n_cache
They have two different prefixes (lib and rad) and two different product codes (sydney and xe8). This looks like a setup with each product version having at least two different MediaWiki databases, each having an l10n_cache table (and likely copies being made for each new product version, which I can understand from a versioning perspective), all integrated in one documentation site.
Searching for [Wayback/Archive] l10n_cache - Google Search resulted in [Wayback/Archive] Manual:l10n_cache table - MediaWiki (which describes the table for all ranges of MediaWiki versions) and a whole load of pages with various circumstances in which people bump into this table missing.
Then I looked at the status monitor [Wayback/Archive] Docwiki https - Embarcadero Monitoring - docwiki https, where it looks like someone started working on it almost 15 hours ago:
Response Time Last 2 days:
- Avg. response time: 5012.65ms
- Max. response time: 10372.00ms
- Min. response time: 214.00ms
Recent events:
- Down for 14 h, 40 min. The reason is Internal Server Error. 500 Details: The server encountered an unexpected condition that prevented it from fulfilling the request. March 7, 2022, 17:57 GMT +00:00
- Running again. March 7, 2022, 17:46 GMT +00:00
I really really hope they know what they are doing, as right now the databases don't look well and things have not improved for more than 15 hours (I was interrupted while writing this reply). --jeroen
-
This is in part why Meik Tranel offered to host an export of the database. Another part is that it is driving people nuts that the only reasonable search index points to (sometimes cached) pages, but one cannot access them.
Hopefully:
- not all Embarcadero docwiki database servers have issues
- there is a recent back-up that can be restored if they do
Fingers crossed....
-
That last sentence might not be completely true. Based on the MediaWiki 1.31.1 source code, I drafted a blog post yesterday. It is not published yet, as I ran into some WayBackMachine and Archive Today trouble: they are both slow, and the Archive Today redirect mentioned at https://blog.archive.today/post/677924517649252352/why-has-the-url-archive-li-changed-to coincided with HTTP-302 redirect loops on my side. My conclusion in the blog post is this: some of the requests succeed, so there seem to be three possibilities:
- Sometimes the load balancer cannot get a database connection at all
- Sometimes the load balancer gets a valid connection and that connection then fails returning a query
- Sometimes the load balancer gets a valid connection and that connection succeeds returning a query
That their largest and most important site is still failing and there is no communication from Embarcadero, either on social media or on 3rd party forums (they do not have their own forums any more), is inexcusable. So it might be that it is not a single point of failure, and even "underdimensioned" might not cut it.
Then to your next excellent question: I have been wondering about this since like forever. Even in the Borland days, uptime was often based on fragility, and this has not improved much. Having no status page is also really outdated. Look at some TIOBE index languages surrounding Delphi:
- https://status.rubygems.org/
- https://cran.r-project.org/mirmon_report.html
- https://status.mathworks.com/
-
This is true: some of the time it is up. Over the last 24 hours downtime is "more" than 55% (not the kind of SLA I would be satisfied with, but hey: I'm not Embarcadero IT); see the saved status at https://archive.ph/2HXRI. From the history on that page (or by browsing the live status at https://stats.uptimerobot.com/3yP3quwNW/780058619), you can see that both uptime and downtime periods vary widely, between roughly 5 and 90 minutes. To me that sounds like intermittently failing or underdimensioned hardware. Hovering over the red bars, you see that uptime during the weekend (especially on Sunday) is better than during weekdays. Speculating further, this could have to do with access rates being lower on those days. The non-public status on the UptimeRobot maintenance page shows a graph with better resolution than the public status page. Response times are about twice those of my blog on WordPress.com, see https://stats.uptimerobot.com/AN3Y5ClVn/778616355. I have also modified my docwiki http check to use a deeper page (instead of the home page). That monitor is at https://stats.uptimerobot.com/3yP3quwNW/780058617. Let's see what the monthly average will be some 30 days from now. --jeroen
-
It is indeed confusing. In 30 days it should all be "clear". When you look at the event history of that monitoring page (I saved it at https://archive.ph/RNeuP), you see that it is oscillating between down and up. Since the free monitoring only checks every 5 minutes, the oscillation appears to have a very regular interval, which might not actually be that regular. Being up part of the time will also likely influence these numbers. I also searched for and found where I originally created the monitoring pages at https://stats.uptimerobot.com/3yP3quwNW (there are close to 50 of them, but I did not keep track of which sites went permanently down, as there is no clear documented Embarcadero-provided list of them). The original article is at https://wiert.me/2022/01/19/some-uptime-monitoring-tools-that-are-still-free-and-understand-more-than-http-https which I started writing on 20210528. The monitors themselves are way older: I tracked them back in my email archive to February 2018, so slightly more than 4 years ago.
-
As the free plan is limited to 50 entries, I redirected an existing monitor to the new URL (), then turned it on. My guess is that even while the old URL was turned off, it was counted as "up", which is reflected in these numbers. The numbers are already going down; while writing this, it has already dropped from the ~53% you noticed. --jeroen
-
The MySQL uptime and the connection to it are still responsibilities of the IT department. Anyway: I made UptimeRobot watch a deeper link, which is now available at https://stats.uptimerobot.com/3yP3quwNW/780058619 for anyone to keep an eye on. Luckily, https://web.archive.org and https://archive.is have a lot of pages archived.
-
It's flaky. Again. I archived https://archive.ph/TIWRy from https://docwiki.embarcadero.com/RADStudio/Alexandria/en/Main_Page:
[8c1240dd3d6ee2e766657d26] /RADStudio/Alexandria/en/Main_Page Wikimedia\Rdbms\DBQueryError from line 1457 of /var/www/html/shared/BaseWiki31/includes/libs/rdbms/database/Database.php: A connection error occured.
Query: SELECT lc_value FROM `rad_alexandria_en_l10n_cache` WHERE lc_lang = 'en' AND lc_key = 'preload' LIMIT 1
Function: LCStoreDB::get
Error: 2006 MySQL server has gone away (etnadocwikidb01)
Backtrace:
#0 /var/www/html/shared/BaseWiki31/includes/libs/rdbms/database/Database.php(1427): Wikimedia\Rdbms\Database->makeQueryException(string, integer, string, string)
#1 /var/www/html/shared/BaseWiki31/includes/libs/rdbms/database/Database.php(1200): Wikimedia\Rdbms\Database->reportQueryError(string, integer, string, string, boolean)
#2 /var/www/html/shared/BaseWiki31/includes/libs/rdbms/database/Database.php(1653): Wikimedia\Rdbms\Database->query(string, string)
#3 /var/www/html/shared/BaseWiki31/includes/libs/rdbms/database/Database.php(1479): Wikimedia\Rdbms\Database->select(string, string, array, string, array, array)
#4 /var/www/html/shared/BaseWiki31/includes/cache/localisation/LCStoreDB.php(52): Wikimedia\Rdbms\Database->selectField(string, string, array, string)
#5 /var/www/html/shared/BaseWiki31/includes/cache/localisation/LocalisationCache.php(357): LCStoreDB->get(string, string)
#6 /var/www/html/shared/BaseWiki31/includes/cache/localisation/LocalisationCache.php(271): LocalisationCache->loadItem(string, string)
#7 /var/www/html/shared/BaseWiki31/includes/cache/localisation/LocalisationCache.php(471): LocalisationCache->getItem(string, string)
#8 /var/www/html/shared/BaseWiki31/includes/cache/localisation/LocalisationCache.php(334): LocalisationCache->initLanguage(string)
#9 /var/www/html/shared/BaseWiki31/includes/cache/localisation/LocalisationCache.php(371): LocalisationCache->loadItem(string, string)
#10 /var/www/html/shared/BaseWiki31/includes/cache/localisation/LocalisationCache.php(292): LocalisationCache->loadSubitem(string, string, string)
#11 /var/www/html/shared/BaseWiki31/languages/Language.php(3177): LocalisationCache->getSubitem(string, string, string)
#12 /var/www/html/shared/BaseWiki31/includes/MagicWord.php(352): Language->getMagic(MagicWord)
#13 /var/www/html/shared/BaseWiki31/includes/MagicWord.php(280): MagicWord->load(string)
#14 /var/www/html/shared/BaseWiki31/includes/parser/Parser.php(4848): MagicWord::get(string)
#15 /var/www/html/shared/BaseWiki31/extensions/TreeAndMenu/TreeAndMenu_body.php(24): Parser->setFunctionHook(string, array)
#16 /var/www/html/shared/BaseWiki31/includes/Setup.php(948): TreeAndMenu->setup()
#17 /var/www/html/shared/BaseWiki31/includes/WebStart.php(88): require_once(string)
#18 /var/www/html/shared/BaseWiki31/index.php(39): require(string)
#19 {main}
The main page gives an HTTP 404, but also loads https://docwiki.embarcadero.com/RADStudio/Alexandria/e/load.php?debug=false&lang=en&modules=ext.fancytree%2Csuckerfish|jquery.accessKeyLabel%2CcheckboxShiftClick%2Cclient%2Ccookie%2CgetAttrs%2ChighlightText%2Cmw-jump%2Csuggestions%2CtabIndex%2Cthrottle-debounce|mediawiki.RegExp%2Capi%2Cnotify%2CsearchSuggest%2Cstorage%2Cuser|mediawiki.api.user|mediawiki.page.ready%2Cstartup|site|skins.duobook2.js&skin=duobook2&version=149306z which delivers a nice 500.
A long time ago I made https://stats.uptimerobot.com/3yP3quwNW for monitoring http(s) status in the hope they would be watching it.
That one monitors the home page, but maybe I should have it monitor a sub-page like the main page for XE8 or so, as this one also gives a nice 500 error: https://docwiki.embarcadero.com/RADStudio/XE8/en/Main_Page
On the other hand: from a bigger organisation like Idera, one would expect more IT infrastructure competence (hello 24x7 monitoring!) than back in the days when just the core CodeGear DevRel team was keeping the documentation sites up and running (and, by using a Delphi web front-end plus an InterBase database back-end, provided valuable quality feedback to the R&D team).
From `BaseWiki31` I guessed they might still be on MediaWiki 1.31, which was an LTS version but is unsupported now and has been replaced by 1.35 LTS. My guess was right, as on https://docwiki.embarcadero.com/ (archived as https://web.archive.org/web/20220217084835/http://docwiki.embarcadero.com/) you see this:
<meta name="generator" content="MediaWiki 1.31.1">
MediaWiki versions are at:
- https://www.mediawiki.org/wiki/MediaWiki_1.31
- https://www.mediawiki.org/wiki/MediaWiki_1.35
- https://www.mediawiki.org/wiki/Version_lifecycle
Back in the day they were keen on advocating life-cycle management. Maybe it is time to show they indeed still understand what that means. --jeroen
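As an aside, for anyone who wants to repeat that generator-tag check from Delphi itself rather than by viewing the page source, a minimal console sketch could look like the following. It is only an illustration of mine, assuming the home page is reachable at that moment and the skin still emits the generator meta tag; the program and variable names are hypothetical.

program CheckDocwikiGenerator;

{$APPTYPE CONSOLE}

uses
  System.SysUtils,
  System.Net.HttpClient,
  System.RegularExpressions;

var
  Client: THTTPClient;
  Html: string;
  Match: TMatch;
begin
  Client := THTTPClient.Create;
  try
    // Fetch the docwiki home page and look for the MediaWiki generator meta tag.
    Html := Client.Get('https://docwiki.embarcadero.com/').ContentAsString();
    Match := TRegEx.Match(Html, '<meta name="generator" content="([^"]+)"');
    if Match.Success then
      Writeln('Generator: ', Match.Groups[1].Value) // e.g. "MediaWiki 1.31.1"
    else
      Writeln('No generator meta tag found');
  finally
    Client.Free;
  end;
end.

The same THTTPClient call also exposes the HTTP status code, so a variation of this could double as a quick check of the deeper pages mentioned above.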