Leaderboard


Popular Content

Showing content with the highest reputation on 07/09/25 in Posts

  1. Not that it solves your RESTDebugger problem, but there is also a nice tool, Bruno, which might help to dig deeper. https://www.usebruno.com/
  2. Anders Melander

    TeeBI free full sources released

    Needs more arrows!
  3. Angus Robertson

    Blocking hackers

    Not one country; currently 619,000 IPs worldwide, spread evenly around the world. I've specifically blocked 107 countries, but not Europe yet. Possibly from a massive botnet of cheap Chinese hardware that comes compromised from the factory at very low prices: cheap IPTV boxes and dongles, Android tablets, etc., acting as HTTPS proxies for whoever controls the botnet. Angus
  4. david berneda

    TeeBI free full sources released

    Showcasing the TeeBI Query editor, drag and drop fields to create pivot tables of multiple dimensions, custom functions and features not available with SQL (like histograms, by code expressions, or using your functions and events to control the query "where/having" filters etc). Free and opensource: https://github.com/Steema/TeeBI
  5. Kas Ob.

    Bitmaps to Video for Mediafoundation

    You lost me here. What 10m, and what's gps? Honestly, I lost myself reading that. fps, not gps (stupid autocorrect and clumsy fingers), and 10m = 10,000,000 vs 1m = 1,000,000, as the denominator for the rate at setup.
  6. Kas Ob.

    Bitmaps to Video for Mediafoundation

    In 10 tests I did, it is synced: the difference at the beginning is 4 ms, in the middle 4 ms, and at the end still 4 ms. That is very accurate, considering that the acceptable desyncing between audio and video is constrained and small: https://video.stackexchange.com/questions/25064/by-how-much-can-video-and-audio-be-out-of-sync

    What is still perplexing me:
    1) Why are the frames grouped? I added something like "OutputDebugString(PChar('Audio: '+IntToStr(AudioSampleDuration)));" before SafeRelease, same for video, and the debug output clearly shows the frames interleaved one by one. Beautiful interleaving, yet the resulting video frames are grouped. So it might be something to do with WMF and its codec, or some setting is missing somewhere; in other words, your code is doing it right.
    2) The duration is at 1000, and I am not talking about the timestamp but the ratio of the numerator to the video frames, which is 1000. I tried to tweak things and it didn't change; I even used the recommended 10m instead of the 1m you are using, and it still didn't change. So this also might be, like the above, a setting, or a constrained bit/frame/packet limitation specific to this very codec. One test video is 60 fps with 200 duration, and the output is 1000 at 30 fps, while it should be 400.

    Yes, in some way: if there is a gap, then the audio is distorted and the final video is bad or low quality. So yes, decoding the audio from some exotic audio format into PCM, then using a more standard audio codec from WMF, will be the best thing to keep the quality.

    Anyway, here is a nice answer on SO leading to a very beautiful SDK; you might find it very useful: https://stackoverflow.com/questions/41326231/network-media-sink-in-microsoft-media-foundation https://www.codeproject.com/Articles/1017223/CaptureManager-SDK-Capturing-Recording-and-Streami#twentythirddemoprogram

    Now, why do I keep looking at this drifting in audio and video, you might ask? The answer: a long time ago I wanted to know how media players could read huge chunks of data from a slow HDD, decode them, then render them. Everything is irrelevant here except one behavior you can watch: players like WMP and VLC do a strange thing. They read the header of the video, load huge buffers from the beginning, then seek to the end of the file and again load a huge chunk. From the end they check how much the streams have drifted, and only after that do they play. Those players have seen it all, so they do resyncing tricks of their own: when the video/audio streams are desynced, and it is possible, they adjust and cover it (fix it).

    Why is this relevant here, if all modern players in use do this and fix things? Because it will fail when you stream that video: there is no way to seek to the end, so the player will play what it gets, be it WebRTC, RTMP, RTSP... Think of a video conference, a webcam, or even security cams received by a server that encodes and saves the videos while allowing the user to monitor one or more cams online. Audio and video syncing is important here, and the players' tricks will not help.

    Anyway, nice, and thank you, you did a really nice job.
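    To clarify the 10m vs 1m point above: Media Foundation expresses sample timestamps and durations in 100-nanosecond units, so one second is 10,000,000 ticks ("10m"). A minimal sketch of that arithmetic, in Python for illustration only (the project code is Delphi, and these helper names are made up):

    ```python
    # Media Foundation timestamps/durations are in 100-ns units ("hns"),
    # so one second is 10_000_000 ticks. Using 1_000_000 as the scale
    # would make every duration ten times too short.
    HNS_PER_SECOND = 10_000_000

    def frame_duration_hns(fps: float) -> int:
        """Duration of one video frame in 100-ns units."""
        return round(HNS_PER_SECOND / fps)

    def frame_timestamp_hns(frame_index: int, fps: float) -> int:
        """Presentation time of frame n, computed from the index
        rather than by summing durations, to avoid accumulating
        rounding error frame by frame."""
        return round(frame_index * HNS_PER_SECOND / fps)

    print(frame_duration_hns(30))       # ticks per frame at 30 fps
    print(frame_timestamp_hns(60, 30))  # start time of frame 60
    ```

    At 30 fps one frame is 333,333 ticks (about 33.3 ms), which is why an off-by-ten in the scale is so visible in the resulting durations.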
  7. Lars Fosdal

    RESTDebugger fails where Postman succeeds

    I've used https://www.charlesproxy.com/ with success for seeing exactly what is sent/received in a REST API. You can also inject it into an HTTPS connection, as it acts as a proxy.
  8. Dave Nottage

    RESTDebugger fails where Postman succeeds

    I had been working on a tool of my own (Slumber), but this looks like it will do pretty much everything I envisioned. Thanks!
  9. Renate Schaaf

    Bitmaps to Video for Mediafoundation

    I think I solved the audio-syncing ... kind of.

    First observation: audio and video are perfectly synced if the audio comes from a .wav-file. You can check this using the optimal frame-rates 46.875 or 31.25. So for optimal syncing, compressed audio should be converted to .wav first. I have added a routine in uTransformer.pas which does this. In the demo there are some checkboxes to try this out.

    Second observation: for compressed input, the phase-shift in the audio happens exactly at the boundaries of the IMFSamples read in. So this is what I think happens: the encoder doesn't like the buffer-size of these samples and throws away some bytes at the end. This causes a gap in the audio-stream and a phase-shift in the timing. I have a notorious video where you can actually hear these gaps after re-encoding. If I transform the audio to .wav first, the gaps are gone.

    One could try to safekeep the thrown-away bytes and pad them to the beginning of the next sample, fixing up the time-stamps... Is that what you were suggesting, @Kas Ob.? Well, I don't think I could do it anyway :). So right now, first transforming the audio-input to .wav is the best I can come up with. For what I use this for it's fine, because I mix all the audio into one big .wav before encoding. Renate
  10. Nice demo. 👍 Perhaps you forgot to mention that it's available on GetIt https://getitnow.embarcadero.com/delphi-yolo-onnx-runtime-wrapper/ and on GitHub https://github.com/SoftacomCompany/Delphi-YOLO-ONNX-RuntimeWrapper and Medium https://medium.com/@softacom.com/object-detection-in-delphi-with-onnx-runtime-2b1e28b0e1b3
  11. mjustin

    Profiling Tools for Delphi Apps?

    https://www.delphitools.info/samplingprofiler/ "SamplingProfiler is a performance profiling tool for Delphi, from version 5 up to both 32bits & 64bit Delphi 12.x (and likely the next ones). Its purpose is to help locate bottlenecks, even in final, optimized code running at full-speed." ... "With version 1.7+, SamplingProfiler includes a small http web server which can be used for real-time monitoring of the profiled application. The monitor provides code hot-spot information in real-time, in HTML or XML form."
  12. Carlos Tré

    Tools API - Changing Key Assignment

    I wrote it myself a few years ago, based on an article written by Cary Jensen that you can find here. Attached is the code for the expert in its current state; changing the Format Source key assignment is still a work in progress. Editor.zip