

Renate Schaaf
Everything posted by Renate Schaaf
-
Thanks very much for the input! I hadn't looked at those filters more closely before; they should be easy to test out. Here is a first result for radius 10. It only passed the Gauss-RMS test after I changed the sigma-to-radius ratio to the same as yours; I need to give that a closer look. For other radii my routine failed some of the uniform-color tests (and edge detection as a consequence), so it's back to the drawing board for that.
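For anyone following along, this is roughly the kind of comparison an RMS test makes: measure the root-mean-square pixel difference between two blur results. A minimal sketch only, assuming pf32bit bitmaps; it is not the actual GR32 test code.

uses
  VCL.Graphics;

// Root-mean-square difference of two pf32bit bitmaps of equal size,
// taken over the B, G and R channels (alpha ignored).
function RMSDifference(const A, B: TBitmap): Double;
var
  x, y, c: Integer;
  pA, pB: PByte;
  Sum: Double;
begin
  Assert((A.Width = B.Width) and (A.Height = B.Height));
  Assert((A.PixelFormat = pf32bit) and (B.PixelFormat = pf32bit));
  Sum := 0;
  for y := 0 to A.Height - 1 do
  begin
    pA := A.ScanLine[y];
    pB := B.ScanLine[y];
    for x := 0 to A.Width - 1 do
      for c := 0 to 2 do // B, G, R
        Sum := Sum + Sqr(pA[4 * x + c] - pB[4 * x + c]);
  end;
  Result := Sqrt(Sum / (3.0 * A.Width * A.Height));
end;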
-
I managed to get it, source or not. For the same amount of "blurriness" my parallel version needs about 1.5 times the time of yours. Source would still be nice, I'm sure we'd learn something. Renate
-
I already did. I meanwhile learned how to do pull requests 🙂 Looking forward to the mailman! Renate
-
Bitmaps2Video for Windows Media Foundation
Renate Schaaf replied to Renate Schaaf's topic in I made this
It won't work unless the images are all the same size. Changing the size would require at least a partial re-initialization. -
I was fooled by the names in the example project. I just had a closer look and, yes, it's a box blur. As far as I can see, GaussianBlur isn't Gaussian either; it just uses weights of 1/square_distance_from_center. They do handle the alpha channel, though. I would have liked to see the alpha-blended result, but I couldn't get the ImageView to show it. Seeing as my (unthreaded) routine is truly Gaussian and outperforms GR32.GaussianBlur by a factor of roughly 2-3, would it make sense for me to suggest it as a replacement in GR32? How can I better test the quality? And where can I find your routine? Quality usually takes precedence over performance for me, unless the performance annoys me; well, it's also fun to get something faster 🙂
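To make the distinction concrete, here is a sketch of the three weight profiles discussed above, as I understand them (illustrative formulas only, normalization omitted; this is not GR32 code): a box blur weights every pixel inside the window equally, the 1/d^2 variant falls off with the squared distance, and a true Gaussian falls off as exp(-d^2/(2*sigma^2)).

uses
  System.Math;

// Weight of a pixel at distance d from the center pixel, for the three
// profiles compared above.
function BoxWeight(d, Radius: Double): Double;
begin
  if Abs(d) <= Radius then
    Result := 1
  else
    Result := 0;
end;

function InverseSquareWeight(d: Double): Double;
begin
  Result := 1 / Max(1.0, Sqr(d)); // clamp at the center to avoid division by zero
end;

function GaussWeight(d, Sigma: Double): Double;
begin
  Result := Exp(-Sqr(d) / (2 * Sqr(Sigma)));
end;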
-
OK, I plugged my unsharp-mask into the Blurs example of GR32. Doing so made me aware of the need to do gamma-correction when you mix colors. So I implemented that, but see below. Also, I finally included options to properly handle the alpha-channel for the sharpen/blur. The repo at GitHub has been updated with these changes.

Results:

Quality: My results seem a tad brighter; otherwise I could see no difference between Gaussian and Unsharp.

Performance: Unthreaded routine: for radii up to 8, Unsharp is on par with FastGaussian; after that FastGaussian is the clear winner. Threaded routine: always fastest.

If anybody is interested, I am attaching the test project. It of course requires GR32 to be installed, and it also requires Delphi 10.3 or higher, I guess.

Gamma-correction: I did it via an 8-bit table, same as GR32 (a rough sketch of what that amounts to follows below). This seems very imprecise to me, but I wouldn't know how to make it more precise other than operating with floats, no thanks. Sadly, this can produce visible banding in some images, no matter which blur is used. Here is an example (for uploading, all images have been compressed, but the effect is about the same):

Original: a cutout from a picture taken with my digital camera.

Result of Gaussian with Radius = 40 and Gamma = 1.6.

When gamma-correction is used for sharpening, bright edge-artifacts are reduced, but dark edge-artifacts are enhanced. My conclusion right now would be not to use gamma-correction. But if anybody has an idea for how to implement it better, I'm all ears.

Thanks, Renate

BlurTest.zip
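For illustration, here is roughly what the 8-bit table approach amounts to; a minimal sketch under my own naming, not the code in the repo or in GR32. The banding comes from quantizing twice: a channel value is pushed through the gamma table, mixed, and pushed back through the inverse table, and neighboring 8-bit levels can collapse onto the same output level.

uses
  System.Math;

var
  GammaTable, InvGammaTable: array [0 .. 255] of Byte;

// Build 8-bit lookup tables for gamma and its inverse. Since both directions
// are rounded to 8 bits, the round trip is lossy, which is one source of the
// banding described above.
procedure InitGammaTables(Gamma: Double);
var
  i: Integer;
begin
  for i := 0 to 255 do
  begin
    GammaTable[i] := Round(255 * Power(i / 255, Gamma));
    InvGammaTable[i] := Round(255 * Power(i / 255, 1 / Gamma));
  end;
end;

// Mix two channel values with weight W (0..1) in gamma-corrected space.
function MixGamma(C1, C2: Byte; W: Double): Byte;
begin
  Result := InvGammaTable[EnsureRange(Round(W * GammaTable[C1] + (1 - W) * GammaTable[C2]), 0, 255)];
end;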
-
Bitmaps2Video for Windows Media Foundation
Renate Schaaf replied to Renate Schaaf's topic in I made this
But you would have to tell the encoder that every frame is a key frame, otherwise it only stores the differences. It can be done, though ... Also, the access to the frames for decoding would need to be sped up. This way you could create a stream of custom-compressed images, but of course no other app would be able to use this format. -
Bitmaps2Video for Windows Media Foundation
Renate Schaaf replied to Renate Schaaf's topic in I made this
HEIF is a container format like .mp4; as far as I can see, Windows handles this file format via WICImage only. MFPack contains headers for this, but all of that goes a bit over my head. If you want to test HEVC compression, you can do it via BitmapsToVideoWMF by creating an .mp4 file with just one frame using the procedures below. This is anything but fast, because the initialization/finalization of Media Foundation takes a long time. A quick test compresses a 2.5 MB .jpg taken with my digital camera to an .mp4 of 430 KB, with no quality loss visible at first glance.

uses
  System.SysUtils, // for ExtractFileExt
  VCL.Graphics, uTools, uTransformer, uBitmaps2VideoWMF;

procedure EncodeImageToHEVC(const InputFilename, OutputFileName: string);
var
  wic: TWicImage;
  bm: TBitmap;
  bme: TBitmapEncoderWMF;
begin
  Assert(ExtractFileExt(OutputFileName) = '.mp4');
  wic := TWicImage.Create;
  try
    bm := TBitmap.Create;
    try
      wic.LoadFromFile(InputFilename);
      WicToBmp(wic, bm);
      bme := TBitmapEncoderWMF.Create;
      try
        // Make an .mp4 with one frame.
        // Framerate 1/50 would display it for 50 sec.
        bme.Initialize(OutputFileName, bm.Width, bm.Height, 100, 1 / 50, ciH265);
        bme.AddFrame(bm, false);
      finally
        bme.Free;
      end;
    finally
      bm.Free;
    end;
  finally
    wic.Free;
  end;
end;

procedure DecodeHEVCToBmp(const mp4File: string; const Bmp: TBitmap);
var
  vi: TVideoInfo;
begin
  vi := uTransformer.GetVideoInfo(mp4File);
  GetFrameBitmap(mp4File, Bmp, vi.VideoHeight, 1);
end;
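A call could then look like this (file names are placeholders):

var
  bm: TBitmap;
begin
  EncodeImageToHEVC('C:\Temp\Photo.jpg', 'C:\Temp\Photo.mp4');
  bm := TBitmap.Create;
  try
    DecodeHEVCToBmp('C:\Temp\Photo.mp4', bm);
    bm.SaveToFile('C:\Temp\RoundTrip.bmp');
  finally
    bm.Free;
  end;
end;
-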
Bitmaps2Video for Windows Media Foundation
Renate Schaaf replied to Renate Schaaf's topic in I made this
This is why that makes no sense (quoted from https://cloudinary.com/guides/video-formats/h-264-video-encoding-how-it-works-benefits-and-9-best-practices) "H.264 uses inter-frame compression, which compares information between multiple frames to find similarities, reducing the amount of data needed to be stored or transmitted. Predictive coding uses information from previous frames to predict the content of future frames, further reducing the amount of data required. These and other advanced techniques enable H.264 to deliver high-quality video at low bit rates. " -
Bitmaps2Video for Windows Media Foundation
Renate Schaaf replied to Renate Schaaf's topic in I made this
Do you mean to Jpeg or Png? A video format would not make much sense for a single image. NVidia chips can apparently do that, but I don't know of any Windows API which supports it. If you want to encode to Png or Jpeg a bit faster than with TPngImage or TJpegImage, use a TWicImage. Look at this nice post on Stack Overflow for a way to set the Jpeg compression quality using TWicImage: https://stackoverflow.com/questions/42225924/twicimage-how-to-set-jpeg-compression-quality -
Bitmaps2Video for Windows Media Foundation
Renate Schaaf replied to Renate Schaaf's topic in I made this
I'll see what I can do. Stream might be tricky, I would need to learn more about this. -
New version at https://github.com/rmesch/Parallel-Bitmap-Resampler: it has more efficient code for the unsharp-mask, and I added more comments in the code to explain what I'm doing. Procedures with explanatory comments: uScaleCommon.Gauss, uScaleCommon.MakeGaussContributors and uScaleCommon.ProcessRowUnsharp; also see type TUnsharpParameters in uScale.pas. Would it be a good idea to overload the UnsharpMask procedure to take sigma instead of radius as a parameter? It might make comparison to other implementations easier.
-
Hi Anders, it's great that you are thinking of it, but hold off on that for a bit. I noticed that I compute the weights in a horrendously stupid way. The weights are mostly identical; it's not like when you resample, dumb me. Taking care of that reduces memory usage by a lot, and the subsequent application of the weights becomes much faster. I've also changed the sigma-to-radius ratio a bit according to your suggestion. I find it hard to make results look nice with the cutoff at half the max value, so I changed it to 10^-2 times the max value. This still allows for smaller radii, and it becomes a bit faster again. So, before you do anything, I would like to finish these changes and also comment the code a bit more. (Forces me to really understand what I'm doing 🙂)
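The point about the weights, as a minimal sketch under my own naming (not the code in the repo): for a blur, the Gaussian weight depends only on the distance from the center pixel, so a single 1-D table can be computed once and reused for every row and column, unlike resampling, where each target pixel needs its own contributor list.

uses
  System.SysUtils;

// Precompute one normalized 1-D Gaussian weight table with a cutoff at
// 10^-2 times the maximum weight, as described above.
function MakeGaussWeights(Radius: Integer; Sigma: Double): TArray<Double>;
var
  x: Integer;
  w, Sum: Double;
begin
  SetLength(Result, 2 * Radius + 1);
  Sum := 0;
  for x := -Radius to Radius do
  begin
    w := Exp(-Sqr(x) / (2 * Sqr(Sigma)));
    if w < 0.01 then // cutoff at 10^-2 of the central weight (which is 1)
      w := 0;
    Result[x + Radius] := w;
    Sum := Sum + w;
  end;
  // Normalize so the weights sum to 1.
  for x := 0 to 2 * Radius do
    Result[x] := Result[x] / Sum;
end;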
-
I took sigma = 0.2*Radius, but it's easy to change that to something more common. I just took a value for which the integral of the truncated kernel is very close to 1. With respect to other implementations, I'm ready to learn; I just implemented it as accurately as I could without being overly slow. Performance is quite satisfying to me, but I bet with your input it'll get faster 🙂
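In other words (my own back-of-the-envelope check, not something from the repo): with sigma = 0.2*Radius the kernel is cut off at +-Radius = +-5*sigma, and the truncated integral is

\int_{-5\sigma}^{5\sigma} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-x^{2}/(2\sigma^{2})}\,dx \;=\; \operatorname{erf}\!\left(\frac{5}{\sqrt{2}}\right) \;\approx\; 0.9999994

so essentially none of the Gaussian's mass is lost to the cutoff.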
-
I just uploaded a new version to https://github.com/rmesch/Parallel-Bitmap-Resampler. Newest addition: a parallel unsharp-mask using Gaussian blur, which can be used to sharpen or blur images. A dedicated VCL demo "Sharpen.dproj" is included; for FMX the effect can be seen in the thumbnail-viewer demo (ThreadsInThreadsFMX.dproj). This is for the "modern" version, 10.4 and up. I haven't ported the unsharp-mask to the legacy version (Delphi 2006 and up) yet, as it requires more work, but I plan on doing so. Renate
-
I did a quick test with the demo of the fmx-version of my resampler, just doing "Enable Skia" on the project. In the demo I compare my results to TCanvas.DrawBitmap with HighSpeed set to false. I see that the Skia canvas is being used, and that HighSpeed = False makes Skia resample with

  SkSamplingOptionsHigh: TSkSamplingOptions = (
    UseCubic: True;
    Cubic: (B: 1 / 3; C: 1 / 3);
    Filter: TSkFilterMode.Nearest;
    Mipmap: TSkMipmapMode.None);

So, some form of cubic resampling, if I see that right.

Result: timing is slightly slower than native fmx drawing, but still a lot faster than my parallel resampling. I see no improvement in quality over plain fmx, which supposedly uses bilinear resampling with this setting. Here are two results (how do you make this browser use the original pixel size? This is scaled!). This doesn't look very cubic to me. As a comparison, here are the results of my resampler using the bicubic filter. I might not have used Skia to its most favorable advantage. Renate
-
OK, I'll stop thinking about it. Time to get some sleep :)
-
Hi Anders, thanks for explaining. I had a feeling that the compression is too "global" for parallelizing. But ... from what I have read in the meantime, it seems that parts of the decompression could be done in parallel. This link is about compression, but couldn't it apply to decompression too? (Not that I know anything about it 🙂) https://stackoverflow.com/questions/61850421/how-to-perform-jpeg-encoding-of-a-big-rgb-image-in-parallel Anyway, there are research papers claiming a speedup from doing the decoding partly in parallel.
-
Sorry, no, but naively it sounds like a good idea. I have no idea how parallelizable jpeg-decoding is 🙂
-
Sure you can.
-
Bitmaps2Video for Windows Media Foundation
Renate Schaaf replied to Renate Schaaf's topic in I made this
The use of TVirtualImageList is totally inessential, just cosmetics. I have replaced it with the old TImageList now. Hope it works, and thanks! -
Bitmaps2Video for Windows Media Foundation
Renate Schaaf replied to Renate Schaaf's topic in I made this
I think I know what you mean, I'll play around with it some. The problem is that I really need a well-defined video frame rate, otherwise the user can't time the input. I have meanwhile figured out the audio-sample duration for a given sample rate and use it to decide how many audio samples to read ahead (see the sketch below). Audio sync is now acceptable to me. The problem now is that some decoders, like the one for .mpg, are sloppily implemented in Windows, and I'm not sure I even have control over that. Isn't the encoder just using everything that is in the leaky bucket to write the segments in whatever way it thinks is right? Why can't you compile the code? I think your feedback would be very valuable to me. Did you try the latest version on GitHub? https://github.com/rmesch/Bitmaps2Video-for-Media-Foundation (there's one silly bit of code still in there that I need to take out, but it usually doesn't hurt anything). I was hoping to have eliminated some code that prevented it from running on versions earlier than 11.3. Or have you meanwhile developed an aversion to the MF headers? 🙂
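The arithmetic behind that read-ahead, as a minimal sketch (my own helper, not the code in the repo): Media Foundation expresses durations in 100-nanosecond units, so a block of audio samples at a given sample rate lasts SampleCount / SampleRate seconds, or SampleCount * 10^7 / SampleRate in MF units.

// Duration of SampleCount audio frames at SampleRate frames per second,
// in the 100 ns units Media Foundation uses for time stamps and durations.
function AudioDuration100ns(SampleCount: Int64; SampleRate: Cardinal): Int64;
begin
  Result := Round(SampleCount * 10000000.0 / SampleRate);
end;
-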
Bitmaps2Video for Windows Media Foundation
Renate Schaaf replied to Renate Schaaf's topic in I made this
A first version of TBitmapEncoderWMF is now available at https://github.com/rmesch/Bitmaps2Video-for-Media-Foundation It only supports .mp4 and codecs H264 and HEVC(H265) at the moment. See the readme for more details. I had some problems getting the video-stream to use the correct timing. Apparently the encoder needs to have the video- and audio-samples fed to it in just the right way, otherwise it drops frames or changes timestamps at its own unfathomable judgement. I think I solved it by artificially slowing down the procedure that encodes the same frame repeatedly, and by reading ahead in the audio-file for "just the right amount". Would be interested in how it fares on other systems than mine. -
Please support Stack Overflow moderators strike against AI content policy
Renate Schaaf replied to Dalija Prasnikar's topic in Tips / Blogs / Tutorials / Videos
What makes you so negative about this? I certainly have my problems with participation on Stack Overflow. After having received some unfair downvotes, I've stopped asking questions, and I only answer if I'm dead sure of what I say; maybe a good thing. But Stack Overflow has been a valuable and mostly reliable source of information, and I would have to spend much more time finding good info if I couldn't rely on content being mostly accurate anymore, or if the moderators stopped doing their job. I upvote every answer that has been useful, and I downvote every blatantly wrong answer, just to keep the quality up. I would hate for ChatGPT to flood the answers. -
Bitmaps2Video for Windows Media Foundation
Renate Schaaf replied to Renate Schaaf's topic in I made this
Well, I must have written lots of sloppy code yesterday, but I can now encode audio to AAC together with video to H264 or H265. I'm using the CheckFail approach wherever possible. I wish I could keep all these attribute names in my head. I wonder whether it would be possible to collect them into records, so you could consult code completion about them.
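A rough sketch of what I mean, so code completion groups related attributes. The record and its members are made up for illustration; the GUID constants MF_MT_MAJOR_TYPE, MF_MT_FRAME_RATE and MF_MT_AVG_BITRATE exist in the MF headers, and the unit name below is the one MFPack uses, if I remember correctly.

uses
  WinApi.MediaFoundationApi.MfApi; // MFPack headers

type
  // Group related media-type attribute GUIDs so that typing
  // "TVideoTypeAttr." lists them in code completion.
  TVideoTypeAttr = record
    class function MajorType: TGUID; static;
    class function FrameRate: TGUID; static;
    class function AvgBitrate: TGUID; static;
  end;

class function TVideoTypeAttr.MajorType: TGUID;
begin
  Result := MF_MT_MAJOR_TYPE;
end;

class function TVideoTypeAttr.FrameRate: TGUID;
begin
  Result := MF_MT_FRAME_RATE;
end;

class function TVideoTypeAttr.AvgBitrate: TGUID;
begin
  Result := MF_MT_AVG_BITRATE;
end;

Something like aType.SetGUID(TVideoTypeAttr.MajorType, MFMediaType_Video) would then at least let the IDE do the remembering.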