Kas Ob.

Members
  • Content Count: 733
  • Joined
  • Last visited
  • Days Won: 9

Kas Ob. last won the day on January 10

Kas Ob. had the most liked content!

Community Reputation: 264 Excellent

1 Follower


  1. I missed that with FillChar; you are right there.
  2. Right, but I am talking about this case in general. See, it is very rare to design your own code where you need to pass a TArray (or any other defined indexed type) to a function taking a "pointer to array" plus an element count; most likely the destination belongs to another realm, or simply put, to code by a different coder (or a library). And on the caller's side, in cases other than the above example where the length is established locally, the length might be 0; there I prefer to get a nice overflow exception instead of unpredictable behaviour in case I missed an explicit check against zero length (a sketch follows below). It is just a personal point of view: in many cases I prefer the compiler checks, unless performance is the priority.
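     A minimal sketch of that zero-length case; SomeApi is a hypothetical callee, and the point is that with range checking on ($R+) the bad call should fail with ERangeError at the indexing, instead of handing SomeApi a bogus pointer:

       {$R+}                                        // range checking on
       var
         data: TArray<Integer>;
         len: Integer;
       begin
         len := 0;                                  // length established elsewhere happens to be 0
         SetLength(data, len);
         SomeApi(@data[0], len * SizeOf(Integer));  // ERangeError raised at data[0] under $R+
       end;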
  3. You are totally right there, but I sometimes prefer to trigger it explicitly (in development) to make sure, in a case similar to this one with an external API, that the called function is well equipped to handle such input; in other words, to make sure I am not leaving it unhandled and unchecked, just as a safety measure.
  4. It should be FillArray(data, 100 * SizeOf(Integer)); and you can use const or var with the same result in this case. Alternatively, as David pointed out, declare the function with PInteger and pass it as PInteger(@data[0]). Both forms are sketched below.
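     A minimal sketch of the two declarations; the bodies are assumptions, since the original FillArray isn't shown (var is used because the buffer is written to):

       procedure FillArrayUntyped(var Buf; ByteCount: Integer);
       begin
         FillChar(Buf, ByteCount, 0);      // untyped parameter
       end;

       procedure FillArrayTyped(P: PInteger; ByteCount: Integer);
       begin
         FillChar(P^, ByteCount, 0);       // typed pointer parameter
       end;

       var
         data: TArray<Integer>;
       begin
         SetLength(data, 100);
         // for a dynamic array pass data[0], not data, so the element storage
         // (and not the array's hidden pointer) is what gets filled:
         FillArrayUntyped(data[0], 100 * SizeOf(Integer));
         FillArrayTyped(PInteger(@data[0]), 100 * SizeOf(Integer));
       end;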
  5. Kas Ob.

    Problem with local resources in RDP Session

    In my opinion RDP was introduced as (how to put it) an extension to Windows Explorer; it wasn't intended to deliver stable resource sharing over the internet. RDP allowed initializing and sharing drives for the user himself, but not for an application, and by user I mean a user working through Windows Explorer. That aside, you are depending on a technology that can fail at any time, and you should rethink how you or your users depend on it. I think you can extend your application to upload and download files in the right manner, over HTTP or IIS, or through a web browser after authorization; there are many solutions to consider here that give you peace of mind about reliability.
  6. Kas Ob.

    Problem with local resources in RDP Session

    Both will work the same; using the Delphi RTL or the API directly will also be OK, with the same result (I think). No need; copy or move will do it. But my suggestion is to build the archive file (zip) in memory, without disk operations, then save it locally or not (see the sketch below); again, I am not familiar with your zip library and whether it allows such usage. Also, if the archive is very big, building it in memory will make the sending take a long time, but it is up to you to measure the size and tune it. My point is against using Seek in general with THandleStreams and files, as it encapsulates a hidden read, a full disk operation with the same delay and latency; on the other hand, using Seek on RDP resource shares might be buggy to begin with, and RDP has had its share of such bugs over the years.
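    A minimal sketch of the in-memory idea using the stock System.Zip unit (not ZipForge, which I am not familiar with); adapt it if your library can target a TStream:

      uses
        System.Classes, System.Zip;

      procedure BuildZipInMemory(const Files: TArray<string>; const DestPath: string);
      var
        Mem: TMemoryStream;
        Zip: TZipFile;
        F: string;
      begin
        Mem := TMemoryStream.Create;
        try
          Zip := TZipFile.Create;
          try
            Zip.Open(Mem, zmWrite);   // the archive is assembled entirely in RAM
            for F in Files do
              Zip.Add(F);             // no Seek against the RDP share
            Zip.Close;
          finally
            Zip.Free;
          end;
          Mem.SaveToFile(DestPath);   // one sequential write at the end
        finally
          Mem.Free;
        end;
      end;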
  7. Kas Ob.

    Problem with local resources in RDP Session

    That is not a heap or memory problem; the exception message is a red herring and wrong.

    To understand this we must first understand RDP resource sharing, which is in fact an emulation of local files. This emulation is done over the RDP protocol, hence you can see that even Windows Explorer blocks and delays directory updates when you paste files between a local drive and a directory on RDP. These operations should really only be executed as overlapped operations, as monitoring the file over the RDP protocol might take time; there is also the permission handling, which is different from network shares. Also, you are using THandleStream, which calls the file write (and read...) APIs directly, and these APIs are not very friendly to network operation. One in particular is SetSize: it in fact performs a Seek, and in the Delphi RTL it then uses SetEndOfFile, an API which on the Windows side performs a write (or a read!). Both of these operations are executed in an emulated manner over the RDP protocol on the wire, and while there is no timeout for such low-level APIs per se, the RDP commands do have their own timeout, hence the arbitrary failure; it will happen depending on network speed and busyness.

    So my thoughts on this:

    1) Don't use Seek on TFileStream (or THandleStream...) because it is slow; if the file is relatively small, use TMemoryStream and load or save it in full (see the sketch below).

    2) Watch and gather information on the SetSize calls, whether they go beyond a specific range (or sizes). To be honest I don't know how relevant this is, but I have my doubts about the low-level API calls: whether they are issued from the local or the remote side, and where the low-level emulation that triggers the timeout is executed.

    3) Don't trust the message "out of memory" when you are using RDP resource shares (this includes printing too), as it might falsely be reporting the RDP protocol buffer size, not anything else.

    4) I am not familiar with ZipForge, but the sequence of these calls and their naming makes me wonder whether this library is friendly to RDP resource shares, specifically in the case you listed above, I mean DeleteFiles -> EndUpdate -> ForceUpdate -> SetSize. This means some files have been deleted and the overall archive file needs to be truncated. Is it possible to write a new file instead of updating the existing one? Or just use overlapped operations and give the RDP protocol time to finish updating the file size. Also this is strange, as the SetSize implied by those operations should be smaller (we are deleting files), so the cause of the out-of-memory is not clear, unless the library is trying to bring the whole file over, adjust its size, and send the updated version back, which makes some sense (maybe).

    These were my thoughts; I hope they help you.
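    A minimal sketch of point 1; the \\tsclient path is just an example of an RDP-redirected drive:

      uses
        System.Classes;

      var
        Mem: TMemoryStream;
      begin
        Mem := TMemoryStream.Create;
        try
          Mem.LoadFromFile('\\tsclient\C\archive.zip');  // one sequential read over RDP, no Seek
          // ... modify the archive in memory here ...
          Mem.SaveToFile('\\tsclient\C\archive.zip');    // one sequential write back
        finally
          Mem.Free;
        end;
      end;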
  8. Kas Ob.

    Email Tampering

    I wouldn't depend on the date to filter emails, nor would I suggest it. When it happens, it means 99% that such an email is spam, but the 1% is real and important. Imagine you turned off your phone and unplugged the battery for some reason, then returned the battery and turned the mobile on; an email comes along and you reply before the mobile has got the correct time from the provider. There are other examples too, like running your own SMTP server hosted somewhere on a dedicated server under Hyper-V: these servers tend to stick to the host's timing, and no matter what or how you fix the guest time it will revert to the host timing, and the only possible fix is configuring the time zone. So in theory, if the host has the UTC time zone and you want your guest to have another time zone, you will see differences if those time zones have different summer/winter time rules.
  9. Kas Ob.

    MAP2PDB - Profiling with VTune

    You can fix this by finding the address (offset) from the PE header itself; don't depend on the section index number, but use the section/segment name (a sketch follows below). Also you can see why I said the start addresses (offsets) in the Delphi linker map file are wrong and should be relative to 0 instead of $400000.
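    A minimal sketch of looking a section up by name in the PE header of the running module; FindSectionRVA is a made-up helper name:

      uses
        Winapi.Windows;

      function FindSectionRVA(const Name: AnsiString): DWORD;
      var
        Dos: PImageDosHeader;
        Nt: PImageNtHeaders;
        Sec: PImageSectionHeader;
        SecName: array[0..8] of AnsiChar;
        i: Integer;
      begin
        Result := 0;
        Dos := PImageDosHeader(HInstance);
        Nt := PImageNtHeaders(PByte(Dos) + Dos^._lfanew);
        Sec := PImageSectionHeader(PByte(@Nt^.OptionalHeader) + Nt^.FileHeader.SizeOfOptionalHeader);
        for i := 0 to Nt^.FileHeader.NumberOfSections - 1 do
        begin
          FillChar(SecName, SizeOf(SecName), 0);
          Move(Sec^.Name, SecName, SizeOf(Sec^.Name));  // section names are not always null-terminated
          if AnsiString(PAnsiChar(@SecName)) = Name then
            Exit(Sec^.VirtualAddress);                  // an RVA, i.e. relative to 0, not to $400000
          Inc(Sec);
        end;
      end;

      // usage: FindSectionRVA('.text')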
  10. Great to hear it is working; you are doing it right now, but there are still a few things to reconsider.

    1) Angus's and Remy's comments are very valuable and you really should read all of the above and remember it. I forgot to point you to the fact that you will always be safer using the working demos in any library, as these demos/samples are written by the guys who know.

    2) You supplied code, working code "per your word", and are still afraid whether it is right, so I recommend you start experimenting on your own. Again, if you are using ICS, then test its demos and samples and try to understand how they are built; the only thing you will lose is your lack of confidence.

    3) I pointed out that you are better off with a 64k buffer for receiving, but this subject needs a lot of background, so I will explain a little more. Here I am assuming that you are familiar with UDP's characteristics; are you? Are you sure you need UDP and not TCP? (Google will help with these questions.) I will not go on long about this, as the information is all over the internet. What I want to point out is that if you are not sure how to receive a UDP buffer, then most likely you still don't know how to deal with lost packets; in other words, since you can't prevent loss, you need to manage it and recover from it. With UDP, packets will be lost, and how much depends on the data and your app (and many other factors outside your hands and control); if the packets are critical, they need to be resent, hence you need a confirmation mechanism, etc. The point is: read more; the internet has many resources on this.

    Now back to 64k. You are always better off with a receive buffer of 64k, always! But this is not the case for the send buffer! So for receiving with UDP I highly recommend sticking to 64k, while the send buffer might be, as Angus said, 4k. Why is this important?
    A) Smaller UDP packets have a lower rate of being lost or dropped on the wire.
    B) There are packets of all kinds floating around, and you might need to test different approaches. By maxing the receive buffer you force yourself to handle the data based on its content, not its length (relying on length is a very common mistake), and your app stays ready for testing and tuning with different parameters in the future, without having to rebuild and redeploy both client and server. That means you can tweak your system online, or even make it dynamic: the more packets get dropped, the smaller you let the packets become.

    ps: In my code I suggested, as best practice, keeping the buffer size as a global const (sketched below), and I don't understand the point of changing it to a local var with a fixed value; keeping such numbers as global constants is better for tweaking in the future.
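    A minimal sketch of the ps; the constant names are made up:

      const
        UDP_RECV_BUFFER_SIZE = 64 * 1024;  // always max the receive buffer: one datagram cannot exceed 64k
        UDP_SEND_BUFFER_SIZE = 4 * 1024;   // smaller datagrams have a lower drop rate on the wire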
  11. While waiting for someone with knowledge to answer your question, I want to point out a few things that will help you, not only in this case but in others too.

    Let's start with the IO operations. See, socket receiving and sending are Input/Output operations, hence they are out of the control of your software; they mostly depend on the OS and the hardware. So, as a rule of thumb, you should always check the result against your request. In the UDP case, and with sockets in general, you perform an operation with n bytes; all these methods depend on OS methods, and they will always return m bytes (m here being how much they managed to do), with m <= n always. If you issued a send with 765 bytes, then you should check whether 765 bytes were sent. This is also true for file reading and writing... Most IO operations are designed not to work as black and white, success or fail, but in many cases more like "ok, I managed to do 4 out of your 9".

    With that in mind, your "but always returns 0" together with your code raises the question of how you even get that 0, and even on a successful read you asked for 0 bytes! So your code is wrong in at least two places, and it should be something like this:

      SetLength(MBytes, SomeBufferSize);
      BytesReceived := udRecive.Receive(MBytes, Length(MBytes));
      // SetLength(MBytes, BytesReceived);  // we can perform this or not; that is up to you
      if BytesReceived > 0 then
      begin
        ...
      end
      else
      begin
        ...
      end;

    But how do we decide SomeBufferSize? We could depend on the IO operation to report some value, but in many cases that is less efficient than asking for the maximum we can handle and letting the IO operation fill what it can. So what is the best value for a UDP buffer? I Googled "maximum udp buffer size" and got confirmation that 64k is the maximum UDP packet size; the number is relatively small and manageable by any code, so why not make it our standard buffer? This code will in general be a better approach:

      const
        OUR_MAX_UDP_BUFFER = 64 * 1024;  // 64k, the maximum UDP packet size

      procedure TForm1.udReciveDataAvailable(Sender: TObject; ErrCode: Word);
      var
        MBytes: TBytes;
        BytesReceived: Integer;
      begin
        SetLength(MBytes, OUR_MAX_UDP_BUFFER);
        BytesReceived := udRecive.Receive(MBytes, Length(MBytes));
        if BytesReceived > 0 then
        begin
          ...
        end;
      end;

    Also, you can move that MBytes from a local var to a field on your Form1; then you only need to allocate it once, at the biggest size as above, and you don't need to trim it or free it (sketched below).
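    A minimal sketch of that last suggestion; FRecvBuf is a made-up field name:

      type
        TForm1 = class(TForm)
          // ...
        private
          FRecvBuf: TBytes;                        // allocated once, reused for every datagram
        end;

      procedure TForm1.FormCreate(Sender: TObject);
      begin
        SetLength(FRecvBuf, OUR_MAX_UDP_BUFFER);   // biggest size up front; never trimmed or freed
      end;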
  12. Kas Ob.

    Range Check Error ERangeError

    Before updating anything, try to reproduce the error first. It is way easier than you think to reproduce: all you need is to make sure the addresses are higher than MaxInt. So use FastMM4 from here https://github.com/pleriche/FastMM4 and note the option AlwaysAllocateTopDown here https://github.com/pleriche/FastMM4/blob/ca64b52ac6d918f4dbd06c20c28e8f961a7e450f/FastMM4Options.inc#L178; leave it on, and you will catch them all red-handed.
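    For reference, the relevant switch in FastMM4Options.inc looks like this (the comment is my paraphrase):

      {$define AlwaysAllocateTopDown}
      // allocate address space from the top down, so pointers land above
      // MaxInt and pointer-truncation bugs fail fast instead of by luck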
  13. Kas Ob.

    QueryPerformanceCounter precision

    You called it a bug, and I didn't correct you on that, but there is no bug at all, because these two are controlling your OS tempo, like a maestro. The thing is, they hide the real hardware frequency, and this is where Microsoft does it differently from, let's say, Apple. If you overclock your hardware these values might change, but the result of using the pair together will stay consistent; unlike Apple. If you ever tried to install a Hackintosh (not illegal to install) on your OEM PC, you might have faced this problem: I have an unlocked CPU, an i7-2600K, and by default from the first start it ran at 3.8GHz instead of 3.4GHz; every time I reset the BIOS the motherboard does that. Anyway, installing/running a Hackintosh on an overclocked/downclocked PC will fail unless you specify the timing parameters in the boot settings with 100% accuracy; this is not needed on Apple hardware.

    Now, to access hardware timing, like the motherboard RTC, you need to be able to execute specific hardware instructions that are prohibited in user mode, plain and simple.

    Also, while timing on this OS will not be accurate, you can use a different approach for timing, which is counting cycles per instruction block, but again that is not everyone's cup of tea; it is the most accurate way to compare speed and performance between algorithms/code blocks.

    QPF and QPC values are worthless alone; used together (see the sketch below), they are always right.

    ps: The 10m is coming from the motherboard bus clock, which in turn is associated with the other clocks' base and effective timings (usually the same, and usually 100MHz), while your CPU runs at a multiplier of that clock to reach its frequency; memory modules run on a completely different clock multiplier with different timings (something like 800, 1333, 1600... MHz). The OS picks a frequency base to run at, and it will not reflect the speed of your device; on older devices like mine the OS sometimes prefers a smaller frequency so it stays in control, and on newer devices it might prefer a higher one.
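    A minimal sketch of using the pair together, which is the only meaningful way to turn counter ticks into seconds:

      uses
        Winapi.Windows;

      var
        Freq, T0, T1: Int64;
      begin
        QueryPerformanceFrequency(Freq);   // e.g. 10,000,000 (the "10m" above)
        QueryPerformanceCounter(T0);
        // ... code under test ...
        QueryPerformanceCounter(T1);
        Writeln('Elapsed: ', (T1 - T0) / Freq:0:6, ' s');
      end;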
  14. Kas Ob.

    QueryPerformanceCounter precision

    I deliberately left it on Debug; it wasn't a mistake, and your question is a good one, but let's try to put this right once and for all.

    Starting with Debug vs Release: that was a small portion of code, and the only difference will be whether optimization is enabled or not. The difference in cycle consumption will also be very small, something between 2 and, let's say, 20 cycles. Even at 1000 cycles, how much can that affect the result? Remember, your CPU most likely runs at >3GHz, meaning >3 billion cycles per second, >3 million cycles per millisecond, >3 thousand cycles per microsecond, and >3 cycles per nanosecond. Now, do you think it will be that relevant? No, it will not.

    But this introduces a very ugly fact: why are the results so different, even in your last posted result? The answer is complex, as there are many factors playing roles here. Remember that we are in a controlled and protected sandbox (the Windows OS is emulating a sandbox), and it controls some aspects of your code's execution (software); it also controls how CPU cores switch between hardware execution points, and it emulates threads. This is done to protect integrity and to simulate real-world multitasking. Cache misses also have a huge impact on the result. One might say that in that code we don't access memory larger than the CPU's L1 cache; true, but a context switch will direct the CPU core to read code from a different place and load that chunk, and that code will need its own stack, hence these are two reads, and those two reads most likely reside in very different realms. So the L1 loading will trigger L2 and L3 loading too, and while L1 handles lines of 64 bytes, L2 and L3 work on larger blocks and will ask for a full page load, meaning 4k bytes; and for every load there is an eviction of the data already there, which must be guaranteed to be shipped back to the memory module. One context switch might take 10k cycles, but it might also take a few billion, and that is something you can't predict or control (there are ways to mitigate or control this to some extent, but I am not posting any of them on this forum, as they are close to writing a rootkit).

    To illustrate this effect I just ran your last code twice, and here is the result. I didn't close my open IDEs or this browser. In different runs I got results that differ even more than the screenshot: over a 10-second run there were more than 30 context switches, with a delta between runs of 5 switches and around 700 million cycles, while my system reports >1180 threads up and running; there is no way to guarantee that a switch went to the same other thread.

    So, after all of that, do you think measuring this stuff will be accurate on Windows? What accuracy can you reach? Nope, there is zero guarantee it will ever be exact, but statistically it will converge; now refer to the table of cycles per fraction of a second above.

    The only way to measure time with higher precision on Windows is by using averages over a longer running time (it might also help to use other statistical methods, like deviation and excluding ranges, e.g. removing some results as margins caused by OS interference, based on their distance from the median... etc.); a sketch follows below.

    Hope that was clear and helpful.
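    A minimal sketch of the averaging idea; MedianTicks is a made-up helper that times a block many times and takes the median, which is robust against the occasional context switch:

      uses
        Winapi.Windows, System.SysUtils, System.Generics.Collections;

      function MedianTicks(const Proc: TProc; Runs: Integer): Int64;
      var
        Samples: TList<Int64>;
        T0, T1: Int64;
        i: Integer;
      begin
        Samples := TList<Int64>.Create;
        try
          for i := 1 to Runs do
          begin
            QueryPerformanceCounter(T0);
            Proc();
            QueryPerformanceCounter(T1);
            Samples.Add(T1 - T0);
          end;
          Samples.Sort;                    // OS-interference outliers land at the ends
          Result := Samples[Runs div 2];   // the median ignores those margins
        finally
          Samples.Free;
        end;
      end;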
  15. Kas Ob.

    QueryPerformanceCounter precision

    The result after closing everything on my Windows. And with one IDE running plus Chrome with a YouTube video open (yes, once opened, YouTube will start playing around with the timer resolution, hence it will affect the IDE and everything else; the IDE does the same!).

    Now let's point at your code:
    1) MilliSecondOf might return 0; this means ms1 = -1 -> endless loop! (A reconstruction follows below.)
    2) You are assuming you will hit the same millisecond plus one; based on what? That "if equal" logic invites unpredictable behaviour. To rephrase: let's assume the Windows timer is very accurate and the timer precision is 2; this means we should hit either evens or odds, and the more accurate the OS is, the more consistent this becomes. But the same assumption means we might occasionally hit the other parity (even or odd), and that is valid too; so what if we hit the other value only once and then continue for 1000 iterations (or forever)? The same logic can be applied to the biggest prime number less than 1000: we will hit it again only after a cycle that depends on its value, right?
    3) There is no point chasing timing measurements from protected mode on a non-real-time OS like Windows; you cannot achieve that in a well-defined and documented way.

    Anyway, I respect your curiosity and persistence to know; don't lose that! Just time the time you are spending on timing.
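    A hypothetical reconstruction of the failure in point 1 (your loop isn't quoted here, so the shape is assumed): MilliSecondOf returns 0..999, so subtracting 1 from a 0 reading produces a value the comparison can never match:

      uses
        System.DateUtils;

      var
        ms1: Integer;
      begin
        ms1 := MilliSecondOf(Now) - 1;    // MilliSecondOf = 0  ->  ms1 = -1
        repeat
          // busy wait
        until MilliSecondOf(Now) = ms1;   // never true once ms1 = -1: endless loop
      end;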