Skullcode

TWSocket UDP: how to get the exact buffer size that was received?

Recommended Posts

I am trying to receive some bytes that I have sent, but I am not able to size the receive buffer because I do not know how to get the number of bytes received.

 

Currently, I do:

 

procedure TForm1.udReciveDataAvailable(Sender: TObject; ErrCode: Word);
var
  MBytes: TBytes;
begin
  SetLength(MBytes, 0);
  udRecive.Receive(MBytes, Length(MBytes));
end;

but it always returns 0 bytes, because I set the length to 0; I don't know how to set it to the exact length of the received packet.

 

How do I know the exact length of the received packet so I can size the MBytes variable accordingly?

Guest

While waiting for someone with more knowledge to answer your question, I want to point out a few things that will help you, not just in this case but in others too.

2 hours ago, Skullcode said:

but it always returns 0 bytes, because I set the length to 0; I don't know how to set it to the exact length of the received packet.

How do I know the exact length of the received packet so I can size the MBytes variable accordingly?

Let's start with I/O operations. Socket receiving and sending are input/output operations, so they are largely out of your software's control; they depend mostly on the OS and the hardware. So, as a rule of thumb, you should always check the result against what you requested. With sockets in general, and UDP in particular, you ask for an operation on n bytes, the underlying OS call reports back m bytes (how much it actually managed to do), and m <= n always. If you issue a send of 765 bytes, you should check whether 765 bytes were actually sent. The same is true for reading or writing files. Most I/O operations are not designed to be black and white, success or fail; in many cases the answer is more like "OK, I managed to do 4 out of your 9".
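
For example (not from the original post), checking a send result against the request could look like this rough sketch; SendAndCheck, udSend and Data are invented names, and the classic ICS TWSocket.Send(pointer, length) overload is assumed:

uses
  OverbyteIcsWSocket;  // assumed ICS unit providing TWSocket

procedure SendAndCheck(udSend: TWSocket; const Data: TBytes);
var
  BytesSent: Integer;
begin
  if Length(Data) = 0 then
    Exit;
  // ask to send the whole buffer, then compare the result with the request
  BytesSent := udSend.Send(@Data[0], Length(Data));
  if BytesSent <> Length(Data) then
  begin
    // not everything was accepted: log it, retry later, or queue the remainder
  end;
end;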

So, with that in mind, about your "but it always returns 0": your code doesn't even show how you get that 0, and even on a successful read you asked for 0 bytes. Your code is wrong in at least two places, and it should be something like this:

SetLength(MBytes, SomeBufferSize);

BytesReceived := udRecive.Receive(MBytes, Length(MBytes));
// SetLength(MBytes, BytesReceived);  // optional: trim to what was actually read
if BytesReceived > 0 then
begin
  // process BytesReceived bytes of MBytes here
end
else
begin
  // nothing read (or an error); handle it here
end;

But how do we decide SomeBufferSize? We could ask the I/O layer for a value, but in many cases that is less efficient than simply asking for the most we can handle and letting the I/O operation fill in what it can. So what is the best value for a UDP buffer?

I googled "maximum udp buffer size" and confirmed that 64 KB is the maximum size of a UDP packet. That number is relatively small and manageable for any code, so why not make it our standard buffer? This code will in general be a better approach:

const
  OUR_MAX_UDP_BUFFER = 64 * 1024;

procedure TForm1.udReciveDataAvailable(Sender: TObject; ErrCode: Word);
var
  MBytes: TBytes;
  BytesReceived: Integer;
begin
  SetLength(MBytes, OUR_MAX_UDP_BUFFER);
  BytesReceived := udRecive.Receive(MBytes, Length(MBytes));

  if BytesReceived > 0 then
  begin
    // process BytesReceived bytes of MBytes here
  end;
end;

You can also move MBytes from a local variable to a field on your Form1; then you only need to allocate it once, at the biggest size as above, and you never need to trim or free it.
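
As an illustration of that "allocate once" variant (a sketch, not from the original post; FRecvBuf is an invented field name):

type
  TForm1 = class(TForm)
    // ... components and event handlers ...
  private
    FRecvBuf: TBytes;  // reused receive buffer, allocated once
  end;

procedure TForm1.FormCreate(Sender: TObject);
begin
  SetLength(FRecvBuf, OUR_MAX_UDP_BUFFER);  // biggest size, allocated one time
end;

procedure TForm1.udReciveDataAvailable(Sender: TObject; ErrCode: Word);
var
  BytesReceived: Integer;
begin
  BytesReceived := udRecive.Receive(FRecvBuf, Length(FRecvBuf));
  if BytesReceived > 0 then
  begin
    // process only the first BytesReceived bytes of FRecvBuf; no need to resize it
  end;
end;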

Share this post


Link to post

You can use the OverbyteIcsUdpLstn sample to see how to receive UDP data. You won't have megabytes of data waiting when that event is called; in fact you will never know how much UDP data is being sent, since it only arrives one packet at a time. So typically you use ReceiveFrom to receive a maximum of, say, 4K, and then add that to a large receive buffer.
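
A rough sketch of that pattern (not Angus's code): read up to 4K per call with ReceiveFrom and append it to a larger buffer. The ReceiveFrom parameter list used here (buffer pointer, size, var From, var FromLen) is the classic ICS one and may differ between ICS versions; FRecvStream is an invented TMemoryStream field.

procedure TForm1.udReciveDataAvailable(Sender: TObject; ErrCode: Word);
var
  Buf: array[0..4095] of Byte;   // 4K scratch buffer
  From: TSockAddr;
  FromLen: Integer;
  Len: Integer;
begin
  FromLen := SizeOf(From);
  Len := udRecive.ReceiveFrom(@Buf[0], SizeOf(Buf), From, FromLen);
  if Len > 0 then
    FRecvStream.WriteBuffer(Buf[0], Len);  // accumulate into the large receive buffer
end;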

 

It is much easier to use the new OverbyteIcsIpStreamLog component, which does all this for you; look at the OverbyteIcsIpStmLogTst sample.

 

Angus

 

14 hours ago, Kas Ob. said:

So, as a rule of thumb, you should always check the result against what you requested ... Your code is wrong in at least two places, and it should be something like this ...

Thank you very much for the clarification. I now do something like this:

 

var
  OUR_MAX_UDP_BUFFER: Integer;
  MBytes: TBytes;
  BytesReceived: Integer;
begin
  OUR_MAX_UDP_BUFFER := 4 * 1024;
  SetLength(MBytes, OUR_MAX_UDP_BUFFER);

  BytesReceived := udRecive.Receive(MBytes, Length(MBytes));

  if BytesReceived > 0 then
  begin
    SetLength(MBytes, BytesReceived);
    // use MBytes as needed
  end;
end;

It is working, but is what I am doing correct? I am trying to handle it in the best way possible.

4 hours ago, Skullcode said:

It is working, but is what I am doing correct? I am trying to handle it in the best way possible.

Another option would be to peek the socket first to query the size of the next available datagram without reading it, then allocate the buffer to that size, then read the datagram into the buffer. But a dynamic array of raw bytes is trivial to shrink, so over-allocating a little is not a bad thing (unless you are running on an embedded system).
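
For illustration only (not part of the original reply), a minimal sketch of that query-first idea using Winsock's ioctlsocket/FIONREAD; it assumes the ICS socket exposes its handle as HSocket, and note that for a UDP socket FIONREAD may report the total pending data rather than just the first datagram, so treat the value as an upper bound:

uses
  Winapi.WinSock2;  // for ioctlsocket and FIONREAD

procedure TForm1.udReciveDataAvailable(Sender: TObject; ErrCode: Word);
var
  Pending: u_long;
  MBytes: TBytes;
  BytesReceived: Integer;
begin
  Pending := 0;
  if ioctlsocket(udRecive.HSocket, FIONREAD, Pending) = 0 then
  begin
    SetLength(MBytes, Pending);                  // size the buffer to what is pending
    BytesReceived := udRecive.Receive(MBytes, Length(MBytes));
    if BytesReceived > 0 then
      SetLength(MBytes, BytesReceived);          // trim to what was actually read
  end;
end;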

Edited by Remy Lebeau

Guest
7 hours ago, Skullcode said:

It is working, but is what I am doing correct? I am trying to handle it in the best way possible.

Great to hear it is working. You are doing it right now, but there are still a few things to reconsider.

 

1) Angus's and Remy's comments are very valuable and you really should read everything above and remember it. I forgot to point out that you will always be safer starting from the working demos of any library, because those demos/samples are written by the people who know it best.

 

2) You supplied code, working code per your own words, and you are still afraid it might not be right. I recommend that you start experimenting on your own, and again, if you are using ICS, try its demos and samples and try to understand how they are built; the only thing you will lose is your lack of confidence.

 

3) I said you are better off with a 64 KB receive buffer, but this subject needs a lot of background, so I will explain a little more. Here I am assuming you are familiar with the characteristics of UDP; are you?

Are you sure you need UDP and not TCP? (Google will help with these questions.)

 

I will not go on at length, since this information is all on the internet. The point I want to make is that if you are not sure how to receive into a UDP buffer, then most likely you also don't yet know how to deal with lost packets. In other words, since you cannot prevent packet loss, you need to manage it and recover from it. With UDP there will be lost packets, and how many depends on the data and your app (and many other factors out of your hands and control). If the packets are critical, they need to be resent, which means you need a confirmation mechanism, etc.

 

The point is: read more; the internet has many resources on this.

 

Now back to 64 KB: you are always better off with a 64 KB receive buffer, always! But that is not the case for the send buffer. So for receiving with UDP I highly recommend sticking to 64 KB, while the send size might be, as Angus said, 4 KB. Why is this important?

A) Smaller UDP packets have a lower rate of being lost or dropped on the wire.

B) There are packets of all kinds floating around, and you might need to test different approaches. By maxing out the receive buffer you force yourself to handle the data based on its content, not its length (relying on length is a very common mistake). Your app is then also ready for testing and tuning with different parameters in the future, without rebuilding and redeploying both client and server; that means you can tweak your system online, or even make it dynamic, e.g. the more dropped packets, the smaller you make the packets.

 

 

PS: in my code I suggested, as best practice, keeping the buffer size as a global constant, and I don't understand the point of changing it to a local variable with a fixed value; keeping such numbers as global constants makes them easier to tweak in the future.


The most important issue with the DataAvailable event is not the size of your Receive/ReceiveFrom buffer, but that you should loop within the event, continually reading all waiting data into a larger public receive buffer or stream until Receive/ReceiveFrom returns 0 or less. If you don't do that, the event will be called again immediately after you exit it, to empty the internal receive buffers.
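
A minimal sketch of that drain-the-socket loop (illustrative only; FRecvStream is an invented TMemoryStream field, and the 4K scratch size is just an example):

procedure TForm1.udReciveDataAvailable(Sender: TObject; ErrCode: Word);
var
  Buf: TBytes;
  Len: Integer;
begin
  SetLength(Buf, 4 * 1024);
  repeat
    Len := udRecive.Receive(Buf, Length(Buf));
    if Len > 0 then
      FRecvStream.WriteBuffer(Buf[0], Len);  // append to the larger receive stream
  until Len <= 0;                            // keep reading until nothing is waiting
end;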

 

There is no guarantee about the length of data any call to Receive/ReceiveFrom will return, even for UDP; it might take several events for a full packet to be assembled. Rarely, but it can happen.

 

As I said before, all this is done for you in the OverbyteIcsIpStreamLog component.

Angus

 

8 hours ago, Angus Robertson said:

it might take several events for a full packet to be assembled. Rarely, but it can happen.

That is true for TCP, where there is no 1:1 relationship between sends and reads; it is just a stream of bytes. But it is not true for UDP, where there is a 1:1 relationship, since sends/reads deal only in whole datagrams. A datagram can't span multiple read events: a read operation must read the whole datagram in one go, or else the unread portion is lost.



I agree that UDP should send whole packets, but when they arrive, two or more may be buffered before they are read in the DataAvailable event. Also, they may not be sent as complete packets; for instance a record may be sent in one send/packet and then a CRLF as the next send/packet, so if the application is waiting for that CRLF as a record separator, it needs two packets. So it is best to treat UDP as a stream.

 

Angus

 

6 hours ago, Angus Robertson said:

I agree that UDP should send whole packets, but when they arrive, two or more may be buffered before they are read in the DataAvailable event.

Even so, each read operation will only receive 1 datagram at a time.  And as you said yourself, if you leave unread data in the socket when exiting the OnDataAvailable event, it will just be fired again.  So, you can read 1 datagram per event triggering, no need for a loop inside the event (although you can certainly do that, too - unless the socket is operating in blocking mode, in which case DON'T use a loop!).

6 hours ago, Angus Robertson said:

Also, they may not be sent as complete packets; for instance a record may be sent in one send/packet and then a CRLF as the next send/packet

If application data spans multiple send() calls, then it would be treated as separate datagrams, and each datagram is independent of other datagrams, regardless of how the network fragments them into packets during transmission.  The socket provider handles fragmentation, so 1 send() = 1 read() as far as applications are concerned.  And in that vein, it makes no sense to use delimiters across UDP datagrams, since each datagram is self-contained.  Record data should not span across datagram boundaries.

6 hours ago, Angus Robertson said:

so if the application is waiting for that CRLF as a record separator, it needs two packets. So it is best to treat UDP as a stream.

I would not treat UDP as a stream, because it is not a stream. UDP is message-oriented, not stream-oriented. Each datagram is meant to be treated on its own, regardless of other datagrams. For example, when sending records over UDP, use only one send per record, not multiple sends per record; that simply does not work over UDP.
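
To illustrate the one-record-per-datagram point (a sketch, not code from this thread; SendRecord is an invented helper and the classic ICS TWSocket.Send(pointer, length) overload is assumed):

procedure SendRecord(udSend: TWSocket; const Rec: string);
var
  Buf: TBytes;
begin
  // pack the whole record into a single buffer; no trailing CRLF separator needed
  Buf := TEncoding.UTF8.GetBytes(Rec);
  if Length(Buf) > 0 then
    udSend.Send(@Buf[0], Length(Buf));  // one Send = one self-contained datagram
end;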

19 hours ago, Kas Ob. said:

Great to hear it is working. You are doing it right now, but there are still a few things to reconsider ...

I am coming from a Java/Kotlin background, and using a UDP client on those platforms is different, which is why I was really confused about the receive event. The ICS socket demo is very disorganised for someone like me who doesn't have solid Delphi knowledge. On other platforms a UDP client has a parameter in the receive event that holds the received bytes, ready to use. I am reading each reply and trying to digest it so I understand better.

1 hour ago, Skullcode said:

on other platforms a UDP client has a parameter in the receive event that holds the received bytes, ready to use

Those other libraries are most likely either reading the UDP data for you and then giving you each datagram's actual data in the event, or peeking the socket to determine the available bytes and telling you that size. ICS does neither; it just notifies you that datagrams have arrived, and does nothing to glean information about them to present to you.

 

For comparison, Indy's TIdUDPClient (which is not event-driven) requires you to provide a pre-allocated buffer for it to receive bytes into, whereas TIdUDPServer (which is event-driven) reads each datagram into an internal buffer and then fires an event giving you the actual bytes that were read into that buffer.
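
For illustration only (not from the original reply), the Indy server-side event looks roughly like this, assuming a recent Indy 10 where OnUDPRead passes the datagram bytes directly (older versions pass a TStream instead):

uses
  IdGlobal, IdSocketHandle, IdUDPServer;

procedure TForm1.IdUDPServer1UDPRead(AThread: TIdUDPListenerThread;
  const AData: TIdBytes; ABinding: TIdSocketHandle);
begin
  // AData already holds exactly one datagram, sized to what was actually received
  if Length(AData) > 0 then
  begin
    // process AData here (note: this event fires in a listener thread, not the main thread)
  end;
end;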

 

So, there is room for different mentalities, depending on your needs.

Edited by Remy Lebeau

