Dmitry Onoshko 0 Posted August 22

I’ve recently got really worried about TCP send buffer overflows. I use TServerSocket and TClientSocket in asynchronous mode, and I’m going to send file data interleaved with smaller packets through them. What really scares me and doesn’t fit well into the whole picture is the case when SendBuf might return –1 (or anything else not equal to the size of the buffer being sent). I basically have two questions:

1) What is the proper way to detect a client that just doesn’t read the data from its socket (say, because it got hung)?

2) How should one handle the case when SendBuf says the buffer hasn’t been sent?

In general I do understand I should try to SendBuf a little later, even better — in the OnClientWrite event handler. To do that I’d have to use some sort of queue and put the data there. But then I think about the client that doesn’t read data: since there are shorter messages, not only file blocks, the queue might start growing and the logic gets really complicated.

Any thoughts are appreciated.
Remy Lebeau 1421 Posted August 22 (edited)

2 hours ago, Dmitry Onoshko said: I’m going to send file data interleaved with smaller packets through them.

That will complicate your protocol, but it can work if your protocol design allows for it, such as by breaking the file data into small chunks and sending each chunk as an individual packet, where each packet carries a header specifying its type and size so the client can differentiate one packet from another. Make sure to NEVER overlap packets: don't send a new packet until the previous packet is done. That requires putting a lock on the socket if you send packets from multiple threads.

2 hours ago, Dmitry Onoshko said: 1) What is the proper way to detect a client that just doesn’t read the data from its socket (say, because it got hung)?

Assuming your protocol doesn't require the client to explicitly ack everything it receives (most protocols don't), then the only way to detect that condition is when SendBuf() fails with a WSAEWOULDBLOCK error because the socket's outgoing buffer filled up due to the client not removing data from that buffer. Until that buffer fills up, any data "sent" is simply buffered locally by the socket without error and is picked up by the OS behind the scenes, so you won't know whether the data was actually transmitted and received until an error occurs.

2 hours ago, Dmitry Onoshko said: 2) How should one handle the case when SendBuf says the buffer hasn’t been sent?

You have to check the socket error code to find out why SendBuf() failed. Any error code other than WSAEWOULDBLOCK should be treated as fatal, so close the connection. Otherwise, cache all currently unsent data (and future data) somewhere until the OnClientWrite event tells you that the socket can accept data again; then you can try re-sending your cached data, removing any bytes that are accepted by SendBuf() without error, until either the cache is empty or a new error occurs.
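The chunk-plus-header scheme described above is language-independent; here is a minimal sketch in Python (the packet-type constants and the 1-byte-type / 4-byte-length header layout are illustrative assumptions, not part of any real protocol):

```python
import struct

# Hypothetical packet types, for illustration only.
PKT_CHAT_MSG = 1
PKT_FILE_DATA = 2

def build_packet(ptype: int, payload: bytes) -> bytes:
    # Header: 1-byte type + 4-byte big-endian payload length, then the payload.
    return struct.pack(">BI", ptype, len(payload)) + payload

def file_packets(data: bytes, chunk_size: int = 4096):
    # Break file data into chunks so smaller messages can be interleaved
    # between them; each chunk is a complete, self-describing packet.
    for off in range(0, len(data), chunk_size):
        yield build_packet(PKT_FILE_DATA, data[off:off + chunk_size])
```

Because every chunk is a whole packet, a chat message can be sent between any two file chunks without ever splitting a packet in the stream.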
Quote In general I do understand I should try to SendBuf a little later, even better — in OnClientWrite event handler. To do that I’d have to use some sort of queue and put the data there.

Yes, exactly.

Quote But then I think about the client that doesn’t read data: since there are shorter messages, not only file blocks, the queue might start growing and the logic gets really complicated.

The logic is actually really simple:

When sending a new packet, if the cache is not empty then append the entire packet to the end of the cache and don't attempt to send it yet. Otherwise, send as many bytes as possible for the packet. If SendBuf() fails, stop sending the packet; if the error is WSAEWOULDBLOCK then save the remaining unsent bytes to the end of the cache.

In the OnClientWrite event, if the cache is not empty then send as many bytes as possible from the cache, removing any bytes that SendBuf() accepts. If SendBuf() fails for any reason (you don't need to look at the error code here), stop sending the cache and leave the remaining unsent bytes in it for a future event to re-try.

I've posted examples of this many times in the past in various forums. I use the TCustomWinSocket.Data property to hold the cache (usually using a TMemoryStream); that way the cache follows the socket as it's passed around.

But yes, the cache could grow very large if you don't put a max cap or timeout on it. If you try to send a lot of data to an unresponsive client, or if data stays in the cache for a long period of time, then assume the client is dead (even if the OS hasn't reported it yet) and close the connection.

On a similar note, the OnClientRead event can do similar caching. It should read all available bytes from the socket and put them into a separate cache, then you can remove and process only complete packets from the cache, waiting for future events to finish incomplete packets. Packets can span multiple OnClientRead events per socket.
Edited August 22 by Remy Lebeau 2
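The send-or-cache logic in the post above can be sketched in Python with a non-blocking socket, where BlockingIOError plays the role of WSAEWOULDBLOCK (the BufferedSender name and structure are illustrative, not the actual Delphi code):

```python
import socket

class BufferedSender:
    """Sketch of the send-or-cache pattern for a non-blocking socket."""

    def __init__(self, sock: socket.socket):
        self.sock = sock          # must already be in non-blocking mode
        self.cache = bytearray()  # unsent bytes, analogous to a TMemoryStream cache

    def send_packet(self, packet: bytes) -> None:
        # If the cache is non-empty, never send directly: that would
        # overlap packets. Append and wait for the writable event.
        if self.cache:
            self.cache += packet
            return
        try:
            sent = self.sock.send(packet)   # may be a partial send
        except BlockingIOError:             # WSAEWOULDBLOCK equivalent
            sent = 0
        self.cache += packet[sent:]         # stash whatever wasn't accepted

    def on_writable(self) -> None:
        # Analogue of the OnClientWrite handler: flush as much of the
        # cache as the socket will take, leave the rest for next time.
        while self.cache:
            try:
                sent = self.sock.send(bytes(self.cache))
            except BlockingIOError:
                break
            del self.cache[:sent]
```

A cap or timeout on `len(self.cache)` would go in `send_packet`, closing the connection when an unresponsive client lets the cache grow past the limit.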
Dmitry Onoshko 0 Posted August 23

10 hours ago, Remy Lebeau said: The logic is actually really simple: When sending a new packet, if the cache is not empty then append the entire packet to the end of the cache and don't attempt to send it yet. Otherwise, send as many bytes as possible for the packet. If SendBuf() fails, stop sending the packet; if the error is WSAEWOULDBLOCK then save the remaining unsent bytes to the end of the cache. In the OnClientWrite event, if the cache is not empty then send as many bytes as possible from the cache, removing any bytes that SendBuf() accepts. If SendBuf() fails for any reason (you don't need to look at the error code here), stop sending the cache and leave the remaining unsent bytes in it for a future event to re-try.

Thanks, that really makes it much clearer, and since I already have custom wrappers around TClientSocket and TServerSocket, adding a Tx buffer of customizable size to the wrappers looks promising.

But one thing is still unclear: SendBuf doesn’t return the WinSock error code, and neither of the socket classes seems to have a property to access it. I can only see two possibilities here: (1) the OnError handler that is called from SendBuf if the error is not WSAEWOULDBLOCK, and (2) calling WSAGetLastError again myself. The first option leads to complication of the OnError handler, which now has to store the error code somewhere just in case anyone asks. The second option feels like jumping across levels: it would require adding the WinSock unit to the uses clause and relying on the fact that the last operation to fail inside SendBuf is the send() I’m interested in. Is there any better option I can’t see?
Remy Lebeau 1421 Posted August 23 (edited)

11 hours ago, Dmitry Onoshko said: But one thing is still unclear: SendBuf doesn’t return the WinSock error code, and neither of the socket classes seems to have a property to access it. I can only see two possibilities here: (1) the OnError handler that is called from SendBuf if the error is not WSAEWOULDBLOCK, and (2) calling WSAGetLastError again myself.

If you need the error code, you would have to get it from either the OnClientError event or by calling WSAGetLastError() directly, yes. However, it is a bit simpler than that, because SendBuf() returns 0 on a graceful disconnect and -1 on failure, and if it fails and the error code is not WSAEWOULDBLOCK then it raises an ESocketError exception, unless an OnClientError handler sets its ErrorCode parameter to 0. That means that by default, SendBuf() returns -1 only for WSAEWOULDBLOCK, and raises an exception otherwise. So, be prepared to catch that exception and act accordingly.

Edited August 23 by Remy Lebeau
Dmitry Onoshko 0 Posted August 24

16 hours ago, Remy Lebeau said: However, it is a bit simpler than that, because SendBuf() returns 0 on a graceful disconnect and -1 on failure, and if it fails and the error code is not WSAEWOULDBLOCK then it raises an ESocketError exception, unless an OnClientError handler sets its ErrorCode parameter to 0. That means that by default, SendBuf() returns -1 only for WSAEWOULDBLOCK, and raises an exception otherwise. So, be prepared to catch that exception and act accordingly.

Well, in fact I do set the ErrorCode in the handler, ’cause the requirement is to provide custom user-friendly socket error handling, and it seems much easier to have a single point of failure instead of putting try…excepts every now and then.

BTW, am I right that the whole point of checking for WSAEWOULDBLOCK vs some other error code is to avoid adding data to the Tx buffer if the socket has already failed?

And, if you don’t mind, another small TServerSocket-related question. It seems to only have an OnClientError event but no “OnError”-kind event for the listening socket. Which makes me wonder if there’s any better way to handle, say, the listening port being already taken by another program than using SetErrorProc (which feels a bit messy, too).
Kas Ob. 121 Posted August 24

Hi,

Well, I have the feeling that you are hesitating to do the right thing out of fear of breaking something; I suggest you stop looking at this part of the code as if it were as fragile as a glass vase. Remy brilliantly explained everything you need to know. Now step back and look at the big picture: you need functionality out of TClientSocket and TServerSocket. They are nice, but far from the best or most practical design, so adjust them to your needs; that is the whole point of OOP to begin with.

For example, SendBuf does raise exceptions unless you suppress them by changing the ErrorCode in the event. Do you need that behavior? It is not a big deal, but why not add your own function, like SendData, which takes care of the buffering too? This will simplify the buffering. Also, if needed, add your own events: something like a buffering-triggered event to signal traffic congestion, slowness, a non-responding peer... and utilize this in your GUI; it is nice to see colored indicators of connection status.

If you look at what goes on inside SendBuf, all you need in order to extend it or replace it with a better version is two things: 1) don't forget to Lock and then Unlock, accessible as ClientSocket1.Socket.Lock; 2) you have the SocketHandle to pass to send (the API), also accessible as ClientSocket1.Socket.SocketHandle. Extend them from the outside, or, if you prefer OOP, things are even simpler.

As for your question about handling errors for Listen in TServerSocket: these are shortcomings in the design. Try following the DoActivate logic, and I think you will have everything you need to overload these functions with better, more suitable functionality. You can override TCustomWinSocket.Listen with your own version that raises an exception in case your (added) OnServerError event is not assigned.

In other words, have faith in yourself, you've got it!
55 minutes ago, Dmitry Onoshko said: SetErrorProc

I really suggest staying away from this, as it utilizes a threadvar; unless you really know what you are doing, you must know exactly when and by whom it is accessed, and, as you described, it can get messy very quickly.

Good luck and happy coding!
Kas Ob. 121 Posted August 24

And one more thing as a reminder about the socket send API: https://learn.microsoft.com/en-us/windows/win32/api/winsock2/nf-winsock2-send

You always need to check the result, the number of bytes sent, against your own data length; this is standard usage. But keep in mind that if send sent only part of the packet, this does not mean that the next send for the rest will fail with WSAEWOULDBLOCK. No, send can partially send in many other situations, like being busy/full locally or the network driver imposing interrupt moderation... and none of those situations does or should concern you. Just keep in mind: unless send returns SOCKET_ERROR and WSAGetLastError is WSAEWOULDBLOCK, you can safely try to send more; I always use loops. I have seen this bad design cause sluggish sending out of wrong assumptions!
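The point above (keep sending until the socket actually reports would-block, rather than stopping at the first partial send) can be sketched in Python, where BlockingIOError stands in for the SOCKET_ERROR + WSAEWOULDBLOCK case:

```python
import socket

def send_as_much_as_possible(sock: socket.socket, data: bytes) -> int:
    """Keep calling send() until everything is out or the socket would
    block; a partial send alone is not a reason to stop. Returns the
    number of bytes actually handed to the socket."""
    total = 0
    while total < len(data):
        try:
            sent = sock.send(data[total:])
        except BlockingIOError:   # the WSAEWOULDBLOCK case: stop, retry later
            break
        if sent == 0:
            break                 # defensive; send() normally raises on error
        total += sent
    return total
```

The caller then caches `data[total:]` for the next writable event instead of treating the partial send as a failure.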
Dmitry Onoshko 0 Posted August 24 (edited)

10 minutes ago, Kas Ob. said: Just keep in mind: unless send returns SOCKET_ERROR and WSAGetLastError is WSAEWOULDBLOCK, you can safely try to send more; I always use loops.

Thanks, I made a few corrections to take that into account. I also saw somewhere that a receive notification message would be posted to the message queue whenever ReceiveBuf leaves some data unread, so there's no point making a loop for ReceiveBuf, right?

Edited August 24 by Dmitry Onoshko
Kas Ob. 121 Posted August 24

22 minutes ago, Dmitry Onoshko said: I also saw somewhere that a receive notification message would be posted to the message queue whenever ReceiveBuf leaves some data unread, so there's no point making a loop for ReceiveBuf, right?

This is a long story! In short, the answer is yes: a loop is way better and will yield better throughput.

The slightly longer answer, with a suggestion: there is a very popular wrong practice of checking the length of the available data on a socket, then allocating a buffer, then performing the receive (recv) into that buffer with the already (assumed) known length. This is a waste of time! I suggest pre-allocating buffers for TCP; it could be 64 KB (exactly 1024*64) or 16 KB (any size will work, but it is better to use a multiple of 4 KB and not less than 8 KB), depending on the traffic speed you expect. Anyway, with that allocated buffer, just perform recv with the largest size your buffer can fit; there is no need to check the available data length. It will work as expected, will squeeze out speed, and will relieve the network drivers (and the TCP stack and provider) of this extra useless operation, an operation that goes into the kernel to the afd driver and returns empty-handed, while you could simply recv as much as you can; it will not behave any differently, and on the contrary will give you everything there is to receive in one shot.
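The suggestion above (one pre-allocated buffer, no up-front query of the available byte count) might look like this in Python; the 64 KB size is just the figure mentioned in the post:

```python
import socket

RECV_BUF_SIZE = 64 * 1024  # pre-allocated once, reused for every read

def read_chunk(sock: socket.socket, buf: bytearray) -> bytes:
    """Receive into a pre-allocated buffer without first querying how
    many bytes are available (no ioctlsocket/FIONREAD round trip):
    just ask for as much as the buffer can hold and take what arrives."""
    n = sock.recv_into(buf, len(buf))
    return bytes(buf[:n])
```

The buffer is allocated once per connection and reused, so each read is a single kernel call regardless of how much data happens to be waiting.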
Dmitry Onoshko 0 Posted August 24

6 minutes ago, Kas Ob. said: 42 minutes ago, Dmitry Onoshko said: I also saw somewhere that a receive notification message would be posted to the message queue whenever ReceiveBuf leaves some data unread, so there's no point making a loop for ReceiveBuf, right? This is a long story! In short, the answer is yes: a loop is way better and will yield better throughput.

I don’t really understand this part. My point was that a loop is not the way to go for receiving data (well, not the only one), since it would generally lead to several posted messages, and, when processing them, to a few useless recv calls if no data has arrived since then. But you seem to suggest using the loop anyway.

9 minutes ago, Kas Ob. said: I suggest pre-allocating buffers for TCP; it could be 64 KB (exactly 1024*64) or 16 KB (any size will work, but it is better to use a multiple of 4 KB and not less than 8 KB), depending on the traffic speed you expect. Anyway, with that allocated buffer, just perform recv with the largest size your buffer can fit; there is no need to check the available data length.

This is already done. After all, at least for TCP, knowing the amount of data available in advance has no benefit for the code: a “protocol message” might arrive in pieces and/or concatenated with the tail of the previous one and the beginning of the next one, so there still has to be some storage to be able to find the message boundaries. Unless, maybe, the protocol used is simple enough for the data stream to be parseable byte-by-byte with a simple finite automaton.
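The storage-to-find-message-boundaries idea can be sketched in Python, assuming a purely illustrative framing of a 1-byte type plus a 4-byte big-endian payload length; complete packets are removed from the receive cache and a trailing partial packet is left for the next read event:

```python
import struct

HEADER_LEN = 5  # illustrative framing: 1-byte type + 4-byte big-endian length

def extract_packets(stream: bytearray):
    """Pull every complete (type, payload) packet out of the receive
    cache; whatever remains is the start of a not-yet-complete packet
    that later reads will finish."""
    packets = []
    while len(stream) >= HEADER_LEN:
        ptype, plen = struct.unpack_from(">BI", stream)
        if len(stream) < HEADER_LEN + plen:
            break  # header seen, but the payload is not complete yet
        packets.append((ptype, bytes(stream[HEADER_LEN:HEADER_LEN + plen])))
        del stream[:HEADER_LEN + plen]
    return packets
```

Because the cache persists between read events, it doesn't matter whether a message arrives in pieces or glued to its neighbors; the parser only ever hands back whole messages.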
Kas Ob. 121 Posted August 24

2 minutes ago, Dmitry Onoshko said: I don’t really understand this part. My point was that a loop is not the way to go for receiving data (well, not the only one), since it would generally lead to several posted messages, and, when processing them, to a few useless recv calls if no data has arrived since then. But you seem to suggest using the loop anyway.

Stupid me! I missed the negation in your question, sorry for that. And you know what, you can go with a loop or without it; the difference is small, close to none, for anything but very intensive socket usage like sending/receiving large data (huge files).

I also missed pointing out the need to get rid of ioctlsocket: even when you are not asking for the available data length, ReceiveBuf uselessly does it internally; this can be avoided by using your own RecvData.
Remy Lebeau 1421 Posted August 24 (edited)

10 hours ago, Dmitry Onoshko said: Well, in fact I do set the ErrorCode in the handler, ’cause the requirement is to provide custom user-friendly socket error handling, and it seems much easier to have a single point of failure instead of putting try…excepts every now and then.

Fair enough. In that case, SendBuf() would return -1 without raising an exception on all socket errors, so you would have to call WSAGetLastError() manually after SendBuf() returns. However, on a real error, SendBuf() will call closesocket() before exiting, so you might lose the error code.

10 hours ago, Dmitry Onoshko said: BTW, am I right that the whole point of checking for WSAEWOULDBLOCK vs some other error code is to avoid adding data to the Tx buffer if the socket has already failed?

Basically, yes. There is no point in wasting overhead adding data to the Tx buffer if you know it's never going to be sent.

10 hours ago, Dmitry Onoshko said: It seems to only have an OnClientError event but no “OnError”-kind event for the listening socket. Which makes me wonder if there’s any better way to handle, say, the listening port being already taken by another program than using SetErrorProc (which feels a bit messy, too).

If the listening socket fails to bind/open the port, an ESocketError exception is raised unless SocketErrorProc is assigned (which it is not by default). So, again, I would suggest using a try..except block. Your aversion to exception handling is actually complicating your usage rather than simplifying it.

8 hours ago, Dmitry Onoshko said: I also saw somewhere that a receive notification message would be posted to the message queue whenever ReceiveBuf leaves some data unread, so there's no point making a loop for ReceiveBuf, right?

TServerSocket/TClientSocket use WSAAsyncSelect() to trigger events for non-blocking sockets.
The documentation for WSAAsyncSelect() explains this behavior:

Quote For FD_READ, FD_OOB, and FD_ACCEPT events, message posting is level-triggered. This means that if the reenabling routine is called and the relevant condition is still met after the call, a WSAAsyncSelect message is posted to the application. This allows an application to be event-driven and not be concerned with the amount of data that arrives at any one time. Consider the following sequence: 1. The network transport stack receives 100 bytes of data on socket s and causes Windows Sockets 2 to post an FD_READ message. 2. The application issues recv(s, buffptr, 50, 0) to read 50 bytes. 3. Another FD_READ message is posted because there is still data to be read. With these semantics, an application need not read all available data in response to an FD_READ message—a single recv in response to each FD_READ message is appropriate. If an application issues multiple recv calls in response to a single FD_READ, it can receive multiple FD_READ messages. Such an application can require disabling FD_READ messages before starting the recv calls by calling WSAAsyncSelect with the FD_READ event not set.

In general, when you receive an OnClientRead event, it is best to simply read everything that is currently available on the socket in one go, even if that means buffering it somewhere. But if you were to call ReceiveBuf() in a loop, that is not the end of the world; you just may receive additional OnClientRead events with no data available, which is not a problem as long as you are handling WSAEWOULDBLOCK correctly while reading.

Edited August 24 by Remy Lebeau
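The "read everything currently available in one go" approach, with would-block handled as the normal loop exit, can be sketched in Python (drain_socket is an illustrative name, not a ScktComp API; BlockingIOError is the WSAEWOULDBLOCK analogue):

```python
import socket

def drain_socket(sock: socket.socket, cache: bytearray) -> None:
    """On a read event, pull everything currently available from a
    non-blocking socket into the application-side cache; stop cleanly
    when the socket would block."""
    while True:
        try:
            data = sock.recv(16 * 1024)
        except BlockingIOError:   # nothing left right now; wait for next event
            return
        if not data:              # empty read means graceful disconnect
            return
        cache += data
```

After draining, a framing parser can pull complete messages out of `cache`; a spurious read event with no data simply hits the would-block branch immediately and returns.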
Dmitry Onoshko 0 Posted August 24

1 hour ago, Remy Lebeau said: If the listening socket fails to bind/open the port, an ESocketError exception is raised unless SocketErrorProc is assigned (which it is not by default). So, again, I would suggest using a try..except block. Your aversion to exception handling is actually complicating your usage rather than simplifying it.

I’ve just tried it, and it seems I get no exception when trying to Open a TServerSocket on a busy port. In the debugger I can see that both the bind and listen calls somehow succeeded. WinSock < 2.0 stuff, maybe?

1 hour ago, Remy Lebeau said: But if you were to call ReceiveBuf() in a loop, that is not the end of the world; you just may receive additional OnClientRead events with no data available, which is not a problem as long as you are handling WSAEWOULDBLOCK correctly while reading.

I’m somewhat worried about the maximum size of the Windows message queue. AFAIR, it was something around tens of thousands of messages by default (at least in WinXP), and even if it has been increased since then, tens of thousands of clients are expected for the program, so (1) saving a few messages seems to be a good idea, (2) I'm starting to feel the good old library is not that good for the task, (3) especially since SOMAXCONN is 5 there, (4) why on earth is there no other good socket library included with Delphi, and (5) OMG, I’ll probably have to implement IOCP-based sockets myself, handling all the quirks of multithreading; but then again, DB access is required for most of the messages that get received, so serialization is still a thing, and…
Kas Ob. 121 Posted August 25

9 hours ago, Dmitry Onoshko said: (2) I start feeling the good old library is not that good for the task,

You picked one of the worst and most outdated socket libraries; literally, it has not been installed by default in the IDE for many years, it is deprecated.

9 hours ago, Dmitry Onoshko said: (1) saving a few messages seems to be a good idea,

You can't hit that message limit easily; in fact, I doubt it is even possible if you tried. Hitting that limit with socket messages would be an accomplishment.

9 hours ago, Dmitry Onoshko said: (3) especially since SOMAXCONN is 5 there,

SOMAXCONN has nothing to do with the maximum number of connections a server can serve; it is about the backlog: how many sockets can be trying to connect at the same exact time (literally the same millisecond or microsecond). The rest at that exact moment will be dropped.

9 hours ago, Dmitry Onoshko said: (4) why on earth there’s no other good socket library included with Delphi

You have many to pick from: Indy, ICS, mORMot/mORMot2, RealThinClient; these are open source.
Remy Lebeau 1421 Posted August 25 (edited)

9 hours ago, Dmitry Onoshko said: I’ve just tried it and it seems I get no exception when trying to Open a TServerSocket on a busy port. In debugger I can see both bind and listen calls somehow succeeded. WinSock < 2.0 stuff, maybe?

Not likely. If they didn't fail, then the port was available. Verify that with command-line netstat, or SysInternals TCPView, etc. For example, if something else is bound to 127.0.0.1:12345 and you bind to 0.0.0.0:12345, it will succeed as long as port 12345 is available on an IP other than 127.0.0.1. By default, TServerSocket binds to 0.0.0.0 only; it takes some effort to bind it to a specific IP.

Quote I’m somewhat worried about the maximum size of the Windows message queue. AFAIR, it was something around tens of thousands of messages by default (at least, in WinXP), and even if it has been increased since then, tens of thousands of clients are expected for the program, so (1) saving a few messages seems to be a good idea, (2) I start feeling the good old library is not that good for the task

Do not use TServerSocket in non-blocking mode with that many concurrent clients. Even if the message queue could handle that many messages, the message loop runs in only one thread and can handle only one message at a time, so overall performance will be terrible; more so if your server has a UI to service, too. Try using TServerSocket in blocking mode instead, which handles clients using threads instead of window messages. But even so, with that many concurrent clients you are probably better off using direct socket API calls with Overlapped I/O, I/O Completion Ports, or Registered I/O. That will give you the best performance with minimal overhead, as those APIs are designed for high-performance servers with large client loads. The APIs that TServerSocket uses are good only for small-to-medium-sized servers.
Quote (3) especially since SOMAXCONN is 5 there

SOMAXCONN does not limit the number of clients that can be connected to the server concurrently. That number is limited only by available memory. The listen() API has a parameter to set the size of the server's backlog, which is a staging area where connected clients wait to be accept()'ed by the app's code. SOMAXCONN is a special flag for that parameter; it lets Winsock decide a reasonable size for the backlog.

Quote (4) why on earth there’s no other good socket library included with Delphi

Umm, you don't consider Indy to be a good alternative? It's been bundled with every version of Delphi for the past 20-odd years.

Quote (5) OMG, I’ll probably have to implement IOCP-based sockets myself handling all the quirks of multithreading

With the sheer number of clients you want to handle, I would recommend that, yes.

Quote but then again DB access is required for most of the messages that get received, so serialization is still a thing

Certainly, though thread pooling and connection pooling are good for that kind of stuff. Definitely do not create thousands of DB connections. Lots of clients should be able to share resources. For instance, while one client is reading a message, it doesn't need the DB yet, so another client could be using it in the meantime; and when that client is done with it, let another client use it.

Edited August 25 by Remy Lebeau
Dmitry Onoshko 0 Posted August 25

1 hour ago, Kas Ob. said: You can't hit that message limit easily; in fact, I doubt it is even possible if you tried. Hitting that limit with socket messages would be an accomplishment.

Well, maybe not so easy with sockets, but I remember hitting it when I was a student, writing a program that did some work in a separate thread and notified the UI by posting messages. Really easy.

1 hour ago, Kas Ob. said: SOMAXCONN has nothing to do with the maximum number of connections a server can serve; it is about the backlog: how many sockets can be trying to connect at the same exact time (literally the same millisecond or microsecond). The rest at that exact moment will be dropped.

1 hour ago, Remy Lebeau said: SOMAXCONN does not limit the number of clients that can be connected to the server concurrently. That number is limited only by available memory. The listen() API has a parameter to set the size of the server's backlog, which is a staging area where connected clients wait to be accept()'ed by the app's code. SOMAXCONN is a special flag for that parameter; it lets Winsock decide a reasonable size for the backlog.

Yep, I know. But (1) what if even part of the expected clients connect simultaneously, and (2) why would the value change from 5 to $7FFFFFFF from WinSock 1 to WinSock 2? I understand the second value should have been used since WinSock 1, but they probably had reasons to choose 5, and whether it will be treated as a real maximum when WSAStartup initializes to version 1 is a big question for me.

1 hour ago, Kas Ob. said: You have many to pick from: Indy, ICS, mORMot/mORMot2, RealThinClient; these are open source.

The point was to avoid using third-party libraries and to have asynchronous sockets.

1 hour ago, Remy Lebeau said: Umm, you don't consider Indy to be a good alternative? It's been bundled with every version of Delphi for the past 20-odd years.
I’m still worried about the fact that it uses blocking sockets, so I’m not sure what effect that would have with tens of thousands of clients. And with the need for DB access when serving the requests. I understand it doesn’t (shouldn’t?) use a thread-per-client approach, but…
Remy Lebeau 1421 Posted August 25 (edited)

2 hours ago, Dmitry Onoshko said: (1) what if even part of the expected clients connects simultaneously

If the backlog fills up, subsequent clients will be rejected with a (WSA)ECONNREFUSED error until space clears up.

Quote (2) why would the value change from 5 to $7FFFFFFF from WinSock 1 to WinSock 2?

Why do any APIs evolve over time? Because the designers learned from past experience and chose a better value that made more sense. The value 5 was used in Winsock 1 at a time when Windows had limited resources. Then resources grew, and Winsock 2 reflected that. And, for what it's worth, just because SOMAXCONN is now defined as 0x7fffffff does not mean the backlog will actually be that big. https://stackoverflow.com/questions/4709756/listen-maximum-queue-size-per-windows-version

Quote I understand the second value should have been used since WinSock 1, but they probably had reasons to choose 5, and whether it will be treated as a real maximum when WSAStartup initializes to version 1 is a big question for me.

Why is that even an issue? Why would you even allow it to initialize to v1? You have to specify a desired version when calling WSAStartup(), and you are responsible for validating that a version suitable for your needs was actually chosen. If the version is too low, stop; don't move forward. Period. Winsock v2 was released 30 years ago, so unless you are targeting Windows 95, forget about v1 and any limitations related to it.

Quote The point was to avoid using third-party libraries and to have asynchronous sockets.

Nothing built into Delphi uses async sockets. So either use a 3rd-party lib that already implements them, or re-invent the wheel yourself. Your project, your choice.

Quote I’m still worried about the fact that it uses blocking sockets, so I’m not sure what effect that would have with tens of thousands of clients.

Indy has been tested with thousands of simultaneous clients. Tens of thousands is probably pushing it.
Quote And with the need of DB access when serving the requests.

You are going to run into that problem no matter which socket technology you choose.

Quote I understand it doesn’t (shouldn’t?) use a thread-per-client approach, but…

Indy does use a thread-per-client approach for its TCP servers. It tried implementing fibers and IOCP many years ago, but that approach didn't work out and the effort was abandoned. Maybe it will try again one day, or maybe not; who knows.

Edited August 25 by Remy Lebeau
Dmitry Onoshko 0 Posted August 25

6 minutes ago, Remy Lebeau said: And, for what it's worth, just because SOMAXCONN is now defined as 0x7fffffff does not mean the backlog will actually be that big. https://stackoverflow.com/questions/4709756/listen-maximum-queue-size-per-windows-version

Yep, I understand that; that’s why I wrote they should’ve used some value like DWORD(–1), or MAXINT, or MAXLONG, or something like that in the first place, since that’s quite a common, let me call it so, design pattern. If my memory serves, Raymond Chen called similar things sentinel values.

8 minutes ago, Remy Lebeau said: Why is that even an issue? Why would you even allow it to initialize to v1? You have to specify a desired version when calling WSAStartup(), and you are responsible for validating that a version suitable for your needs was actually chosen. If the version is too low, stop; don't move forward. Period. Winsock v2 was released 30 years ago, so unless you are targeting Windows 95, forget about v1 and any limitations related to it.

Well, since for now I’m with TServerSocket and friends, it’s the one that calls WSAStartup, and it initializes to v1. If I call it myself first, that might work, but the classes might break, ’cause certain things have definitely changed between major WinSock versions. For instance, ScktComp uses Winapi.Winsock, which defines SOMAXCONN as 5. Now if I’m the first to initialize WinSock in the application, I get a backlog of 5 clients instead of whatever the OS could choose. Some things also changed around Vista in the treatment of SO_REUSEADDR.

16 minutes ago, Remy Lebeau said: Indy does use a thread-per-client approach for its TCP servers.

So, for like 10’000 clients we get 10’000 threads, each taking quite a piece of resources, like memory and handles? And they start competing for the poor processor cores (let’s hope 8 or 16, if the end user doesn’t try to use a PC instead of real server hardware)?
Remy Lebeau 1421 Posted August 25 7 hours ago, Dmitry Onoshko said: they should’ve used some value like DWORD(–1) or MAXINT, or MAXLONG, or something like that in the first place MAXINT is 0x7fffffff. 7 hours ago, Dmitry Onoshko said: Well, since for now I’m with TServerSocket and his friends, it’s the one who calls WSAStartup and initializes to v1. Fair enough. 7 hours ago, Dmitry Onoshko said: If I call it myself to be the first one, it might work but the classes might break, ’cause certain stuff has definitely changed between major WinSock versions. True. If you initialize first to v2, TServerSocket will still negotiate to v1, but actually use v2, since Winsock can be initialized only once. 7 hours ago, Dmitry Onoshko said: So, for like 10’000 clients we get 10’000 threads each taking quite a piece of resources, like memory and handles? Yes. Indy is not intended for large servers with very high client loads. But most users don't need to write such large servers. There are better technologies for that purpose if you need them, which you appear to. So I wouldn't recommend any existing libraries; you should custom-tailor your socket code to meet your specific high performance/resource needs.
Dmitry Onoshko 0 Posted August 25 3 minutes ago, Remy Lebeau said: Yes. Indy is not intended for large servers with very high client loads. But most users don't need to write such large servers. There are better technologies for that purpose if you need it. Which you appear to need. So I wouldn't recommend any existing libraries, you should custom tailor your socket code to meet your specific high performance/resource needs. Well, I gave it a try at the beginning of the project, with hand-written IOCP-based overlapped sockets. But the problem of properly shutting them down when the user just invokes the destructor from a GUI thread, plus lack of spare time for brave experiments made me switch to something asynchronous at hand till better times. Now I feel the better times will have to come sooner than expected.
Remy Lebeau 1421 Posted August 25 (edited) 1 hour ago, Dmitry Onoshko said: Well, I gave it a try at the beginning of the project, with hand-written IOCP-based overlapped sockets. But the problem of properly shutting them down when the user just invokes the destructor from a GUI thread Not that hard, really. Common practice is to keep a counter of active I/O requests. During shutdown, set a flag to prevent new requests from starting, and cancel active requests still in progress. When requests complete, decrement the counter. Wait for the counter to fall to 0, then close the socket and exit. Probably not the best thing to do in a destructor in a GUI, though. I would have the GUI start the shutdown process and let it run asynchronously, then disable the UI and tell the user to wait, and then finish cleaning up the GUI only when all pending requests have finished. Edited August 25 by Remy Lebeau
Dmitry Onoshko 0 Posted August 25 (edited) 1 hour ago, Remy Lebeau said: Not that hard, really. Common practice is to keep a counter of active I/O requests. During shutdown, set a flag to prevent new requests from starting, and cancel active requests still in progress. When requests complete, decrement the counter. Wait for the counter to fall to 0, then close the socket and exit. Probably not the best thing to do in a destructor in a GUI, though. I would have the GUI start the shutdown process and let it run asynchronously, then disable the UI and tell the user to wait, and then finish cleaning up the GUI only when all pending requests have finished. Well, not destroying an object implicitly feels like a code smell. At least, it requires more attention when reviewing code for possible leaks and stuff. So, yes, stopping an overlapped socket is almost easy. Especially since the only way to guarantee pieces of TCP stream are processed in the right order is to have at most one outstanding Tx and Rx operation per socket. But then events come into play. Being notified of client disconnection is useful for bookkeeping, and this implies either synchronizing with the GUI thread (which is waiting for pending requests to finish in the destructor) or making an event that gets invoked from an arbitrary thread. The second way might not be as bad, really, but felt too complicated back then. I remember also trying to make the wait for pending operations alertable (so Synchronize works), but that also feels somewhat wrong. Frankly speaking, the ease of event handling provided by TServerSocket and TClientSocket and the simplicity of code, classes and documentation (as compared to Indy, in fact) led to me making the wrong choice. Although, at least I made a minimum viable “product” (sort of) with them. Just to see how bad they are for the task. Edited August 25 by Dmitry Onoshko
JonRobertson 72 Posted August 27 On 8/25/2024 at 2:28 AM, Dmitry Onoshko said: I understand it doesn’t (shouldn’t?) use a thread-per-client approach Thread-per-client does not scale well and likely would not work well for the number of connections that you've mentioned. Take a look at this answer, as well as the Raymond Chen post referenced. Maximum threads limit per process in windows 10? (superuser.com) Does Windows have a limit of 2000 threads per process? (The Old New Thing blog)
Remy Lebeau 1421 Posted August 27 (edited) On 8/25/2024 at 2:32 PM, Dmitry Onoshko said: Well, not destroying an object implicitly feels like a code smell. At least, it requires more attention when reviewing code for possible leaks and stuff. I wasn't suggesting you not destroy your objects. But a destructor is not always the best place to perform complex cleanups, especially during app shutdown. For instance, you could use the Form's OnClose/OnCloseQuery event to detect the user's desire to exit the app, signal your code to start cleaning up (close sockets, cancel I/Os, etc), and disable the UI so the user can't generate more traffic in the meantime. When the cleanup is actually finished (all I/O completions are reported, etc), then exit the app. Edited August 27 by Remy Lebeau
Dmitry Onoshko 0 Posted August 27 5 hours ago, Remy Lebeau said: On 8/26/2024 at 12:32 AM, Dmitry Onoshko said: Well, not destroying an object implicitly feels like a code smell. At least, it requires more attention when reviewing code for possible leaks and stuff. I wasn't suggesting you not destroy your objects. But a destructor is not always the best place to perform complex cleanups, especially during app shutdown. For instance, you could use the Form's OnClose/OnCloseQuery event to detect the user's desire to exit the app, signal your code to start cleaning up (close sockets, cancel I/Os, etc), and disable the UI so the user can't generate more traffic in the meantime. When the cleanup is actually finished (all I/O completions are reported, etc), then exit the app. Ah, sorry, I must have misunderstood your idea. What I tried to address in my previous post is the idea of asking a socket to shut down and letting it destroy itself automatically (without an explicit destructor call). Back to your point, requiring a particular method call before destroying the object feels like asking for trouble some time later. If one suddenly decides to call a destructor, it should clean up all the stuff without any conditions. As for calling a destructor only at a particular moment in time, like before application termination, changing network app settings at runtime is a thing, and this implies actually shutting down sockets while the app is still running, possibly with other sockets serving other ports/protocols/tasks but using the same IOCP and stuff. Disabling related UI parts while the restart takes place might be OK, though.