Blocking and Non-Blocking Sockets
In this post, we’re finally back onto some netcode stuff. We’re going to look at the differences between blocking and non-blocking sockets. Both types are usually provided by the socket implementation that comes with your OS, or by the standard library of your chosen language. They may be wrapped into a single interface that simply operates in different modes (blocking or non-blocking), or exposed as two separate interfaces, one dedicated to each.
Blocking Sockets
This is your typical TCP socket and it’s what you will find most of the time when looking for tutorials and examples of netcode. If documentation or an article doesn’t specifically say its sockets are non-blocking, it will almost certainly be referring to the blocking variety, simply because they are easier to work with.
Simply put, blocking sockets will block the current thread until the read or write operation has finished. A read operation will block until there is data to read (or, for calls that demand an exact amount, until that many bytes have arrived). A write operation will block until the specified bytes have been successfully written.
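To make that concrete, here’s a minimal sketch of a blocking read in Java. The host, port and the length-prefixed packet format are just assumptions for illustration, not anything from a real protocol.

```java
import java.io.DataInputStream;
import java.io.IOException;
import java.net.Socket;

public class BlockingReadExample {
    public static void main(String[] args) throws IOException {
        // Host, port and the length-prefixed packet format are assumptions.
        try (Socket socket = new Socket("localhost", 5000)) {
            DataInputStream in = new DataInputStream(socket.getInputStream());
            // readInt() blocks this thread until 4 bytes have arrived.
            int length = in.readInt();
            byte[] payload = new byte[length];
            // readFully() blocks until the entire payload has been read.
            in.readFully(payload);
            System.out.println("Received a " + length + "-byte packet");
        }
    }
}
```

Both calls sit there and wait, which is exactly what makes this style so easy to follow and so awkward to scale.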
However, it is not quite as straightforward as that. Socket implementations have their own internal buffers, so the thread only blocks when a read or write operation cannot be satisfied by the internal buffer (because it is empty or full, depending on the operation). Most of the time a write operation will never block because there will be space in the internal output buffer (unless you’re sending large amounts of data, such as transmitting a file). The socket implementation automatically sends whatever is in the internal output buffer and accumulates incoming data in the internal input buffer as it arrives. So, if the data your game netcode is expecting has already arrived in the internal input buffer, the subsequent read operation will not block; it will read that data straight from the buffer and return instantly.
This is why we’ve been dealing with threads up until this point. In order to make use of blocking sockets, you’ll need to run at least one thread per socket. This isn’t necessarily an issue on a client because you’ll only ever have one or two open sockets to the server at a time. However, on a server that is required to handle many concurrent clients, the thread overhead can begin to build up.
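As a rough sketch, a thread-per-client server in Java might look like this (the port number and the fixed-size read buffer are assumptions):

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class ThreadPerClientServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(5000)) {
            while (true) {
                // accept() blocks until a client connects.
                Socket client = server.accept();
                // One thread per client, so one client's blocking reads
                // can't stall anyone else.
                new Thread(() -> handleClient(client)).start();
            }
        }
    }

    private static void handleClient(Socket client) {
        try (Socket c = client) {
            InputStream in = c.getInputStream();
            byte[] buffer = new byte[1024];
            int read;
            // read() blocks this thread until data arrives or the client disconnects.
            while ((read = in.read(buffer)) != -1) {
                // ... deserialise and process `read` bytes here ...
            }
        } catch (IOException e) {
            // Client dropped; let the thread end.
        }
    }
}
```

Every connected client costs you a thread, which is fine for a handful of players but adds up on a busy server.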
Of course, as with everything in the world of software development, there are exceptions. For instance, you can check the number of available bytes to make sure everything you need is already in the input buffer before invoking a read operation, so it never blocks (see the sketch below). You can do similar checks for write operations. This works well for simple netcode and will allow you to run the netcode for all your clients in a single thread, or even in the main thread (although this is viewed as extremely bad practice), without any major issues. However, with more complex netcode, especially variable-sized data packets, it quickly becomes difficult to manage. If you find yourself doing this to handle multiple clients in a single thread, you should really stop, because you are reinventing the wheel. What you should be using here are non-blocking sockets.
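For example, with Java’s blocking streams that check could look something like this. The fixed 16-byte packet size is purely an assumption to keep the sketch simple.

```java
import java.io.DataInputStream;
import java.io.IOException;

// Only read when the whole (hypothetical) 16-byte packet is already buffered,
// so this never blocks even though the socket itself is in blocking mode.
public class AvailabilityCheck {
    private static final int PACKET_SIZE = 16; // assumption: fixed-size packets

    public static boolean tryReadPacket(DataInputStream in, byte[] packet) throws IOException {
        // available() reports how many bytes can be read without blocking.
        if (in.available() >= PACKET_SIZE) {
            in.readFully(packet, 0, PACKET_SIZE);
            return true;
        }
        return false; // not enough buffered yet; check again next tick
    }
}
```

As soon as your packets stop being a fixed size, this kind of bookkeeping gets messy fast, which is the point being made above.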
Non-Blocking Sockets
This type of socket is very similar to the blocking kind, with one difference: every operation performed on a non-blocking socket is expected to return immediately, so it never blocks the current thread. This changes absolutely everything about working with them. You can’t just read from a non-blocking socket and then deserialise the data, because the read may return nothing at all (and it won’t wait until there is something). Instead, you need to create your own buffer, read data into it, and only deserialise once you have everything you need.
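Here’s a minimal sketch of that accumulate-then-deserialise pattern using Java NIO. The host, port and 16-byte packet size are assumptions, and a real game loop would be doing other work instead of spinning on the channel.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class NonBlockingRead {
    private static final int PACKET_SIZE = 16; // assumption: fixed-size packets

    public static void main(String[] args) throws IOException {
        SocketChannel channel = SocketChannel.open();
        channel.configureBlocking(false);
        channel.connect(new InetSocketAddress("localhost", 5000));
        while (!channel.finishConnect()) {
            // Connection still in progress; a real game loop would do other work here.
        }

        ByteBuffer buffer = ByteBuffer.allocate(4096);
        // Keep accumulating until a whole packet has arrived.
        while (buffer.position() < PACKET_SIZE) {
            // read() returns immediately: 0 if nothing is available yet,
            // -1 if the connection was closed, otherwise the bytes copied in.
            if (channel.read(buffer) == -1) {
                throw new IOException("Connection closed");
            }
        }
        buffer.flip();
        // ... deserialise the packet from `buffer` here ...
    }
}
```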
And yes, I know you should be reading blocking sockets into your own buffers as well if you want to deserialise any kind of complex data packet, but there are other complexities to handle too. When writing to a blocking socket, if its internal output buffer is full (due to lots of data or poor bandwidth), the write operation simply blocks, which means you can throw everything you have at it and it will wait until it all eventually goes through. This is not the case with a non-blocking socket: you have to check how many bytes were actually written and remove only those bytes from your own buffer before the next write.
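One way to handle that (a sketch, assuming Java NIO and a single outgoing buffer per connection) is to let ByteBuffer.compact() keep whatever the channel didn’t accept:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

// Outgoing data is accumulated in our own buffer; whatever the channel
// doesn't accept stays in the buffer for the next flush.
public class OutgoingBuffer {
    private final ByteBuffer outgoing = ByteBuffer.allocate(64 * 1024);

    public void queue(byte[] data) {
        outgoing.put(data); // append a serialised packet
    }

    public void flush(SocketChannel channel) throws IOException {
        outgoing.flip();
        // write() returns how many bytes were actually accepted; it can be
        // zero if the socket's internal output buffer is full.
        channel.write(outgoing);
        // compact() drops what was written and keeps the unwritten remainder
        // at the front, ready for more queued data and the next flush.
        outgoing.compact();
    }
}
```

The flip/write/compact dance is doing exactly the bookkeeping described above: only the bytes the socket took are removed, the rest waits for the next attempt.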
Why would you opt for a socket type that is clearly more difficult to handle? Well, as with most things that are harder, there are benefits. Knowing that, no matter what you do, the socket won’t block the current thread has its advantages: checks can be removed, assumptions can be made. Once you design your netcode to handle instant returns of zero data, you’ll realise you can run all your sockets on a single thread. This is what I was referring to at the end of the section on blocking sockets, and it is excellent when it comes to writing game servers. However, it gets better: you can make use of selectors.
Selectors
These handy utility classes allow you to query all the objects registered with them in a single operation. I’m not entirely sure if there are different types, but the selectors I’ve worked with in Java and C++ are specifically designed for use with sockets and IO streams.
Once your sockets are registered, you can query the selector for all streams that are ready for IO. If none are, the select operation will block until at least one stream is ready; otherwise it returns instantly. After select is called, you can retrieve the list of ready streams. In C++ you can select the read-ready and write-ready streams separately, but in Java you need to check whether each ready state refers to reading, writing, or both. Either way, you get a list of streams and the state they are in, which allows you to process all the reads and writes in one go without blocking. Once complete, you just loop back round and start at the select again.
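Here’s a rough sketch of that loop using Java NIO’s Selector. The port and buffer size are assumptions, and per-client buffering and deserialisation are elided to keep it short.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();

        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(5000));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(4096);

        while (true) {
            // Blocks until at least one registered channel is ready.
            selector.select();

            Iterator<SelectionKey> keys = selector.selectedKeys().iterator();
            while (keys.hasNext()) {
                SelectionKey key = keys.next();
                keys.remove();

                if (key.isAcceptable()) {
                    // New client: register its channel for read readiness.
                    SocketChannel client = server.accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    buffer.clear();
                    int read = client.read(buffer);
                    if (read == -1) {
                        key.cancel();
                        client.close();
                    } else {
                        buffer.flip();
                        // ... append to that client's own buffer and deserialise ...
                    }
                }
            }
        }
    }
}
```

Every client, plus the listening socket itself, is serviced by this one loop on one thread.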
Now, I’ve done some light benchmarking to compare the two approaches: one socket per thread versus all sockets on a single thread using a selector. All I can tell you is that the selector approach is blazingly fast and, combined with the lower thread overhead and being much easier to manage on a single thread, it’s the one I very much prefer these days. It makes up the core of the netcode in our Solitude game server, and when I’ve profiled it a couple of times looking for bottlenecks (which existed elsewhere), it didn’t even register a microsecond with multiple clients connected.
Conclusion
Non-blocking sockets are definitely better than their blocking brethren, but they are a little trickier to set up and use. However, once you have your net framework done, you don’t really ever have to worry about it again. Of course, the benefits you get only really apply to server applications. We still use a blocking socket on its own thread in the client because there is absolutely no point in changing it now. There will be zero benefit.
Anyway, that pretty much wraps up this post. As always, if you want to chat about any of the topics or issues I’ve raised you can either comment below, catch me on IRC (on the navigation bar click Community→Chat) or send a tweet to @Jargon64.
Thanks for reading! 🙂