Spring WebFlux differences when Netty vs Tomcat is used under the hood

gstackoverflow :

I am learning Spring WebFlux and I've read the following series of articles (first, second, third).

In the third article I came across the following text:

Remember the same application code runs on Tomcat, Jetty or Netty. Currently, the Tomcat and Jetty support is provided on top of Servlet 3.1 asynchronous processing, so it is limited to one request per thread. When the same code runs on the Netty server platform that constraint is lifted, and the server can dispatch requests sympathetically to the web client. As long as the client doesn’t block, everyone is happy. Performance metrics for the netty server and client probably show similar characteristics, but the Netty server is not restricted to processing a single request per thread, so it doesn’t use a large thread pool and we might expect to see some differences in resource utilization. We will come back to that later in another article in this series.

First of all, I don't see a newer article in the series, even though this one was written in 2016. It is clear to me that Tomcat has 100 threads by default for handling requests and that one thread handles one request at a time, but I don't understand the phrase "it is limited to one request per thread". What does it mean?

Also, I would like to know how Netty works for that concrete case (I want to understand the difference with Tomcat). Can it handle 2 requests per thread?

Brian Clozel :

When using Servlet 2.5, Servlet containers will assign a request to a thread until that request has been fully processed.
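
To make that concrete, here is a minimal sketch of the thread-per-request model, assuming the classic javax.servlet API (the servlet name and the simulated delay are purely illustrative):

```java
import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Thread-per-request: the container thread that enters doGet()
// is tied up for the whole request, including this simulated slow work.
public class BlockingServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        try {
            Thread.sleep(2000); // the server thread simply waits here
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        resp.getWriter().write("done"); // blocking write on the same server thread
    }
}
```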

When using Servlet 3.0 async processing, the server can dispatch the request processing in a separate thread pool while the request is being processed by the application. However, when it comes to I/O, work always happens on a server thread and it is always blocking. This means that a "slow client" can monopolize a server thread, since the server is blocked while reading/writing to that client with a poor network connection.
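
A rough sketch of what that looks like with the Servlet 3.0 async API (the URL mapping, pool size and simulated work are hypothetical):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Servlet 3.0 async processing: the container thread returns right away,
// and an application-managed pool completes the request later.
@WebServlet(urlPatterns = "/async", asyncSupported = true)
public class AsyncServlet extends HttpServlet {

    private final ExecutorService appPool = Executors.newFixedThreadPool(10);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync(); // detach from the container thread
        appPool.submit(() -> {
            try {
                Thread.sleep(2000); // slow work happens off the container thread
                ctx.getResponse().getWriter().write("done");
            } catch (Exception e) {
                // error handling omitted for brevity
            } finally {
                ctx.complete(); // tell the container we are done
            }
        });
    }
}
```

Note that the write at the end is still blocking I/O on a server-managed thread, which is exactly why a slow client can still monopolize a thread in this model.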

With Servlet 3.1, async I/O is allowed, and in that case the "one request/thread" model no longer holds. At any point, a bit of the request processing can be scheduled on a different thread managed by the server.
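
With Servlet 3.1 the container can instead call the application back only when data can actually be read or written. A sketch of a non-blocking read, again assuming the javax.servlet 3.1 API (the mapping and the buffer handling are illustrative):

```java
import java.io.IOException;

import javax.servlet.AsyncContext;
import javax.servlet.ReadListener;
import javax.servlet.ServletInputStream;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Servlet 3.1 non-blocking read: no thread sits blocked waiting on a slow
// client; the container invokes the listener only when data is available.
@WebServlet(urlPatterns = "/nio", asyncSupported = true)
public class NonBlockingReadServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        AsyncContext ctx = req.startAsync();
        ServletInputStream in = req.getInputStream();
        in.setReadListener(new ReadListener() {
            @Override
            public void onDataAvailable() throws IOException {
                byte[] buf = new byte[1024];
                // consume only what can be read without blocking
                while (in.isReady() && in.read(buf) != -1) {
                    // process the chunk here
                }
            }

            @Override
            public void onAllDataRead() throws IOException {
                ctx.getResponse().getWriter().write("done");
                ctx.complete();
            }

            @Override
            public void onError(Throwable t) {
                ctx.complete();
            }
        });
    }
}
```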

Servlet 3.1+ containers offer all those possibilities with the Servlet API. It's up to the application to leverage async processing or non-blocking I/O. In the case of non-blocking I/O, the paradigm shift is significant and it's really challenging to use.

With Spring WebFlux - Tomcat, Jetty and Netty don't have the exact same runtime model, but they all support reactive backpressure and non-blocking I/O.
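
For comparison, a minimal Spring WebFlux endpoint (assuming a Spring Boot WebFlux setup; the path and the delay are just illustrative). The same controller runs unchanged on Netty, Tomcat or Jetty, and because it returns a Mono, no thread is parked while the delay elapses:

```java
import java.time.Duration;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Mono;

// A WebFlux controller: the framework subscribes to the returned Mono and
// writes the response when the value arrives, without holding a thread.
@RestController
public class GreetingController {

    @GetMapping("/greeting")
    public Mono<String> greeting() {
        return Mono.just("hello")
                   .delayElement(Duration.ofSeconds(2)); // simulated slow, non-blocking work
    }
}
```

On Netty, requests like this are multiplexed over a small number of event-loop threads, so a single thread can be making progress on many requests at once, as long as nothing running on it blocks.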
