Sticky Sessions vs Non-Sticky Sessions

Non-Sticky Session

Amazon EC2’s Elastic Load Balancing feature just became a bit more powerful. Up until now each load balancer had the freedom to forward each incoming HTTP or TCP request to any of the EC2 instances under its purview. This resulted in a reasonably even load on each instance, but it also meant that each instance would have to retrieve, manipulate, and store session data for each request without any possible benefit from locality of reference.

Suppose two separate web browsers each request three separate web pages in turn. Each request can go to any of the EC2 instances behind the load balancer.

This configuration was the original idea behind the manager, and its implementation was documented here. Its main features are the following:

  • The non-sticky configuration produces the fairest (most even) balancing of requests.
  • Every session is managed by whichever server receives a request for it. The session is kept only transiently: its attributes exist in the local copy only while the request is alive in the container, and once the session is unlocked the attributes are cleared.
  • Error conditions in the repository are detected immediately (the session is always read and written; no caching is done).
  • This configuration requires the server to lock and unlock sessions, and those operations represent some performance degradation (more operations are performed, and more complex ones); see the sketch after this list.
  • Locking can produce odd behavior if the application sends several requests at the same time or in parallel (typically applications that make heavy use of web services). In normal applications (one request, one page) this effect is negligible.
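As a rough illustration of the per-request cost described above, a non-sticky manager must lock the session in the shared repository, load its attributes, let the request run, write the attributes back, and release the lock; nothing is kept locally afterwards. The following is a minimal, hypothetical sketch of that cycle in Java; the SessionRepository interface and its methods are assumptions made for this example, not the actual manager's API.

```java
import java.util.Map;

// Hypothetical interface to the external session repository (e.g. a database
// or memcached); not the real manager's API, just the operations it implies.
interface SessionRepository {
    void lock(String sessionId);                                  // acquire a distributed lock
    Map<String, Object> load(String sessionId);                   // read all attributes
    void save(String sessionId, Map<String, Object> attributes);  // write them back
    void unlock(String sessionId);                                // release the lock
}

class NonStickyRequestCycle {
    private final SessionRepository repository;

    NonStickyRequestCycle(SessionRepository repository) {
        this.repository = repository;
    }

    /** Every request pays the full lock / load / save / unlock round trip. */
    void handle(String sessionId, RequestHandler handler) {
        repository.lock(sessionId);                     // serialize concurrent requests for this session
        try {
            Map<String, Object> attributes = repository.load(sessionId); // always re-read
            handler.process(attributes);                // application logic mutates the attributes
            repository.save(sessionId, attributes);     // always write back
        } finally {
            repository.unlock(sessionId);               // local copy is discarded after this
        }
    }

    interface RequestHandler {
        void process(Map<String, Object> sessionAttributes);
    }
}
```

Because the attributes are re-read and re-written on every request, a repository failure surfaces immediately, but the extra lock/load/save/unlock round trips are exactly where the performance cost mentioned above comes from.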

When a particular request reaches a given EC2 instance, the instance must retrieve information about the user from state data that must be stored globally. There’s no opportunity for the instance to cache any data, since the odds that several requests from the same user / browser will reach the same instance go down as more instances are added to the load balancer.

Sticky Session


With the new sticky session feature, it is possible to instruct the load balancer to route repeated requests to the same EC2 instance whenever possible.
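Conceptually, cookie-based stickiness just means the load balancer remembers which instance a given session cookie was first routed to and keeps sending requests that carry that cookie back to the same instance while it remains in service. The sketch below is a deliberately simplified, hypothetical model of that routing decision, not how ELB is actually implemented; the class, instance ids, and the round-robin fallback are assumptions for illustration only.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified model of cookie-based sticky routing: requests carrying a known
// stickiness cookie go back to the instance they were pinned to; everything
// else is balanced round-robin.
class StickyRouter {
    private final List<String> instances;                         // e.g. EC2 instance ids
    private final Map<String, String> cookieToInstance = new ConcurrentHashMap<>();
    private final AtomicInteger roundRobin = new AtomicInteger();

    StickyRouter(List<String> instances) {
        this.instances = instances;
    }

    /** cookieValue is null on the first request, before the balancer has set a cookie. */
    String route(String cookieValue) {
        if (cookieValue != null) {
            String pinned = cookieToInstance.get(cookieValue);
            if (pinned != null && instances.contains(pinned)) {
                return pinned;                                     // sticky hit
            }
        }
        // First request, or the pinned instance is gone: fall back to round-robin.
        String chosen = instances.get(Math.floorMod(roundRobin.getAndIncrement(), instances.size()));
        if (cookieValue != null) {
            cookieToInstance.put(cookieValue, chosen);             // pin this cookie from now on
        }
        return chosen;
    }
}
```

On the very first request (no cookie yet) the router simply round-robins; in a real balancer the response would then set the stickiness cookie so that subsequent requests can be pinned.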

The implementation of the sticky configuration is documented in detail in this post. Features of this configuration:

  • A sticky configuration is simpler and, therefore, faster. In this kind of configuration the manager can assume it is the only one managing and modifying the session, which saves some accesses to the external repository (see the sketch after this list).
  • A sticky configuration means each application server stores only the sessions whose requests are routed to it. In a normal (no-failure) situation a server knows nothing about the sessions in the other containers.
  • The sticky configuration saves operations against the repository, but it needs to keep the whole session in memory (because the session is not re-read, the saved attributes must be maintained locally).
  • Normally a sticky configuration produces a less fair balancing of the requests (but fair enough, in my humble opinion).
  • Because the session is cached in the local server, an error in the repository may not be detected as quickly as in the non-sticky configuration.
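Under those assumptions the manager can keep the session in local memory, skip locking entirely, and simply push the attributes to the repository at the end of each request, which is also why a repository failure is noticed later than in the non-sticky case. Here is a minimal, hypothetical counterpart to the earlier non-sticky sketch, reusing the assumed SessionRepository and RequestHandler interfaces from that example.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sticky variant: the server assumes it is the only one touching its sessions,
// so it keeps them in a local cache and writes to the repository without locking.
class StickyRequestCycle {
    private final SessionRepository repository;
    private final Map<String, Map<String, Object>> localSessions = new ConcurrentHashMap<>();

    StickyRequestCycle(SessionRepository repository) {
        this.repository = repository;
    }

    void handle(String sessionId, NonStickyRequestCycle.RequestHandler handler) {
        // Loaded from the repository only on a cache miss (e.g. after a failover).
        Map<String, Object> attributes =
                localSessions.computeIfAbsent(sessionId, id -> new HashMap<>(repository.load(id)));
        handler.process(attributes);               // application logic mutates the cached copy
        repository.save(sessionId, attributes);    // backup write at end of request, no lock/unlock
    }
}
```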

There are two main downsides:

  1. Your load isn't evenly distributed. Sticky sessions stick, hence the name. While initial requests are distributed evenly, some users inevitably spend far more time on the site than others; if a disproportionate number of those long-lived sessions were initially assigned to a single server, that server will carry much more load. Typically this doesn't have a huge impact, and it can be mitigated by adding more servers to the cluster.

  2. Proxies aggregate many users behind a single IP address, and with IP-based affinity all of them get sent to a single server. While that typically does no harm beyond increasing that server's load, proxies can also operate as a cluster: a request arriving at your F5 from such a system will not necessarily be sent back to the same server if it comes out of a different proxy server in that proxy cluster (see the toy example after this list).
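The proxy issue in point 2 is easiest to see with source-IP affinity, which balancers such as F5 offer alongside cookie-based persistence. If affinity keys on the client address, every user behind a corporate proxy shares one key and therefore one backend. The toy example below is purely illustrative; the addresses and instance names are made up.

```java
import java.util.List;

// Toy illustration of source-IP affinity: all users behind one proxy IP map
// to the same backend, so that backend receives a disproportionate share.
class IpAffinityExample {
    static String pickBackend(String clientIp, List<String> backends) {
        return backends.get(Math.floorMod(clientIp.hashCode(), backends.size()));
    }

    public static void main(String[] args) {
        List<String> backends = List.of("i-aaa", "i-bbb", "i-ccc");
        // 1000 employees all appear to come from the corporate proxy's address.
        String proxyIp = "203.0.113.10";
        System.out.println("1000 proxied users -> " + pickBackend(proxyIp, backends));
        // Home users with distinct IPs still spread across the backends.
        for (String ip : List.of("198.51.100.7", "198.51.100.8", "198.51.100.9")) {
            System.out.println(ip + " -> " + pickBackend(ip, backends));
        }
    }
}
```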

The following article explains ELB sticky sessions:

Elastic Load Balancing with Sticky Sessions 


Reposted from yuanhsh.iteye.com/blog/2191934