Architecture III: introducing a local cache and a distributed cache

1. Brief description

In fact, introducing a distributed cache at this stage is a little premature. Early on I mainly used a local cache, and the technology I used was Ehcache. The cached data lives in the memory of the application server itself, in the same JVM your application runs in. The big problem is that this is not suitable for long-term storage: once the data volume grows, the cache occupies a large share of your server's memory. Distributed caches such as Memcached and Redis came later; I mainly use Redis.
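To make the trade-off concrete, here is a minimal sketch of what a local in-JVM cache does (this is illustrative code, not the Ehcache API): every entry, TTL included, sits on the application server's own heap, which is exactly why large, long-lived data sets are a poor fit.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a local (in-JVM) cache with per-entry TTL.
// Every entry lives in the application server's own heap memory.
public class LocalTtlCache<K, V> {
    private static class Entry<V> {
        final V value;
        final long expiresAtMillis;
        Entry(V value, long expiresAtMillis) {
            this.value = value;
            this.expiresAtMillis = expiresAtMillis;
        }
    }

    private final Map<K, Entry<V>> store = new ConcurrentHashMap<>();

    public void put(K key, V value, long ttlMillis) {
        store.put(key, new Entry<>(value, System.currentTimeMillis() + ttlMillis));
    }

    public V get(K key) {
        Entry<V> e = store.get(key);
        if (e == null) return null;
        if (System.currentTimeMillis() > e.expiresAtMillis) {
            store.remove(key); // lazy eviction: expired entries are dropped on read
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) {
        LocalTtlCache<String, String> cache = new LocalTtlCache<>();
        cache.put("user:1", "alice", 60_000);
        System.out.println(cache.get("user:1")); // prints "alice"
    }
}
```

A real Ehcache configuration additionally bounds the heap usage and can overflow to disk; the point of the sketch is only that the storage is per-server, which a distributed cache avoids.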
A Redis-based distributed cache brings a series of problems of its own, such as cache consistency, cache penetration, cache breakdown, cache avalanche, and the simultaneous expiration of hot data. I will write a follow-up post on solutions to these problems.
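As a preview of those follow-up solutions, here is an illustrative sketch (names are mine, not a real library) of two common mitigations: caching "not found" results so repeated lookups for missing keys stop hammering the database (cache penetration), and adding random jitter to TTLs so a batch of entries does not expire at the same instant (cache avalanche).

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ThreadLocalRandom;
import java.util.function.Function;

// Illustrative sketch of two mitigations, using an in-memory map as a
// stand-in for Redis. Optional.empty() marks a key known to be absent
// in the database (a "cached miss").
public class CacheGuards {
    private final Map<String, Optional<String>> cache = new ConcurrentHashMap<>();

    // Penetration mitigation: misses are cached too, so the second lookup
    // for a nonexistent key never reaches the database.
    public String get(String key, Function<String, String> dbLookup) {
        Optional<String> hit = cache.get(key);
        if (hit != null) {
            return hit.orElse(null); // served from cache, even for a cached miss
        }
        String fromDb = dbLookup.apply(key); // null when the row does not exist
        cache.put(key, Optional.ofNullable(fromDb));
        return fromDb;
    }

    // Avalanche mitigation: entries written together get slightly different
    // TTLs, spreading their expirations out over time.
    public static long jitteredTtlMillis(long baseMillis, long maxJitterMillis) {
        return baseMillis + ThreadLocalRandom.current().nextLong(maxJitterMillis + 1);
    }
}
```

In production the cached miss would carry a short TTL of its own, so a key that is later inserted into the database becomes visible again quickly.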

2. A flowchart

[Flowchart image from the original post, not reproduced here]

3. Question

Even with the cache absorbing most requests, as the user base grows, the remaining concurrent pressure still falls on Tomcat, and responses become very slow. I have not fully understood Tomcat's concurrency limits: many articles online say Tomcat supports about 150 concurrent requests by default and that this can be raised to around 250 per second. I would really like to verify those numbers myself, so I studied it a little.
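For reference, the limit in question is the HTTP connector's thread pool, set with the `maxThreads` attribute in Tomcat's `conf/server.xml`. A minimal fragment, using the 250 figure from the text above (note that Tomcat's documented default for `maxThreads` is 200, so the "150" figure is worth verifying against your version's documentation):

```xml
<!-- conf/server.xml: the HTTP connector's thread pool bounds request concurrency -->
<Connector port="8080" protocol="HTTP/1.1"
           maxThreads="250"
           acceptCount="100"
           connectionTimeout="20000" />
```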

4. Optimizations

  • Add a local cache on the Tomcat server, inside the same JVM as the application

  • Add an external distributed cache

  • Cache hot data and static HTML pages

    By caching the most frequently requested data, reads and writes are intercepted before they reach the database, which can significantly improve the application's response speed.
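The three bullets above combine into a layered read path: local cache first, then the shared distributed cache, and the database only as a last resort. A sketch, using plain in-memory maps as stand-ins (in practice the second tier would be a Redis client, not a `Map`):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of the tiered lookup described in the optimization list.
// Tier 1: local in-JVM cache (fastest, per-server).
// Tier 2: distributed cache shared by all servers (stand-in for Redis).
// Tier 3: the database, hit only when both caches miss.
public class TieredReader {
    private final Map<String, String> localCache = new ConcurrentHashMap<>();
    private final Map<String, String> distributedCache; // stand-in for Redis
    private final Function<String, String> database;    // stand-in for a DB query

    public TieredReader(Map<String, String> distributedCache,
                        Function<String, String> database) {
        this.distributedCache = distributedCache;
        this.database = database;
    }

    public String read(String key) {
        String v = localCache.get(key);       // tier 1: same JVM
        if (v != null) return v;
        v = distributedCache.get(key);        // tier 2: shared across servers
        if (v == null) {
            v = database.apply(key);          // tier 3: the database
            if (v != null) distributedCache.put(key, v); // populate on the way back
        }
        if (v != null) localCache.put(key, v);
        return v;
    }
}
```

After the first read of a hot key, subsequent reads on the same server never leave the JVM, and reads from other servers stop at the distributed cache; this is how most database traffic gets intercepted.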

Origin blog.csdn.net/weinichendian/article/details/103823643