JAVA learning experience: an overview of JAVA solutions for high concurrency, cache breakdown, cache avalanche, and distributed systems

High concurrency solutions:

  1. Caching: Redis in-memory database, Nginx static page cache
  2. Asynchronous processing: message middleware, multi-threading
  3. Concurrent programming:
  4. Distributed:
  5. Database: the database layer can use read/write splitting (appropriate only when reads dominate, roughly 80% reads to 20% writes), split databases and tables (sharding), and run in cluster mode to handle high concurrency (avoid relying on the database's own cache; caching is generally done in the service layer, see the routing sketch after this list)
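Point 5 mentions read/write splitting at the database layer. A minimal sketch of how the routing side is often wired up in a Java/Spring stack, assuming Spring's AbstractRoutingDataSource and two data sources registered under the keys "read" and "write" (the class name and keys here are illustrative):

```java
import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;

/**
 * Routes read traffic to a replica and writes to the primary. The thread-local
 * flag would typically be set by an AOP aspect before each DAO call.
 */
public class ReadWriteRoutingDataSource extends AbstractRoutingDataSource {

    private static final ThreadLocal<String> CURRENT = ThreadLocal.withInitial(() -> "write");

    public static void markRead()  { CURRENT.set("read");  }
    public static void markWrite() { CURRENT.set("write"); }
    public static void clear()     { CURRENT.remove();     }

    @Override
    protected Object determineCurrentLookupKey() {
        // Key into the targetDataSources map configured elsewhere ("read" / "write").
        return CURRENT.get();
    }
}
```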

Cache data consistency:

  1. Near real-time: use a message queue (MQ) to propagate updates
  2. Cache expiration: set an expiration time so stale entries age out
  3. Strong consistency: update Redis inside the add/delete/modify business logic (see the sketch after this list)
  4. Scheduled task: refresh the cached data on a schedule
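A minimal sketch of point 3, assuming Spring Data Redis's StringRedisTemplate; the entity name, key format, and the placeholder database call are illustrative. The common ordering is to write the database first, then evict (or rewrite) the cache entry:

```java
import org.springframework.data.redis.core.StringRedisTemplate;

public class OrderCacheSync {

    private final StringRedisTemplate redis;

    public OrderCacheSync(StringRedisTemplate redis) {
        this.redis = redis;
    }

    /** The add/delete/modify business path: database first, then drop the cached copy. */
    public void updateOrder(long orderId, String newState) {
        updateDatabase(orderId, newState);     // 1. persist the change
        redis.delete("order:" + orderId);      // 2. evict the stale entry; the next read reloads it
    }

    private void updateDatabase(long orderId, String newState) {
        // Placeholder for the real DAO / MyBatis / JPA call.
    }
}
```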

Cache penetration: the requested data exists in neither the cache nor the database, so malicious requests go straight through to the database

  1. Cache empty objects: for keys that do not exist, cache a null/empty placeholder in Redis so repeated requests hit the cache instead of the database
  2. Condition filtering: with auto-increment IDs, reject anything < 0 outright; with UUIDs, reject keys of the wrong length
  3. Bloom filter: keep a large bitmap of all existing keys (for example in Redis) and, before querying, check whether the key might exist (see the sketch after this list)
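The notes describe a Redis-backed structure; as a simpler in-process sketch of the same idea, here is Guava's BloomFilter (a Redis-side variant would use something like Redisson's RBloomFilter). The expected size and false-positive rate are illustrative:

```java
import java.nio.charset.StandardCharsets;

import com.google.common.hash.BloomFilter;
import com.google.common.hash.Funnels;

public class ExistingKeyFilter {

    // Sized for one million keys with a ~1% false-positive rate.
    private final BloomFilter<String> filter =
            BloomFilter.create(Funnels.stringFunnel(StandardCharsets.UTF_8), 1_000_000, 0.01);

    /** Register every key that really exists (at warm-up and on every insert). */
    public void register(String key) {
        filter.put(key);
    }

    /** A key the filter has never seen definitely does not exist: reject it before Redis/DB. */
    public boolean mightExist(String key) {
        return filter.mightContain(key);
    }
}
```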

Cache avalanche: a large number of keys expire at the same time, so the requests all fall through to the database

  1. Stagger expiration times so keys do not all expire at once, e.g. by adding a random offset to each TTL (see the sketch after this list)
  2. Data warm-up: before the burst of traffic arrives, load the relevant data into Redis in advance
  3. Cache never expires: some hot, important keys are given no expiration at all
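A minimal sketch of point 1 (staggering expiration with a random offset), assuming Spring Data Redis; the base TTL and jitter window are illustrative:

```java
import java.time.Duration;
import java.util.concurrent.ThreadLocalRandom;

import org.springframework.data.redis.core.StringRedisTemplate;

public class StaggeredCacheWriter {

    private static final Duration BASE_TTL = Duration.ofMinutes(30);

    private final StringRedisTemplate redis;

    public StaggeredCacheWriter(StringRedisTemplate redis) {
        this.redis = redis;
    }

    /** Adds 0-300 seconds of jitter so warmed-up keys do not all expire in the same instant. */
    public void put(String key, String value) {
        long jitterSeconds = ThreadLocalRandom.current().nextLong(0, 300);
        redis.opsForValue().set(key, value, BASE_TTL.plusSeconds(jitterSeconds));
    }
}
```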

Cache breakdown: a hot key is accessed at very high frequency; the moment it expires, a flood of concurrent requests hits the database

  1. Never expire: the most effective approach is to never let these hot keys expire
  2. Mutex: the request that holds the lock for the hot key queries the database while the others wait; once the cache is repopulated they read the value directly from Redis (see the sketch after this list)
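A minimal sketch of the mutex approach in point 2, assuming Spring Data Redis's StringRedisTemplate; the lock key naming, TTLs, retry delay, and the placeholder loadFromDb() are illustrative, and a production version would also have to avoid deleting a lock that has already expired and been re-acquired by another caller:

```java
import java.time.Duration;

import org.springframework.data.redis.core.StringRedisTemplate;

public class HotKeyCache {

    private final StringRedisTemplate redis;

    public HotKeyCache(StringRedisTemplate redis) {
        this.redis = redis;
    }

    public String get(String key) {
        String value = redis.opsForValue().get(key);
        if (value != null) {
            return value;                                    // normal cache hit
        }
        String lockKey = "lock:" + key;
        // SET NX with a TTL: only one caller wins the lock and rebuilds the entry.
        Boolean locked = redis.opsForValue().setIfAbsent(lockKey, "1", Duration.ofSeconds(10));
        if (Boolean.TRUE.equals(locked)) {
            try {
                value = loadFromDb(key);                     // single trip to the database
                redis.opsForValue().set(key, value, Duration.ofMinutes(30));
                return value;
            } finally {
                redis.delete(lockKey);
            }
        }
        // Lost the race: back off briefly, then read whatever the winner cached.
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return get(key);
    }

    private String loadFromDb(String key) {
        return "value-from-db";                              // placeholder database lookup
    }
}
```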

Distributed transactions:

  1. Redis-based solution:
  2. Zookeeper-based solution:

Nginx high availability:

  1. LVS + Keepalived
  2. DNS load balancing: one domain name resolves to multiple IP addresses

The difference between distributed systems and microservices:

  1. Each microservice owns its own independent database

Techniques for pushing information from the server to the front end:

  1. Short polling: use a JavaScript setInterval timer to send Ajax requests periodically (e.g., JD.com checking login status, or polling after payment before jumping to the order page);
    Implementation: the browser sets up the timed task in JS
  2. Long polling: after the client sends a request, the server holds it open until there is a result, then responds, and the client immediately requests again (e.g., scrolling news headlines).
    Implementation: the browser re-invokes the polling function once a response arrives; on the server side, use the DeferredResult provided since Spring 3 (see the sketch after this list).
  3. SSE long connection: EventSource supports automatic reconnection after the connection drops.
    Implementation: the browser uses the EventSource API; see the video for the server-side changes
  4. WebSocket:
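A minimal server-side sketch of the long-polling approach in point 2, using Spring MVC's DeferredResult; the endpoint path, 30-second timeout, and the publish() trigger are illustrative:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
public class NewsPollController {

    // Requests parked here until the server has something to push.
    private final ConcurrentLinkedQueue<DeferredResult<String>> waiting = new ConcurrentLinkedQueue<>();

    /** Long-poll endpoint: the response is held open, and "timeout" is returned after 30 s. */
    @GetMapping("/news/poll")
    public DeferredResult<String> poll() {
        DeferredResult<String> result = new DeferredResult<>(30_000L, "timeout");
        result.onCompletion(() -> waiting.remove(result));
        waiting.add(result);
        return result;
    }

    /** Called by whatever produces the news; completes every parked request at once. */
    public void publish(String headline) {
        waiting.forEach(r -> r.setResult(headline));
    }
}
```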

High concurrency reference figures:

  1. Redis: 100,000+ reads and 80,000+ writes per second
  2. MySQL: roughly 300 per second on a mechanical hard disk, 700 on a solid-state drive
  3. Tomcat: 300-500 requests on average, around 1,000 at the limit
  4. MySQL maximum number of connections: 300-700
  5. Nginx page caching can stand in for CDN technology, but a CDN is a paid service

Dubbo issues:

  1. Each call returns at most 100 records by default; returning more requires changing Dubbo's configuration
  2. Serialization issues: objects passed across the interface must be declared explicitly serializable (see the DTO sketch after this list)
  3. Compiled-class version issues: Dubbo class files compiled on different machines can end up with inconsistent versions, and the call then fails
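For point 2, the usual fix is to make every object crossing the Dubbo interface explicitly serializable; a minimal illustrative DTO (the class and fields here are made up):

```java
import java.io.Serializable;

/** A DTO passed across a Dubbo reference: explicit Serializable plus a fixed
 *  serialVersionUID keeps serialization stable between provider and consumer builds. */
public class UserDTO implements Serializable {

    private static final long serialVersionUID = 1L;

    private Long id;
    private String name;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}
```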

Origin blog.csdn.net/penggerhe/article/details/108253787