Contents
- High concurrency solution:
- Cache data consistency:
- Cache penetration: requests for data that is not in the cache (often malicious) go straight to the database
- Cache avalanche: a large amount of cached data expires at the same time, so requests go straight to the database
- Cache breakdown: a hot key is accessed at very high frequency; the moment it expires, a flood of concurrent requests hits the database
- Distributed transaction:
- Nginx high availability:
- The difference between distributed and microservices:
- Techniques for pushing information from the server to the front end
- High concurrency:
- Dubbo issues:
High concurrency solution:
- Cache: redis memory database, Nginx static page cache
- Asynchronous: message middleware, multithreading
- Concurrent programming:
- Distributed:
- Database: read-write splitting (provided reads dominate, e.g. roughly 80% reads / 20% writes), splitting databases and tables, and cluster mode all help reach high concurrency; try not to use database-level caching, and do the caching at the service layer instead
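The read/write split above can be sketched as a tiny router that sends writes to the primary and spreads reads across replicas (a minimal Python sketch; `primary` and `replicas` are just placeholder labels, not a real driver API):

```python
import random

class ReadWriteRouter:
    """Route SQL statements: writes to the primary, reads to replicas."""

    def __init__(self, primary, replicas):
        self.primary = primary
        self.replicas = replicas

    def route(self, sql):
        # Writes must hit the primary; reads can be spread across replicas.
        op = sql.strip().split()[0].upper()
        if op in ("INSERT", "UPDATE", "DELETE"):
            return self.primary
        return random.choice(self.replicas)
```

In practice this routing usually lives in middleware (e.g. a proxy or the data-access layer) rather than application code.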
Cache data consistency:
- Quasi real-time: propagate changes through a message queue (MQ)
- Cache expiration: set a TTL so stale entries age out
- Strong consistency: update redis inside the add/delete/modify business logic itself
- Scheduled task: periodically sync the cache with the database
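The strong-consistency option is commonly implemented cache-aside: the write path updates the database and then invalidates (rather than rewrites) the cached entry, so the next read repopulates it. A minimal sketch, with plain dicts standing in for mysql and redis:

```python
db = {}     # stands in for the database
cache = {}  # stands in for redis

def update_item(key, value):
    """Write path: update the database first, then invalidate the cache
    so the next read repopulates it (cache-aside)."""
    db[key] = value
    cache.pop(key, None)  # delete rather than write, to reduce stale races

def get_item(key):
    """Read path: try the cache, fall back to the database and backfill."""
    if key in cache:
        return cache[key]
    value = db.get(key)
    if value is not None:
        cache[key] = value
    return value
```

Deleting instead of rewriting on update avoids one class of race where two concurrent writers leave an old value in the cache.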
Cache penetration: requests for data that is not in the cache (often malicious) go straight to the database
- Cache empty objects: store the non-existent key in redis with a null placeholder value (ideally with a short TTL)
- Condition filtering: with auto-increment ids, requests with id < 0 can be rejected outright; with UUIDs, the length can be validated before querying
- Bloom filter: record every valid key in a Bloom filter (a compact bitmap); before querying, check whether the key can possibly exist
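A Bloom filter can be sketched with a plain bitmap and a few hash functions (the sizes below are arbitrary; a real deployment would use redis bitmaps or a library such as RedisBloom):

```python
import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = 0  # a Python int used as a bit array

    def _positions(self, key):
        # Derive `hashes` bit positions from the key.
        for i in range(self.hashes):
            digest = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        # False means "definitely absent": safe to reject without touching the DB.
        # True means "probably present": false positives are possible.
        return all(self.bits >> p & 1 for p in self._positions(key))
```

The asymmetry is the point: a negative answer is certain, so penetrating requests for keys that never existed are stopped before reaching redis or the database.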
Caching avalanche: a large amount of cached data expires at the same time, so requests go straight to the database
- Stagger expirations: try not to let many keys expire at the same moment
- Data warm-up: before a traffic spike, load the relevant data into redis in advance
- Cache never expires: some hot, important keys simply never expire
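Staggering expirations is usually done by adding random jitter to every TTL, so keys cached at the same moment do not all expire in the same instant (a minimal sketch; the base TTL and jitter window are arbitrary choices):

```python
import random

BASE_TTL = 3600  # one hour, in seconds

def ttl_with_jitter(base=BASE_TTL, jitter=300):
    # Spread expirations across a window of `jitter` seconds
    # so a batch of keys cached together does not expire together.
    return base + random.randint(0, jitter)
```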
Cache breakdown: a hot key is accessed at very high frequency; the moment it expires, a flood of concurrent requests hits the database
- Never expire: the most effective approach is to let these hot keys never expire
- Mutex: the one request that wins the lock queries the database to rebuild the hot key while the rest queue; once it is rebuilt, they read it directly from redis
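The mutex approach amounts to double-checked locking: only the first thread rebuilds the expired hot key, and the others re-check the cache after acquiring the lock (a dict stands in for redis, and `load_from_db` is a placeholder for the real query):

```python
import threading

cache = {}               # stands in for redis
lock = threading.Lock()  # in a cluster this would be a distributed lock

def load_from_db(key):
    # Placeholder for the actual (expensive) database query.
    return f"value-for-{key}"

def get_hot_key(key):
    value = cache.get(key)
    if value is not None:
        return value
    with lock:                   # only one thread rebuilds the key
        value = cache.get(key)   # re-check: another thread may have filled it
        if value is None:
            value = load_from_db(key)
            cache[key] = value
    return value
```

Across multiple service instances the same pattern needs a distributed lock (e.g. redis SET NX with an expiry) instead of a process-local `threading.Lock`.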
Distributed transaction:
- Redis-based solution:
- ZooKeeper-based solution:
Nginx high availability:
- LVS + Keepalived
- DNS load balancing: one domain name resolves to multiple IP addresses
The difference between distributed and microservices:
- In microservices, each service has its own independent database
Techniques for pushing information from the server to the front end
- Short polling: use a js setInterval timer task to send ajax requests (e.g. JD.com login, or JD.com payment before jumping to the order page)
  - Solution: the browser sets a js timer task
- Long polling: after the client requests, the server holds the request until there is a result, then responds; the client immediately requests again (e.g. the Toutiao scrolling feed)
  - Solution: the browser calls the polling function again once a response arrives; the server side uses the DeferredResult provided by Spring 3
- SSE long connection: EventSource, which supports reconnecting after a disconnect
  - Solution: the browser uses the EventSource API; see the video for the server-side changes
- WebSocket:
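Server-side long polling boils down to holding the request until an event arrives or a timeout fires; a minimal Python sketch using a blocking queue in place of Spring's DeferredResult:

```python
import queue

events = queue.Queue()  # stands in for the server-side event source

def long_poll(timeout=30):
    """Hold the 'request' until an event arrives or the timeout expires;
    the client immediately re-issues the request after each response."""
    try:
        return events.get(timeout=timeout)
    except queue.Empty:
        return None  # the client re-polls after an empty response too
```

With DeferredResult the idea is the same: the servlet thread is released, and the held response is completed later when the event arrives or the timeout elapses.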
High concurrency:
- redis: reads 100,000+ per second, writes 80,000+ per second
- mysql: roughly 300 queries per second on a mechanical hard disk, 700 on a solid-state drive
- tomcat: 300-500 requests per second on average, about 1,000 at the limit
- mysql maximum connections: roughly 300-700
- Nginx page caching can stand in for a CDN, but a CDN is a paid service
Dubbo issues:
- Each call returns at most 100 records; returning more requires changing Dubbo's configuration
- Serialization issues: the serialization version must be declared explicitly
- Compiled-class version issues: Dubbo class files compiled on different machines may have inconsistent versions and fail to invoke each other
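In Dubbo the serialization fix means declaring `serialVersionUID` explicitly on the Java DTO, so provider and consumer fail fast on a mismatch instead of mis-decoding. As an illustration of the same fail-fast idea (a hypothetical Python sketch; `UserDTO` and its `VERSION` field are invented for the example):

```python
import pickle

class UserDTO:
    # Explicit schema version, playing the same role as Java's
    # serialVersionUID: mismatched provider/consumer versions fail fast.
    VERSION = 1

    def __init__(self, name):
        self.version = self.VERSION
        self.name = name

def decode(payload):
    obj = pickle.loads(payload)
    if getattr(obj, "version", None) != UserDTO.VERSION:
        raise ValueError("incompatible DTO version")
    return obj
```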