Page Access Speed Optimization (3): Server-Side Optimization


I. Overview

Server-side optimization is mainly achieved through message queues, reducing database requests (caching), concurrent processing, and static page generation.

II. Message Queue

1. The problem it solves

There are many message queue (Message Queue, MQ) implementations: RabbitMQ, ActiveMQ, RocketMQ. You can also use the task distribution system Gearman.

A message queue mainly solves asynchronous message delivery: a system that does not care how a task is handled simply announces that something happened, attaching some parameters, and the systems that care subscribe to it themselves. For example, callbacks that should run after a task completes can be implemented asynchronously with MQ.

2. How it works

Take RabbitMQ as an example. The RabbitMQ server acts as a relay platform between message producers and consumers. A producer publishes to an exchange on the RabbitMQ server, specifying a routing key at publish time. Following its routing rules, the server forwards the message from the exchange to the matching queue, and consumers then take messages off the queue and process them. The flow is: producer → exchange → (routing key) → queue → consumer.

To speed up processing and keep the queue from backing up, you can run multiple consumer processes that consume from the queue at the same time, as sketched below.
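As a rough illustration, here is a minimal producer/consumer sketch. It assumes a recent php-amqplib client (installed via Composer) and a local RabbitMQ server; the exchange, queue, and routing-key names are invented for the example.

```php
<?php
// Minimal RabbitMQ sketch with php-amqplib; names are invented for the example.
require_once __DIR__ . '/vendor/autoload.php';

use PhpAmqpLib\Connection\AMQPStreamConnection;
use PhpAmqpLib\Message\AMQPMessage;

$conn    = new AMQPStreamConnection('localhost', 5672, 'guest', 'guest');
$channel = $conn->channel();

// producer -> exchange -(routing key)-> queue -> consumer
$channel->exchange_declare('task_exchange', 'direct', false, true, false);
$channel->queue_declare('task_queue', false, true, false, false);
$channel->queue_bind('task_queue', 'task_exchange', 'task.created');

// Producer side: announce the event with some parameters and move on.
$msg = new AMQPMessage(
    json_encode(['task_id' => 42]),
    ['delivery_mode' => AMQPMessage::DELIVERY_MODE_PERSISTENT]
);
$channel->basic_publish($msg, 'task_exchange', 'task.created');

// Consumer side: run this loop in several worker processes to drain the queue.
$callback = function (AMQPMessage $m) {
    $payload = json_decode($m->getBody(), true);
    // ... process the task ...
    $m->ack(); // manual ack: the message is only removed after processing succeeds
};
$channel->basic_consume('task_queue', '', false, false, false, false, $callback);

while ($channel->is_consuming()) {
    $channel->wait();
}
```

RabbitMQ round-robins messages among consumers on the same queue, so starting several copies of the consumer loop (for example, under supervisord) is all that running "multiple processes" requires.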

3. Ensuring messages are consumed

RabbitMQ has a retry mechanism: if a message is not published successfully, it automatically retries sending it. If you also want to make sure the message is actually consumed, you can set up something like a TCP three-way handshake: require the consumer, once it has finished processing the message, to publish a confirmation message back to the producer.

The producer can record each sent message in a separate database table, and after receiving the consumer's confirmation, update the corresponding row's status.

At the same time, you can write a crontab job that periodically scans the table and re-publishes any record whose status has not been set to success within a predetermined time (e.g. 10 minutes).

The advantage of recording messages in a separate table is that as long as a row exists, the message was definitely published. When something goes wrong with a message, this makes it easy to tell whether the problem was in publishing or confirming the message, or in receiving and processing it.

Of course, this method requires the message receiver to be idempotent: no matter how many times the same message is delivered, it is processed only once.
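A rough sketch of the tracking table and the crontab rescan, assuming PDO with MySQL; the table, columns, and helper functions are invented for the example:

```php
<?php
// Crontab rescan sketch: re-publish rows not confirmed within 10 minutes.
// Hypothetical table: mq_outbox(id, payload, status, created_at)
//   status: 0 = published, awaiting confirmation; 1 = confirmed by consumer
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

$stale = $pdo->query(
    "SELECT id, payload FROM mq_outbox
     WHERE status = 0 AND created_at < NOW() - INTERVAL 10 MINUTE"
)->fetchAll(PDO::FETCH_ASSOC);

foreach ($stale as $row) {
    publishToMq($row['payload']); // hypothetical helper wrapping the producer code above
}

// When the consumer's confirmation message arrives, the producer marks the row done.
// Because a message may be re-published, the consumer must be idempotent:
// processing the same id twice must have the same effect as processing it once.
function confirm(PDO $pdo, int $id): void {
    $stmt = $pdo->prepare("UPDATE mq_outbox SET status = 1 WHERE id = ?");
    $stmt->execute([$id]);
}
```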

4. RabbitMQ routing keys

RabbitMQ supports several routing modes:

1) fanout (broadcast)

Every message is delivered to all bound queues; each consumer takes the messages it is interested in and discards the rest.

2) direct

The message is sent to the specific queue whose binding key exactly matches the routing key.

3) topic

Uses pattern matching to deliver messages to certain queues: for example, a.* delivers to all queues bound with keys beginning with a. (in topic exchanges, * matches exactly one word and # matches zero or more words; these are wildcards, not full regular expressions).
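A sketch of the three exchange types with php-amqplib; the names are invented, the queues are assumed to be declared as in the earlier sketch, and note that the pattern goes on the queue binding rather than the publish call:

```php
<?php
// Assumes $channel is an open php-amqplib channel and the queues already exist.
use PhpAmqpLib\Message\AMQPMessage;

// 1) fanout: every bound queue gets a copy; the routing key is ignored.
$channel->exchange_declare('logs_fanout', 'fanout', false, true, false);
$channel->queue_bind('audit_queue', 'logs_fanout');

// 2) direct: delivered only where the binding key matches exactly.
$channel->exchange_declare('logs_direct', 'direct', false, true, false);
$channel->queue_bind('error_queue', 'logs_direct', 'error');

// 3) topic: wildcard matching on dot-separated words
//    ('*' = exactly one word, '#' = zero or more words; not full regex).
$channel->exchange_declare('logs_topic', 'topic', false, true, false);
$channel->queue_bind('a_queue', 'logs_topic', 'a.*'); // a.created, a.deleted, ...
$channel->basic_publish(new AMQPMessage('hi'), 'logs_topic', 'a.created');
```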

III. Caching

1. The problem it solves

The main purpose of caching is to reduce database operations. A database request consumes I/O resources, whereas a cache lives in memory and is much faster.

So for data that is accessed frequently but does not need to be strictly real-time, caching can reduce the pressure on the database.

In addition, for data that changes frequently (such as article view counts) or is accessed heavily in a short time (e.g. a flash-sale system), caching is a good solution.

The commonly used caches are Redis and Memcache.

2. Differences between Memcache and Redis

Memcache is a pure cache: it stores nothing but key-value pairs.

Redis is more powerful: it supports five data structures (string, list, hash, set, sorted set), data persistence (AOF and snapshots), transactions, and sentinel monitoring, and through persistence it can temporarily exceed the memory limit.
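A quick tour of the five structures using the phpredis extension (assumed to be available):

```php
<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$redis->set('site:name', 'example');             // string
$redis->rPush('recent:articles', '101', '102');  // list
$redis->hSet('user:1', 'name', 'alice');         // hash
$redis->sAdd('article:1:likers', 'u1', 'u2');    // set
$redis->zAdd('ranking:views', 42, 'article:1');  // sorted set (score 42)
```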

3. Key naming

In general, use something like methodName:id as the key, which makes keys easier to find.
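For example, a read-through lookup keyed as methodName:id, sketched with phpredis; getArticleFromDb is a hypothetical database helper:

```php
<?php
function getArticle(Redis $redis, int $id): array {
    $key = "getArticle:{$id}";        // method name + id makes the key easy to find
    $cached = $redis->get($key);
    if ($cached !== false) {
        return json_decode($cached, true);
    }
    $article = getArticleFromDb($id); // hypothetical database lookup
    $redis->setex($key, 300, json_encode($article)); // cache for 5 minutes
    return $article;
}
```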

4. Common caching problems

Caches can suffer from cache penetration, cache avalanche, and cache breakdown.

1) Cache penetration

This happens when there are many queries for keys that do not exist: since empty results are normally not saved to the cache, these queries bypass the cache and hit the database directly.

Solution: for content that does not exist in the database, also cache the empty result for a short, randomized time, such as around three minutes, so the queries stop reaching the database.
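A sketch of caching the "not found" result for a short random time; queryDb is a hypothetical database helper:

```php
<?php
function findWithNullCaching(Redis $redis, string $key): ?array {
    $cached = $redis->get($key);
    if ($cached !== false) {
        return $cached === '' ? null : json_decode($cached, true);
    }
    $row = queryDb($key);            // hypothetical database lookup
    if ($row === null) {
        // Not in the database either: cache an empty marker for ~3 minutes
        // (slightly randomized) so these lookups stop hitting the database.
        $redis->setex($key, 180 + rand(0, 30), '');
        return null;
    }
    $redis->setex($key, 300, json_encode($row));
    return $row;
}
```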

2) Cache avalanche

When all keys are set with the same lifetime, they all expire at the same moment, and the database suddenly receives a flood of requests.

Solution: give different keys slightly randomized lifetimes, e.g. anywhere from 4 minutes 58 seconds to 5 minutes 3 seconds, so they do not all expire at once.
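Adding jitter to the expiry is a one-liner:

```php
<?php
// Spread expirations across 4m58s–5m03s instead of exactly 5 minutes.
$ttl = 300 + rand(-2, 3);
$redis->setex($key, $ttl, $value);
```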

3) Cache breakdown

A single key is hit by a large number of concurrent requests in a very short time; when the cache misses, all of those requests bypass the cache and go to the database for the data.

Solution: use a mutex. When a request finds the cache empty, it first tries to acquire a mutex by creating a lock key, using Redis's setnx or Memcache's add. The request that succeeds queries the database and stores the result in the cache.

This way, subsequent requests fail to create the lock key (it already exists), which means the data is locked; they wait a short random time and then retry the lock until they either acquire it or find the cache populated, which is usually the case by then. If the cache is still empty after acquiring the lock, they go to the database.
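A sketch of the mutex approach with phpredis; rebuildFromDb is hypothetical, and the lock gets its own short TTL so a crashed worker cannot hold it forever:

```php
<?php
function getHotKey(Redis $redis, string $key): ?array {
    for ($i = 0; $i < 10; $i++) {                // bounded retries
        $cached = $redis->get($key);
        if ($cached !== false) {
            return json_decode($cached, true);
        }
        // Try to take the mutex: SET lock NX EX 10 (atomic set-if-not-exists).
        if ($redis->set("lock:{$key}", 1, ['nx', 'ex' => 10])) {
            $value = rebuildFromDb($key);        // hypothetical database rebuild
            $redis->setex($key, 300, json_encode($value));
            $redis->del("lock:{$key}");
            return $value;
        }
        // Someone else holds the lock: wait a short random time, then re-check
        // the cache, which is usually populated by then.
        usleep(rand(50, 150) * 1000);
    }
    return null; // give up after bounded retries
}
```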

5. Cache eviction policy: LRU

When the cache holds too much content and exceeds the machine's memory, you need a policy to evict part of it. The most common is the LRU strategy: Least Recently Used.

A concrete implementation maintains the cache with a queue: when an entry is accessed, it is removed from its place in the queue and added to the head, so the entries to evict, at the tail, are the ones that have gone longest without being accessed.
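A minimal LRU sketch in PHP (7.3+), exploiting the fact that PHP arrays preserve insertion order, so the array itself acts as the queue:

```php
<?php
class LruCache {
    private $items = [];   // insertion order doubles as recency order
    private $capacity;

    public function __construct(int $capacity) {
        $this->capacity = $capacity;
    }

    public function get(string $key) {
        if (!array_key_exists($key, $this->items)) {
            return null;
        }
        // Move the entry to the tail (most recently used position).
        $value = $this->items[$key];
        unset($this->items[$key]);
        $this->items[$key] = $value;
        return $value;
    }

    public function set(string $key, $value): void {
        unset($this->items[$key]);
        $this->items[$key] = $value;
        if (count($this->items) > $this->capacity) {
            // Evict the head: the least recently used entry.
            unset($this->items[array_key_first($this->items)]);
        }
    }
}
```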

Problem:

When a burst of accesses to many different keys arrives at once, it pollutes the queue and causes data that is still needed to be evicted.

Solutions:

1) Use two queues: on first access, data is placed in the first queue; only when it is accessed again is it moved to the second queue, and the first queue is cleaned by the normal LRU rule.

2) Use multiple weighted queues: important content, or content that is accessed often, goes into a higher-level cache queue, while rarely used content goes into a lower-level queue.

IV. Concurrent Processing

For concurrent processing in PHP, you can use the Swoole framework, which can control concurrent consumption of content. For example, rendering one page may require data from several different systems; the fetches can run asynchronously in parallel, and the results are aggregated at the end.

I am not very familiar with the Swoole framework yet; I will share this part after I have studied it.
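For reference, a rough sketch of the fan-out/aggregate idea, assuming Swoole 4's coroutine API with short names enabled (the default); fetchFromSystemA/B and render are hypothetical:

```php
<?php
// Assumes the Swoole 4 extension; the two fetches run in concurrent coroutines.
\Swoole\Coroutine\run(function () {
    $chan = new \Swoole\Coroutine\Channel(2);

    go(function () use ($chan) {
        $chan->push(['a' => fetchFromSystemA()]); // hypothetical remote call
    });
    go(function () use ($chan) {
        $chan->push(['b' => fetchFromSystemB()]); // hypothetical remote call
    });

    // Aggregate once both results have arrived.
    $page = array_merge($chan->pop(), $chan->pop());
    render($page); // hypothetical page renderer
});
```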

V. Static Pages

Nginx cannot process PHP itself: when it encounters a PHP file, it forwards the request to php-fpm, whereas html, js, css and similar files it can serve back to the browser directly.

So for pages where the front end and back end are not completely separated, you can use static generation: for content that rarely changes, on the first visit to the PHP file, render its output into an html file and set an expiry time.

On later visits, the file's creation time on Linux is used to decide whether it has expired; until the expiry time is reached, Nginx can return the corresponding html file directly.
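A sketch of the PHP side; the paths, TTL, and renderArticlePage are invented for the example, and Nginx's try_files can be pointed at the cache directory so fresh html files never reach PHP at all:

```php
<?php
// Static-page sketch: serve a generated html file while it is fresh,
// otherwise rebuild it.
$cacheFile = __DIR__ . '/static/article_42.html';
$ttl = 3600; // regenerate at most once an hour

if (file_exists($cacheFile) && time() - filemtime($cacheFile) < $ttl) {
    readfile($cacheFile);  // still fresh: return the stored html directly
    exit;
}

// Expired or missing: render the page, store it, and return it.
ob_start();
renderArticlePage(42);     // hypothetical function that echoes the page
$html = ob_get_clean();
file_put_contents($cacheFile, $html);
echo $html;
```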


Origin: www.cnblogs.com/zmdComeOn/p/11704925.html