How to optimize server performance?

1. Use an in-memory database

An in-memory database operates on data held directly in RAM. Compared with disk, memory read and write speeds are several orders of magnitude faster, so keeping data in memory rather than on disk can greatly improve application performance. In-memory databases abandon the traditional disk-based approach to data management and redesign the architecture around keeping all data in memory, with corresponding improvements in data caching, fast algorithms, and parallel operation, so they process data much faster than conventional databases. Their biggest flaw, however, is durability: memory naturally loses its contents on power failure, so in-memory databases are usually deployed with protective mechanisms such as backups, logging, hot standby or clustering, and synchronization with a disk database. The takeaway: for data that is not critical but must respond to user requests quickly, consider keeping it in an in-memory database while periodically persisting it to disk.
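
A minimal sketch of this idea, assuming Redis as the in-memory store and the redis-py client (neither is named in the original text): hot data is served from memory while the dataset is periodically persisted to disk.

```python
import redis

# Connect to a local Redis instance (assumed host/port; adjust to your deployment).
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Serve hot data from memory.
r.set("session:42", "user_data_blob")
value = r.get("session:42")

# Ask Redis to write its in-memory dataset to disk in the background,
# so a power loss does not wipe everything (RDB snapshot).
r.bgsave()
```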

2. Use RDDs (Spark)

For applications that process large amounts of data, Spark can be used to speed up data processing. The core of Spark is the RDD, which originates from the Berkeley lab paper "Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing". Existing data-flow systems handle two kinds of applications inefficiently: iterative algorithms, which are common in machine learning and graph applications, and interactive data mining tools. In both cases, keeping data in memory can greatly improve performance.
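
A minimal PySpark sketch of how caching an RDD in memory helps an iterative algorithm (the file path and iteration count are illustrative assumptions):

```python
from pyspark import SparkContext

sc = SparkContext(appName="rdd-cache-demo")

# Load the dataset once and keep it in memory across iterations.
points = (
    sc.textFile("hdfs:///data/points.txt")
      .map(lambda line: [float(x) for x in line.split()])
      .cache()
)

# An iterative job re-reads the same RDD many times; without cache()
# every iteration would go back to disk.
for _ in range(10):
    total = points.map(lambda p: sum(p)).reduce(lambda a, b: a + b)

sc.stop()
```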

3. Add a cache

Many web applications serve a lot of static content, mostly small files that are read frequently, with Apache or Nginx as the web server. Under light load, both HTTP servers are fast and efficient; under heavy load, a front-end cache server can be set up so that static resource files are served from system memory rather than from disk, since reading data directly from memory is far faster than reading it from the hard disk. In effect, this spends memory to reduce the time lost to disk access.
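
As a rough illustration of the idea (not the Nginx/Apache cache configuration itself, which the original does not show), a small application-level cache can keep hot static files in memory after the first disk read; the file path here is made up:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def read_static_file(path: str) -> bytes:
    # The first request for a path hits the disk; subsequent requests
    # for the same path are served straight from memory.
    with open(path, "rb") as f:
        return f.read()

# First call reads from disk, later calls return the cached bytes.
body = read_static_file("static/logo.png")
```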

4. Use SSDs

Besides memory, the disk side can also be optimized. Compared with traditional mechanical hard drives, solid-state drives offer faster reads and writes, lighter weight, lower power consumption, and smaller size. SSDs are more expensive than mechanical drives, but where the budget allows, an SSD can be used in place of a mechanical drive.

5. Optimize the database

Most server requests ultimately hit the database, and as the amount of data grows, database access becomes slower and slower. To keep request handling fast, the original single table has to be split up. Most current Linux servers use MySQL, and once the number of rows stored in a single MySQL table reaches the millions, queries become very slow. Partitioning the database and splitting tables according to suitable business rules can effectively improve database access speed and overall server performance. In addition, based on the queries the business actually issues, indexes on time columns and other relevant fields can be created as needed to improve query speed.
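
A minimal sketch of the two ideas above, splitting a large table by a business key and indexing a time column. The driver (pymysql), table names, and sixteen-way split are all assumptions made for illustration:

```python
import pymysql  # assumed MySQL driver; any DB-API connection works the same way

conn = pymysql.connect(host="localhost", user="app", password="secret", database="shop")
cur = conn.cursor()

# Route each row to one of 16 sub-tables based on the business key,
# so no single table grows into the tens of millions of rows.
def order_table(user_id: int) -> str:
    return f"orders_{user_id % 16}"

user_id = 12345
cur.execute(
    f"INSERT INTO {order_table(user_id)} (user_id, amount, created_at) "
    "VALUES (%s, %s, NOW())",
    (user_id, 99.0),
)

# Index the time column that queries filter on, so range scans stay fast.
cur.execute("CREATE INDEX idx_orders_0_created_at ON orders_0 (created_at)")

conn.commit()
```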

6. Select the appropriate IO model

The common I/O models are:

(1) Blocking I/O model: before the data arrives, the I/O call blocks, and it only returns once the data is available. recvfrom is the typical example; sockets are blocking by default.

(2) Non-blocking I/O model: the opposite of blocking. When the result is not yet available, the I/O call returns immediately instead of blocking the current thread.

(3) I/O multiplexing model (the part most worth studying): "multiplexing" means handling several channels over one combined path, like joining multiple pipes together, as opposed to demultiplexing. I/O multiplexing is mainly implemented with select, poll, and epoll. For a single I/O port it costs two calls and two returns, so by itself it has no advantage over blocking I/O; the key point is that it can monitor many I/O ports at the same time. These functions still block the process, but unlike blocking I/O they can wait on many I/O operations at once: they watch multiple descriptors for readability or writability and only invoke the real I/O function once there is data ready to read or write. A sketch appears after this list.

(4) Signal-driven I/O model: first enable signal-driven I/O on the socket and install a signal handler via the sigaction system call. When a datagram is ready to be read, the kernel sends a SIGIO signal to the process; the signal handler can then either call recvfrom to read the datagram and tell the main loop that the data is ready, or simply notify the main loop so that it reads the datagram itself.

(5) Asynchronous I/O model: tell the kernel to start an operation and have the kernel notify us only after the entire operation has completed, including copying the data from the kernel buffer into the user's own buffer.

This does not mean any one model must always be used; even epoll does not outperform select in every situation, so choose the model according to the actual business requirements.
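
A minimal sketch of the I/O multiplexing model in Python, using the standard-library selectors module (which uses epoll on Linux where available); the port number is an arbitrary example:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # picks epoll on Linux when available

def accept(server_sock):
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(4096)
    if data:
        conn.sendall(data)      # echo back what was read
    else:
        sel.unregister(conn)    # peer closed the connection
        conn.close()

server = socket.socket()
server.bind(("0.0.0.0", 8080))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

# One thread monitors many sockets; the real read/write is performed
# only for descriptors that are actually ready.
while True:
    for key, _ in sel.select():
        key.data(key.fileobj)
```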

7. Use a multi-core processing strategy

Mainstream server machines now ship with multi-core CPUs, so a server can be designed to take advantage of multiple cores with a multi-process or multi-threaded architecture. Whether to choose multiple threads or multiple processes depends on the actual requirements, weighing the advantages and disadvantages of each. With multi-threading in particular, a thread pool can be used, and an appropriate pool size can be chosen by performance-testing the server with different pool sizes.
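
A minimal Python sketch of the thread-pool idea. The worker function and pool size are illustrative; in practice the size would be chosen by benchmarking, and CPU-bound work in Python would typically use a process pool instead because of the GIL:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id: int) -> str:
    # Placeholder for real request handling (I/O, database calls, ...).
    return f"handled {request_id}"

# Pool size chosen arbitrarily here; tune it with load tests on the target server.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(handle_request, range(100)))
```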

Origin blog.51cto.com/14540004/2441025