Website Performance Analysis

First, website performance from different perspectives

How ordinary users see website performance

For the average user, website performance manifests most directly as response time: the time from when the browser sends a request until the server's response is fully displayed, which is what the user intuitively feels as the site's speed.
Website developers usually understand performance differently from ordinary users.
The performance a user perceives is not determined by the web server alone. The total time also includes communication between the client machine and the server, the web server's processing of the request, and the browser's parsing and rendering of the response. In fact, differences in client hardware, in how fast different browsers parse HTML, and in the bandwidth offered by different network operators can make the response time a user perceives far greater than the time the server spends processing the request.

How developers see website performance

Developers are mainly concerned with the performance of the server-side application itself and its supporting systems, measured by technical metrics such as concurrency, stability, and response latency.
The main optimization techniques include using caches to speed up data reads, using clusters to increase throughput, using asynchronous messaging to speed up request handling, and optimizing code to improve program performance.

How operations staff see website performance

Operations staff are mainly concerned with server infrastructure and resource utilization, such as server hardware configuration, network operator bandwidth, and data center network architecture. Their main optimization techniques are buying cost-effective servers, building and optimizing the backbone network, and using virtualization to improve resource utilization.


Second, performance metrics

From a developer's perspective, the main website performance metrics are concurrency and response time.

Concurrency

For a web server, concurrency is the number of requests the system can handle at the same time; the number of concurrent users is the number of users submitting requests to the site simultaneously.
Related figures are the number of online users (users currently logged in) and the total number of site users (generally the number of registered users). Their relationship is usually: total site users > online users > concurrent users.

Response time

Response time is the most important performance metric; it directly reflects how fast the system is.

Common response times for system operations:

Operation                                            Response time
Open a web page                                      a few seconds
Query an indexed database record                     about 10 ms
Seek (head positioning) on a mechanical disk         4 ms
Read 1 MB sequentially from a mechanical disk        2 ms
Read 1 MB sequentially from an SSD                   0.3 ms
Read data from a remote distributed cache (Redis)    0.5 ms
Read 1 MB from memory                                about 10 µs
Send 2 KB over the network                           1 µs

Third, performance optimization

For developers, website performance optimization typically falls into three categories: web front-end optimization, application server optimization, and storage optimization.


Web front-end performance optimization

1. Reduce HTTP requests. HTTP is a stateless application-layer protocol, meaning every HTTP request requires establishing a communication link and transmitting data, and on the server side every request needs a separate thread to handle it. Reducing the number of HTTP requests therefore improves access performance.
The main techniques for reducing HTTP requests are merging CSS files, merging JavaScript files, and combining images.
2. Use browser caching. A website's static resources, such as CSS, JavaScript, logos, and icons, are updated relatively infrequently, yet almost every HTTP request needs them. If these files are cached in the browser, performance improves noticeably. By setting the Cache-Control and Expires HTTP headers, you can enable browser caching and customize how long resources stay cached.
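As a minimal sketch of the headers mentioned above, the helper below builds a Cache-Control value (the modern mechanism) and a matching Expires date (the HTTP/1.0 fallback); the function name and the 7-day lifetime are illustrative choices, not part of any framework:

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def cache_headers(max_age_days: int) -> dict:
    """Build HTTP headers that let the browser cache a static resource."""
    expires = datetime.now(timezone.utc) + timedelta(days=max_age_days)
    return {
        # Cache-Control takes precedence in modern browsers; max-age is in seconds.
        "Cache-Control": f"public, max-age={max_age_days * 86400}",
        # Expires is the older fallback header, as an RFC 1123 date in GMT.
        "Expires": format_datetime(expires, usegmt=True),
    }

headers = cache_headers(7)
print(headers["Cache-Control"])  # public, max-age=604800
```

In practice these headers are usually set in the web server configuration (nginx, Apache) rather than in application code, but the values look the same.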
3. Enable compression. Compressing files on the server side and decompressing them in the browser effectively reduces the amount of data transmitted. Where possible, also merge external scripts and stylesheets into as few files as possible. Text files compress very well, often by 80% or more, so enabling GZip for HTML, CSS, and JavaScript gives good results. Compression does, however, put some load on both the server and the browser, so when network bandwidth is plentiful but server resources are tight, this trade-off must be weighed.
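The 80%-plus figure is easy to check: repetitive markup like HTML compresses extremely well. The snippet below gzips a synthetic HTML-like payload (the content is made up purely for the demonstration):

```python
import gzip

# A repetitive HTML-like payload; real pages compress similarly well.
html = ("<div class='item'><span>product</span></div>\n" * 500).encode("utf-8")
compressed = gzip.compress(html)

ratio = 1 - len(compressed) / len(html)
print(f"original: {len(html)} bytes, gzipped: {len(compressed)} bytes")
print(f"saved: {ratio:.0%}")  # well above 80% for repetitive text
```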
4. Put CSS at the top of the page and JavaScript at the bottom. The browser renders the page only after it has downloaded all the CSS, so the best practice is to place CSS at the top of the page, letting the browser download it as early as possible. JavaScript is the opposite: the browser executes a script immediately after loading it, which can block rendering of the whole page and slow down display, so scripts are best placed at the bottom of the page.


Application Server Optimization

The application server is where the site's business logic runs; the site's business code is deployed here. The main optimization approaches are caching, asynchronous processing, clustering, and code optimization.
1. Use caching appropriately
When a site hits a performance bottleneck, the first remedy is usually caching. Throughout a web application, caches appear almost everywhere: on the client, on the application server, and on the database server. Data and files exchanged between client and server can all be cached, so using caches well is central to site performance optimization.
Caches are generally used for data that is read far more often than it is written and changes little, such as home-page content and product information. When the application reads data, it usually reads from the cache first; only if the data is missing or expired does it go to the database on disk, writing the result back into the cache.
The basic principle of caching is to keep data in a storage medium with relatively fast access, such as memory. Besides faster access, if the cached data is the result of a computation, caching also saves the server the time of recomputing it.
Caching is not without drawbacks: memory is a relatively scarce resource, so it is impossible to cache all data, and frequently modified data is generally a poor fit for caching because it can lead to inconsistency between cache and store.
Website data access generally follows the 80/20 rule: 80% of accesses hit 20% of the data. Caching that 20% can therefore markedly improve system performance and read efficiency.
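The read path described above (check the cache, fall back to the database on a miss or expiry, then refill the cache) is the classic cache-aside pattern. Below is a minimal in-process sketch; `db` stands in for a slow backing store and the plain dict for something like Redis, and the keys and TTL are illustrative:

```python
import time

# Stand-in for a slow database; in reality this would be a SQL query.
db = {"product:1": {"name": "keyboard", "price": 199}}
cache: dict = {}
TTL_SECONDS = 60.0

def get(key: str):
    """Cache-aside read: try the cache first, fall back to the database."""
    entry = cache.get(key)
    if entry is not None and time.monotonic() < entry["expires"]:
        return entry["value"]                      # cache hit: fast path
    value = db.get(key)                            # cache miss: slow path
    cache[key] = {"value": value,                  # refill the cache
                  "expires": time.monotonic() + TTL_SECONDS}
    return value

print(get("product:1"))  # first call misses and loads from db
print(get("product:1"))  # second call is served from the cache
```

The expiry check is what keeps stale data from living in the cache forever; the inconsistency risk mentioned above is why frequently written data should bypass this path.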
2. Asynchronous processing
Making calls asynchronous through a message queue can improve site performance.
Without a message queue, user requests write directly to the database; under high concurrency this puts great pressure on the database, and response times may suffer.
With a message queue, a request's data is first sent to the message queue server, and the response returns quickly; a separate consumer process then drains the queue and writes the data to the database asynchronously. Because the message queue accepts writes far faster than the database does, the user's response latency improves.
A message queue also acts as a buffer: transactional messages generated during a short burst of high concurrency are stored in the queue, raising the site's capacity to absorb concurrent load. During promotions on e-commerce sites, sensible use of message queues can withstand short spikes of very high user concurrency.
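A minimal in-process sketch of this producer/consumer split is below. In production the queue would be an external broker (RabbitMQ, Kafka, etc.) and the "database" a real datastore; both names here are stand-ins:

```python
import queue
import threading

orders = queue.Queue()
database = []  # stand-in for the slow datastore

def db_writer():
    """Consumer: drains the queue and persists each order off the request path."""
    while True:
        order = orders.get()
        if order is None:           # sentinel value: shut down
            break
        database.append(order)      # the slow write happens asynchronously
        orders.task_done()

worker = threading.Thread(target=db_writer)
worker.start()

# Producer: the request handler just enqueues and returns immediately.
for i in range(5):
    orders.put({"order_id": i, "item": "phone"})

orders.put(None)   # tell the worker to stop
worker.join()
print(len(database))  # 5
```

The key property is that `orders.put()` returns almost instantly, so the user-facing response does not wait for the database write.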
3. Use clusters
Under high concurrent load, load-balancing technology can be used to build a cluster of multiple servers for one application. Concurrent access requests are distributed across the servers in the cluster, so no single server is overloaded into slow responses.
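The simplest distribution policy a load balancer can apply is round robin. The sketch below is purely illustrative (the server names are placeholders; a real balancer such as nginx or HAProxy also performs health checks and connection draining):

```python
import itertools

# Backends behind the balancer; names are placeholders.
servers = ["app-1", "app-2", "app-3"]
rotation = itertools.cycle(servers)

def pick_server() -> str:
    """Return the next backend in round-robin order."""
    return next(rotation)

# Six requests are spread evenly over the three backends.
assignments = [pick_server() for _ in range(6)]
print(assignments)  # ['app-1', 'app-2', 'app-3', 'app-1', 'app-2', 'app-3']
```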
4. Code optimization
The site's business logic code is deployed on the application server and must handle complex concurrent transactions; optimizing it sensibly also improves site performance.
Any website faces concurrent access by multiple users, and a large site may reach tens of thousands of concurrent users. Handling each request with an independent system process would be far too expensive. Because threads are lighter-weight than processes and consume fewer resources, mainstream web application servers use multiple threads to handle concurrent user requests, so most web development is multi-threaded programming.
Another reason to use multiple threads is that servers have many CPU cores: phones have already reached the 8-core era, and a server generally has at least 16 cores. To make full use of the CPUs, multiple threads must be started.
So how many threads should be started?
The appropriate number of threads is proportional to the number of CPU cores and to the share of time tasks spend waiting on IO. For compute-bound tasks, the number of threads should not exceed the number of CPU cores, since any extra threads would only sit waiting for CPU time. For tasks that wait on disk reads and writes or network responses, starting more threads increases concurrency and improves server performance.
As a simplified formula:
number of threads = (task execution time / (task execution time - IO wait time)) * number of CPU cores
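The formula above can be checked with a small worked example; the 100 ms / 80 ms / 16-core numbers below are illustrative, not from the text:

```python
def thread_count(total_time_ms: float, io_wait_ms: float, cpu_cores: int) -> int:
    """Thread-pool size per the formula:
    threads = (task time / (task time - IO wait time)) * CPU cores."""
    compute_ms = total_time_ms - io_wait_ms
    return round(total_time_ms / compute_ms * cpu_cores)

# A request taking 100 ms, of which 80 ms is IO wait, on a 16-core server:
# only 20 ms is actual compute, so each core can interleave 5 such tasks.
print(thread_count(100, 80, 16))  # 80
```

Note the limiting cases: with zero IO wait the formula reduces to one thread per core, matching the compute-bound rule stated above.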
5. Storage optimization
Reading and writing data is another bottleneck for sites handling concurrent access. Although caching can absorb part of the read pressure, in many cases the disk is still the most serious bottleneck. The disk also holds the site's most valuable asset, its data, so disk availability and fault tolerance are crucial.
Of mechanical hard drives and solid-state drives, the mechanical drive is the more common. It uses a motor to move the head to the specified position on the platter before accessing data. The head barely moves during sequential reads but moves constantly during random access, so mechanical-disk performance varies enormously between the two, and random read/write efficiency is low. In web applications most data access is random, and in this scenario SSDs perform much better. SSD manufacturing processes and data reliability still have room to improve, so their use is not yet universal, but judging by the trend they will replace mechanical disks sooner or later.


Origin www.cnblogs.com/heimaguangzhou/p/11578595.html