Interpretation of the PHP Interview: High-Concurrency Study Points and Solutions

These are study notes written after finishing the imooc.com course "A 360 senior engineer's comprehensive interpretation of the PHP interview". Course link: https://coding.imooc.com/class/133.html

 

 


1. High concurrency and heavy traffic: solutions

Real question review

How do you handle high concurrency and heavy traffic in a PHP application?

Analysis of the key points

High-concurrency architecture concepts: in the Internet field, "concurrency" and "high concurrency" usually refer to concurrent access, that is, the number of requests arriving at the same point in time. As a rule of thumb, a system whose daily PV exceeds 10 million is likely to be a high-concurrency system. The metrics we care about for high concurrency:

    qps: queries (requests) per second; in the Internet field, the number of requests responded to per second.
    Concurrency: the number of requests processed per unit time (usually measured by qps).
    Response time: the time taken from sending a request to receiving the response.
    pv (page view): page views or clicks, i.e. the pages a visitor accesses within 24 hours; the same person viewing the same page of a site counts as one pv.
    uv (unique visitor): within a given time range, a visitor who accesses the site several times counts as only one unique visitor.
    Bandwidth: estimating bandwidth requires focusing on two indicators, peak traffic and average page size. Daily website bandwidth = pv / statistical time (converted to seconds) * average page size (KB) * 8 (in kbit/s).
    qps is not the number of concurrent connections: qps is the number of HTTP requests per second, while the number of concurrent connections is the number of requests the system is handling at the same moment.
    Peak requests per second = (total daily pv * 80%) / (6 hours converted to seconds), based on the assumption that 80% of the traffic is concentrated in roughly 20% of the time; 6 hours is a rough estimate of that window.
    Stress testing: test the maximum concurrency and the maximum qps the system can withstand. Common performance-testing tools: ab, wrk, http_load, webbench, siege, Apache JMeter.
For the ab tool, see: https://blog.csdn.net/qq_16399991/article/details/56676780. Note: run the test from a machine separate from the one being tested, do not run stress tests against a live production service, and monitor CPU, memory, network, etc. on both the ab machine and the front-end machine under test so that none exceeds 75% of its maximum capacity.
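The bandwidth and peak-qps formulas above can be sketched as follows. This is a minimal Python illustration; the 10 million daily PV and 100 KB average page size are hypothetical numbers chosen for the example:

```python
# Estimate daily bandwidth and peak QPS from the formulas above.
# All concrete numbers are hypothetical, for illustration only.

def daily_bandwidth_kbps(pv, avg_page_kb, stat_seconds=24 * 3600):
    """Daily bandwidth (kbit/s) = pv / statistical seconds * avg page size (KB) * 8."""
    return pv / stat_seconds * avg_page_kb * 8

def peak_qps(daily_pv, busy_hours=6):
    """Peak QPS = (total pv * 80%) / busy-window seconds, assuming 80% of
    traffic arrives in roughly 20% of the day (about 6 hours)."""
    return daily_pv * 0.8 / (busy_hours * 3600)

if __name__ == "__main__":
    pv = 10_000_000  # hypothetical: 10 million page views per day
    print(f"average bandwidth: {daily_bandwidth_kbps(pv, 100):.0f} kbit/s")
    print(f"peak QPS:          {peak_qps(pv):.0f}")
```

For 10 million PV and 100 KB pages this gives an average of roughly 92,000 kbit/s and a peak of roughly 370 requests per second, which shows why even moderate PV numbers can strain a single server.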

    qps limits at different scales: qps up to about 50: a small site; an ordinary server can cope.
    qps around 100: assume each relational-database query completes in 0.01 s and each page needs only a single SQL query. 100 qps means completing 100 requests per second, but at this point we can no longer guarantee the database completes 100 queries in that second, so its limit is reached. Optimizations: a database caching layer, database load balancing.
    qps around 800: assume a 100 Mbit/s link, which means an actual outbound bandwidth of about 8 MB/s. If each page is only about 10 KB, at this concurrency the bandwidth is fully consumed. Solutions: CDN acceleration, load balancing.
    qps around 1000: assume memcache caches the database queries, so each page request hits memcache far more cheaply than the database (memcache can pessimistically handle about 20,000 qps), but the network bandwidth may be exhausted before that limit, so performance is unstable. Solution: static HTML caching.
    qps around 2000: at this level, file-system access and locking become a disaster. Solutions: split the business into separate services, distributed storage.
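A quick sanity check of the ~800 qps bandwidth ceiling above. This is a sketch; the 8 MB/s effective throughput for a 100 Mbit/s link and the 10 KB page size are the article's own assumptions:

```python
# At roughly what QPS does a link saturate, given page size?
# Effective throughput of a 100 Mbit/s link is taken as ~8 MB/s,
# per the article's estimate (protocol overhead eats the rest).

def max_qps(effective_bytes_per_sec, page_bytes):
    """Upper bound on QPS imposed by outbound bandwidth alone."""
    return effective_bytes_per_sec // page_bytes

if __name__ == "__main__":
    limit = max_qps(8 * 1024 * 1024, 10 * 1024)
    print(f"bandwidth-limited QPS for 10 KB pages: {limit}")
```

The result is on the order of 800, matching the threshold in the list: beyond that, no amount of server tuning helps until the bandwidth itself is addressed (CDN, load balancing).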

Optimization directions:

    Web server optimization: load balancing.
    Traffic optimization: anti-hotlinking, blocking malicious requests.
    Front-end optimization: reduce HTTP requests, use asynchronous requests, enable browser caching and file compression, use CDN acceleration, set up a separate image server.
    Server-side optimization: page staticization, concurrency handling, queue processing.
    Database optimization: database caching, splitting databases and tables (sharding), partitioning, read/write separation, database load balancing.


 
2. Web resource anti-hotlinking

Concepts:

What is hotlinking: displaying content on a page that is not hosted on your own server, i.e. other sites embedding and serving resources that live on your server, consuming your bandwidth.

How anti-hotlinking works: via the Referer header or via signatures. The site can detect the source page of a request; for a resource file, it can trace the page on which the resource is being shown, and if the detected source is not this site, the request is blocked or a designated replacement page is returned. With the signature approach, a signature is computed for each resource URL and checked for validity: if it is valid the resource is served, otherwise an error message is returned.
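The signature approach described above can be sketched as follows. This is a minimal Python illustration, not tied to any particular CDN or server module; the secret, the `e`/`s` query-parameter names, and the expiry scheme are all hypothetical choices for the example:

```python
import hashlib
import hmac
import time

# Hypothetical shared secret between the page generator and the media server.
SECRET = b"change-me"

def sign_url(path: str, expires: int) -> str:
    """Append an expiry timestamp and an HMAC signature to a resource path."""
    msg = f"{path}{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?e={expires}&s={sig}"

def verify(path: str, expires: int, sig: str) -> bool:
    """Serve the resource only if the signature matches and has not expired."""
    if expires < time.time():
        return False
    msg = f"{path}{expires}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

A legitimate page generates the signed URL; the media server recomputes the HMAC and rejects requests whose signature is wrong or expired, so copying the bare file URL to another site stops working once the link expires.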

Methods: use the nginx ngx_http_referer_module to block requests whose origin domain is illegitimate, via the valid_referers directive and the $invalid_referer variable; the weakness is that the Referer header can be forged. Alternatively, use cryptographic signatures, e.g. via a third-party module (the article names "httpAccessModule"), to implement anti-hotlinking.
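As a sketch, a Referer-based rule in nginx might look like the following (the domain names and file extensions are placeholders; this assumes the stock ngx_http_referer_module):

```nginx
# Block hotlinking of images: allow empty referers (direct visits),
# referers stripped by proxies, and our own domains; reject the rest.
location ~* \.(gif|jpg|jpeg|png|webp)$ {
    valid_referers none blocked example.com *.example.com;
    if ($invalid_referer) {
        return 403;   # or: rewrite ^ /forbidden.png break;
    }
}
```

Here `none` matches requests without a Referer header and `blocked` matches referers removed by firewalls or proxies. Since the Referer header can be forged, this is best combined with the signature scheme for stronger protection.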
3. Reducing HTTP requests

Related concepts:

Why reduce the number of HTTP requests: the Performance Golden Rule states that only 10%-20% of end-user response time is spent receiving the requested HTML document; the remaining 80%-90% is spent on the components the HTML document references (images, JS, CSS, Flash, etc.). How to improve: reduce the number of components, thereby reducing the number of HTTP requests. The overhead of an HTTP request: DNS resolution → TCP connection → sending the request → waiting → downloading the resource → parsing time.

Ways to reduce HTTP requests: CSS sprites, merging scripts and stylesheets where appropriate.
4. Browser caching and compression optimization techniques
5. CDN acceleration
6. Setting up a separate image server
7. Staticizing dynamic pages
8. Data cache layer optimization
9. Database cache layer optimization
10. MySQL data layer optimization
11. Web server load balancing

Concepts:

Layer-7 load balancing: implemented using application-layer information, e.g. balancing by URL. The representative implementation is nginx, whose Layer-7 load-balancing capability is very powerful: high performance, stable, with flexible and simple configuration; it can automatically remove back-end servers that are not working properly, upload files in asynchronous mode, and supports multiple distribution strategies. nginx load-balancing strategies: built-in strategies: ip_hash and weighted round-robin; extension strategies: fair (judges each back end's load from its response time and picks the one with the lightest load), generic hash (relatively simple; any nginx built-in variable can serve as the hash key), and consistent hash (uses nginx's built-in consistent-hash ring; supports memcache).

nginx configuration:
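A minimal sketch of the weighted round-robin strategy described above (all addresses, ports, and weights are hypothetical placeholders):

```nginx
# Weighted round-robin across three hypothetical back-end application servers.
upstream php_backend {
    # ip_hash;                     # alternative built-in strategy
                                   # (not compatible with 'backup' below)
    server 10.0.0.1:8080 weight=3; # receives ~3x the traffic
    server 10.0.0.2:8080 weight=1;
    server 10.0.0.3:8080 backup;   # used only when the others are down
}

server {
    listen 80;
    location / {
        proxy_pass http://php_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Requests hitting port 80 are proxied to the upstream group; nginx distributes them by weight and automatically stops sending traffic to back ends that fail health checks.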

Layer-4 load balancing: the final internal server is chosen based on the packet's destination address and port, together with the server-selection method configured on the load-balancing device. LVS implements server load balancing in three modes: NAT, DR, and TUN.
----------------
Disclaimer: this article is an original article by CSDN blogger "wangxiaoangg", published under the CC 4.0 BY-SA copyright agreement; if you reproduce it, please attach the original source link and this statement.
Original link: https://blog.csdn.net/qq_16399991/article/details/82556527

Origin www.cnblogs.com/jokmangood/p/11735126.html