Related technologies of JavaWeb website performance optimization

1. Improve the concurrent processing capacity of the server

We always want a server to handle as many requests as possible per unit of time; this throughput has become the key measure of a web server's capability. A server can process multiple requests at the same time because the operating system is designed as a multi-execution-flow system, which lets multiple tasks take turns using system resources, including the CPU, memory, and I/O. The point is therefore to select an appropriate concurrency strategy that makes reasonable use of these resources and thereby improves the server's concurrent processing capability. Such concurrency strategies are most commonly applied in underlying web server software such as Apache, Nginx, and Lighttpd.
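
As a rough illustration of one such concurrency strategy, the following minimal Java sketch uses a fixed thread pool: connections are accepted on the main thread and processed by pooled workers. The class name, port, and pool size are illustrative only, and the request itself is ignored for brevity.

    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Minimal sketch of a thread-pool ("one worker per connection") concurrency strategy.
    public class ThreadPoolHttpServer {
        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(200);   // illustrative pool size
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket client = server.accept();        // accept on the main thread
                    pool.submit(() -> handle(client));      // process on a pooled worker
                }
            }
        }

        private static void handle(Socket client) {
            // The incoming request is not parsed here; we only send a fixed response.
            try (Socket c = client; OutputStream out = c.getOutputStream()) {
                byte[] body = "hello".getBytes(StandardCharsets.UTF_8);
                out.write(("HTTP/1.1 200 OK\r\nContent-Length: " + body.length
                        + "\r\nConnection: close\r\n\r\n").getBytes(StandardCharsets.UTF_8));
                out.write(body);
            } catch (Exception ignored) {
                // a production server would log and recover here
            }
        }
    }

A bounded thread pool is only one possible strategy; event-driven models such as epoll-based servers pursue the same goal with far fewer threads.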


2. Separation of Web Components

The web components mentioned here are all of the URL-addressable resources provided by the web server, including dynamic content, static web pages, images, style sheets, scripts, videos, and so on. These resources differ greatly in file size, number of files, content update frequency, expected number of concurrent users, and whether a script interpreter is required. Applying an optimization strategy tailored to each kind of resource, so that each can realize its full potential, can greatly improve website performance. For example, images can be deployed on an independent server under a separate domain name, and static web pages can be served with the epoll model to maintain a stable throughput rate under heavy concurrency; a small code sketch of this separation follows.
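
As a small, hypothetical sketch of what this separation can look like in application code, the helper below builds static-resource URLs against a dedicated domain ("static.example.com" is a placeholder), so that images and other static files can later be moved to an independent server without touching page templates.

    // Minimal sketch: rewriting static-resource URLs to a dedicated domain.
    // The host name is a hypothetical placeholder used only for illustration.
    public final class StaticUrl {
        private static final String STATIC_HOST = "http://static.example.com";

        private StaticUrl() { }

        // e.g. StaticUrl.of("/img/logo.png") -> "http://static.example.com/img/logo.png"
        public static String of(String path) {
            return STATIC_HOST + (path.startsWith("/") ? path : "/" + path);
        }
    }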


3. Database performance optimization and expansion

On the web application side, database optimization mainly means reducing the number of database accesses, and the usual way to do that is through various forms of caching. Query performance can also be improved from the database side itself, which involves database performance tuning and is not discussed in this article. In addition, the database can be scaled out and its service capability improved by means of master-slave replication, read-write separation, reverse proxying, and separating write operations.
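
A minimal sketch of the "reduce database accesses with a cache" idea is the cache-aside pattern below; loadFromDb stands for any DAO or JDBC call, and the class is illustrative rather than tied to a particular framework.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Minimal cache-aside sketch: look in a local cache first and only hit the
    // database on a miss.
    public class CacheAsideDao<K, V> {
        private final Map<K, V> cache = new ConcurrentHashMap<>();
        private final Function<K, V> loadFromDb;

        public CacheAsideDao(Function<K, V> loadFromDb) {
            this.loadFromDb = loadFromDb;
        }

        public V get(K key) {
            // computeIfAbsent queries the database only when the key is not cached
            return cache.computeIfAbsent(key, loadFromDb);
        }

        public void invalidate(K key) {
            // after a database write, evict the stale cached copy
            cache.remove(key);
        }
    }

A real implementation would also need entry expiration and a size limit so that stale or unbounded data does not accumulate in memory.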


4. Web load balancing and related technologies

Load balancing is a means of scaling a web site horizontally. There are several ways to implement it, including HTTP redirection-based load balancing, DNS load balancing, reverse proxy load balancing, Layer 4 load balancing, and so on.


A brief introduction to these load balancing methods: HTTP redirection-based load balancing uses the HTTP redirect response to transfer a request and have the client automatically jump to another server; the mirror download sites we are familiar with use this kind of load balancing. DNS load balancing means configuring multiple IP addresses for the same host name on a DNS server and returning different resolution results to different DNS queries, so that different clients access different servers. Reverse proxy load balancing is also called seven-layer load balancing, because the reverse proxy server works at the seventh layer (the application layer) of the OSI model, distributing incoming requests among the back-end servers and thus taking on the load balancing task. Layer 4 load balancing is based on NAT technology: it maps a legally registered public IP address to the IP addresses of multiple internal servers and dynamically assigns one of those internal addresses to each TCP connection request, thereby achieving load balancing. In addition, there is load balancing working in direct routing mode at the data link layer (layer 2), which is implemented by modifying the destination MAC address of the packet, and load balancing based on IP tunneling, in which the actual servers can be deployed in different regions as needed and requests are forwarded according to the principle of nearby access; CDN services are based on IP tunnel technology.
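
As a minimal sketch of the first of these methods, HTTP redirection-based load balancing, the servlet below redirects each request to one of several mirror servers in round-robin order; it assumes the javax Servlet API, and the mirror host names are placeholders.

    import java.io.IOException;
    import java.util.concurrent.atomic.AtomicInteger;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Minimal sketch of HTTP-redirection load balancing: a front door that
    // 302-redirects each request to one of several mirror servers in turn.
    public class RedirectBalancerServlet extends HttpServlet {
        private static final String[] MIRRORS = {
                "http://www1.example.com", "http://www2.example.com", "http://www3.example.com"
        };
        private final AtomicInteger next = new AtomicInteger();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
            int i = Math.floorMod(next.getAndIncrement(), MIRRORS.length);
            String target = MIRRORS[i] + req.getRequestURI();
            resp.sendRedirect(target);   // 302: the browser re-issues the request to the mirror
        }
    }

The well-known drawback of any redirect-based scheme is that every request costs an extra round trip before it reaches the server that does the real work.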


While expanding the scale of the web server cluster, web load balancing also provides a larger, more complex, and more flexible platform for website performance optimization. Performance optimization strategies based on this platform include shared file systems, content distribution and synchronization, distributed file systems, distributed computing, distributed caching, and more.
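
Distributed caching, one of the strategies listed above, needs a way to decide which cache node holds which key. One common technique for this, not tied to any particular product, is consistent hashing; a minimal Java sketch with placeholder node names follows.

    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;
    import java.util.SortedMap;
    import java.util.TreeMap;

    // Minimal consistent-hash ring: maps cache keys to cache nodes so that adding
    // or removing a node only remaps a small fraction of the keys.
    public class ConsistentHashRing {
        private static final int VIRTUAL_NODES = 100;   // virtual nodes smooth the distribution
        private final TreeMap<Long, String> ring = new TreeMap<>();

        public void addNode(String node) {
            for (int i = 0; i < VIRTUAL_NODES; i++) {
                ring.put(hash(node + "#" + i), node);
            }
        }

        public String nodeFor(String key) {
            if (ring.isEmpty()) throw new IllegalStateException("no cache nodes");
            SortedMap<Long, String> tail = ring.tailMap(hash(key));
            return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
        }

        private static long hash(String s) {
            try {
                byte[] d = MessageDigest.getInstance("MD5").digest(s.getBytes(StandardCharsets.UTF_8));
                // use the first 8 bytes of the MD5 digest as the position on the ring
                long h = 0;
                for (int i = 0; i < 8; i++) h = (h << 8) | (d[i] & 0xff);
                return h;
            } catch (Exception e) {
                throw new IllegalStateException(e);
            }
        }
    }

Usage would look like ring.addNode("cache-1"); ring.addNode("cache-2"); String node = ring.nodeFor("user:42");, and adding a third node later only remaps a small share of the keys.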


5. Web caching technology

Web caching technology is considered an effective way to reduce server load, relieve network congestion, and enhance the scalability of the World Wide Web. The idea is to keep a copy of content that has been accessed; the next time that content is requested, the request does not have to go back to the origin website or be recomputed, but is served from the copy kept in the cache. Web caching can bring the following benefits:


(1) It reduces network traffic and thereby relieves network congestion, because a portion of HTTP requests is satisfied by the cache and never reaches the origin server.

(2) It reduces client access latency, for two main reasons: ① for cached content, clients obtain it from the cache instead of fetching or recomputing it from the origin server, which reduces transmission delay and shortens response time; ② even content that is not cached is obtained faster, because network congestion and server load are both reduced.

(3) Since part or all of the content requested by clients can be served from the cache, the load on the remote server is reduced.

(4) If the origin server cannot respond to a client request because of a server or network failure, the client can still obtain a copy of the content from the cache, which enhances the robustness of the web service.


It can be seen that web caching can bring considerable performance improvements to a web site. In fact, caching is present at almost every step between the moment a user sends a request and the moment the complete web page is displayed in front of the user. The following are caching technologies commonly used in web performance optimization; you will find that caching is applied at every stage.


Browser cache: browsers generally create a directory in the user's file system to store cached files, and mark each cached file with the necessary tags, such as an expiration time. These tags are mainly used for cache negotiation between the browser and the server.
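
A minimal sketch of such cache negotiation on the server side, assuming the javax Servlet API and a hard-coded content version, looks like this: the browser sends back the ETag it stored, and the server answers 304 Not Modified when the stored copy is still valid.

    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Minimal sketch of ETag-based cache negotiation; content and tag are placeholders.
    public class EtagServlet extends HttpServlet {
        private static final String CONTENT = "cached page body";
        private static final String ETAG = "\"v1\"";   // would normally be derived from the content

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String clientTag = req.getHeader("If-None-Match");
            if (ETAG.equals(clientTag)) {
                resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);   // 304: reuse the local copy
                return;
            }
            resp.setHeader("ETag", ETAG);
            resp.setHeader("Cache-Control", "max-age=3600");           // fresh for one hour
            resp.getWriter().write(CONTENT);
        }
    }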


Web server cache: when a URL corresponds to the same response content for a long period of time, such as static content or dynamic content that is updated infrequently, the web server can cache that response; the next time it receives a request for the URL, it immediately takes the pre-cached response and returns it to the browser.
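
A minimal sketch of this idea is the in-memory page cache below, keyed by URL with a time-to-live; render stands for whatever actually produces the page (a JSP, a template engine, and so on), and the names and TTL are illustrative.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal server-side page cache: responses are kept in memory per URL with a TTL,
    // so repeated requests skip the expensive rendering step.
    public class PageCache {
        private static final long TTL_MILLIS = 60_000;   // illustrative: one minute

        private static final class Entry {
            final String body;
            final long expiresAt;
            Entry(String body, long expiresAt) { this.body = body; this.expiresAt = expiresAt; }
        }

        private final Map<String, Entry> cache = new ConcurrentHashMap<>();

        public String get(String url, java.util.function.Function<String, String> render) {
            Entry e = cache.get(url);
            if (e != null && e.expiresAt > System.currentTimeMillis()) {
                return e.body;                                   // cache hit: no rendering
            }
            String body = render.apply(url);                     // cache miss: render once
            cache.put(url, new Entry(body, System.currentTimeMillis() + TTL_MILLIS));
            return body;
        }
    }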


Proxy server cache: a front-end server that is exposed to the Internet and connects to the back-end web servers through an internal network is called a reverse proxy server, and the cache established on it is called a reverse proxy cache. A proxy server that sits on the client side and forwards the clients' requests to external web servers on their behalf is called a forward proxy server, and the cache established on it is called a forward proxy cache. A proxy server cache sits between the client and the web server and can be thought of as a relay station between the two; it can improve the client's access speed and enhance the web server's service capability, security, and so on.
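
To make the reverse proxy cache concrete, the sketch below, assuming the javax Servlet API and Java 11's HttpClient, with a hypothetical back-end address and no expiration or error handling, checks its local cache first and forwards to the internal back-end server only on a miss.

    import java.io.IOException;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Minimal caching reverse proxy in servlet form: GET responses fetched from the
    // internal back end are kept in memory and reused for later requests.
    public class CachingReverseProxyServlet extends HttpServlet {
        private static final String BACKEND = "http://backend.internal:8080";   // placeholder
        private final Map<String, String> cache = new ConcurrentHashMap<>();
        private final HttpClient client = HttpClient.newHttpClient();

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            String key = req.getRequestURI();                 // cache key: the requested path
            String body = cache.get(key);
            if (body == null) {                               // miss: forward to the back end
                try {
                    HttpRequest upstream = HttpRequest.newBuilder(URI.create(BACKEND + key)).GET().build();
                    body = client.send(upstream, HttpResponse.BodyHandlers.ofString()).body();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    throw new ServletException(e);
                }
                cache.put(key, body);
            }
            resp.getWriter().write(body);
        }
    }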
