Apache configuration optimization

[How Apache works]

Prefork mode (default)
This Multi-Processing Module (MPM) implements a non-threaded, preforking web server that handles requests in the same way as Apache 1.3. It is suitable for systems that lack thread-safe libraries and need to avoid thread-compatibility issues, and it is the best MPM when each request must be isolated from the others: if one request runs into a problem, it will not affect the rest.

This MPM is largely self-tuning and rarely needs more than a few configuration adjustments. The most important setting is MaxClients: it should be large enough to absorb request spikes, but not so large that the memory required by all child processes exceeds physical memory.

Worker mode
This Multi-Processing Module (MPM) lets the web server mix multi-processing with multi-threading. Because threads are used to serve requests, it can handle a large number of requests with less system-resource overhead than a purely process-based MPM. At the same time it still runs multiple processes, each containing many threads, which retains much of the stability of a process-based MPM.
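Each MPM is tuned with its own group of directives. A minimal sketch of both, wrapped in the usual IfModule guards, might look like the following; the numbers are illustrative assumptions, not recommendations for any particular machine:

```apache
# Prefork: one single-threaded child process per concurrent request
<IfModule mpm_prefork_module>
    StartServers         10
    MinSpareServers      10
    MaxSpareServers      75
    MaxClients          256
    MaxRequestsPerChild   0
</IfModule>

# Worker: a few processes with many threads each
# (here 4 processes x 25 threads = 100 concurrent requests)
<IfModule mpm_worker_module>
    StartServers          2
    MaxClients          100
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxRequestsPerChild   0
</IfModule>
```

For the worker MPM, MaxClients should be a multiple of ThreadsPerChild, since concurrency is the product of processes and threads per process.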

[Apache configuration parameter remarks]
1. KeepAlive On/Off
   KeepAlive keeps the connection alive, much like a persistent MySQL connection. With KeepAlive set to On, subsequent requests from the same client reuse the existing connection instead of opening a new one for every request, which avoids the overhead of connection setup and reduces the load on the server. In general, image-heavy websites should set KeepAlive to On.

2. KeepAliveTimeout number
  If the gap between one request and the next on a connection exceeds KeepAliveTimeout, the first connection is closed and a new connection is created for the second request. Its value should roughly match the interval between consecutive requests for assets such as images or JS files; in my experience, 3-5 seconds works well.

3. MaxKeepAliveRequests 100
  The maximum number of HTTP requests allowed on a single connection. Setting it to 0 allows an unlimited number of requests per connection. In practice, no client issues that many requests within a single connection, so the connection usually completes before this limit is reached.
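The three keep-alive directives above combine into a fragment like this; the values follow the guidance in the text and are assumptions, not universal defaults:

```apache
# Reuse connections; helpful for image-heavy sites
KeepAlive On
# Close a kept-alive connection after 5 idle seconds
KeepAliveTimeout 5
# Cap the number of requests served on one connection
MaxKeepAliveRequests 100
```

Note that Apache does not allow comments on the same line as a directive, so each comment sits on its own line.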

4. StartServers 10
  Sets the number of child processes created when the server starts. Since the number of child processes is adjusted dynamically according to load, there is usually no need to change this parameter.

5. MinSpareServers 10
  Sets the minimum number of idle child processes; an idle child process is one that is not currently handling a request. If fewer than MinSpareServers children are idle, Apache spawns new children at a maximum rate of one per second. Tuning this parameter is only necessary on very busy machines, and setting it too high is usually a bad idea.

6. MaxSpareServers 75
  Sets the maximum number of idle child processes. If more than MaxSpareServers children are currently idle, the parent process kills the excess children. Tuning this parameter is only necessary on very busy machines, and setting it too high is usually a bad idea. If you set this directive lower than MinSpareServers, Apache automatically changes it to MinSpareServers + 1.

7. ServerLimit 2000
  Sets the upper limit on the number of server processes that may be configured. It only needs to be set if MaxClients must exceed the default of 256; keep its value equal to MaxClients. Changing this directive requires completely stopping and then starting the service; a graceful restart will not pick up the change.

8. MaxClients 256
  The maximum number of simultaneous requests that will be served, i.e. the maximum number of child processes. Any requests beyond the MaxClients limit enter a waiting queue. The default is 256; to raise it, you must raise ServerLimit as well. A common starting point is (physical memory in MB / 2), adjusted afterwards according to the observed load. For example, a machine with 4 GB of memory would start at 4096 / 2 = 2048.
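The sizing rule above is easy to sanity-check numerically. The sketch below is a hypothetical calculation, not part of Apache: it applies the "memory in MB / 2" heuristic and also shows the tighter bound you get from an assumed per-child memory footprint.

```python
def max_clients_heuristic(ram_mb: int) -> int:
    """Initial MaxClients per the 'physical memory in MB / 2' rule."""
    return ram_mb // 2

def max_clients_by_footprint(ram_mb: int, child_rss_mb: float,
                             reserved_mb: int = 512) -> int:
    """Upper bound so all children fit in RAM, leaving some memory
    reserved for the OS and other services (assumed values)."""
    return int((ram_mb - reserved_mb) // child_rss_mb)

ram = 4096  # a machine with 4 GB of memory
print(max_clients_heuristic(ram))            # heuristic starting point: 2048
print(max_clients_by_footprint(ram, 20.0))   # bound assuming ~20 MB per child
```

If your children are heavy (mod_php, for example), the footprint-based bound is usually far below the memory/2 heuristic, so measure per-child RSS before settling on a value.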

9. MaxRequestsPerChild 0
   Apache (apache.exe on Windows) runs as a parent process plus child processes; the parent accepts each request and hands it to a child for processing. MaxRequestsPerChild sets how many requests an individual child process handles before the parent terminates it, at which point the memory the child occupied is released. If more requests arrive, the parent spawns a replacement child.
   With MaxRequestsPerChild at 0 (unlimited) or a large value (say 10,000 or more), each child serves many requests, and efficiency is not lost to constant child termination and startup. The downside of 0 is that memory is never given back: if the children grow to 200-300 MB, that memory stays allocated even after the load drops. Servers with plenty of memory can use 0 or a large number; servers with less memory should consider values such as 30, 50, or 100 to guard against memory exhaustion. In general, if you find the server's memory usage ballooning, this is the first parameter to try adjusting.
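Putting parameters 1-9 together, a httpd.conf fragment for a busy prefork server might look like the following; the specific numbers are illustrative assumptions drawn from the remarks above, not drop-in values:

```apache
# Keep-alive behaviour
KeepAlive On
KeepAliveTimeout 5
MaxKeepAliveRequests 100

# Process pool (prefork MPM)
StartServers        10
MinSpareServers     10
MaxSpareServers     75
# ServerLimit must be raised together with MaxClients above 256
ServerLimit       2000
MaxClients        2000
# Recycle children to return memory on a memory-constrained server
MaxRequestsPerChild 100
```

Remember that changing ServerLimit takes effect only after a full stop and start, not a graceful restart.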

[Apache's Rewrite]
1. Control whether .htaccess files may define or override Apache settings, and whether directory listings are supported:

<Directory />
    Options Indexes FollowSymLinks
    AllowOverride All
</Directory>
 
2. Rewrite configuration

RewriteEngine on
# Conditions under which the rewrite occurs
RewriteCond $1 !^(index\.php|images|robots\.txt)
RewriteRule ^(.*)$ /index.php/$1 [L]
# Rewrite log file and verbosity, mainly for rewrite debugging
RewriteLog "D:/lib/rewrite.log"
RewriteLogLevel 3


 

[Apache's gzip function]
Gzip can noticeably speed up a large website; the compression ratio is sometimes as high as 80%. In my recent tests it was at least 40%, which is still quite good. Note that in Apache 2 and later the module is no longer called mod_gzip; it is called mod_deflate.

If you want to enable gzip, be sure to load the following two modules.
LoadModule headers_module modules/mod_headers.so
LoadModule deflate_module modules/mod_deflate.so

Set the compression level; the value ranges from 1 (lowest) to 9 (highest). Setting it very high is not recommended: the compression ratio improves, but more CPU is consumed.
DeflateCompressionLevel 3
AddOutputFilter DEFLATE html xml php js css
<Location />
SetOutputFilter DEFLATE
BrowserMatch ^Mozilla/4 gzip-only-text/html
BrowserMatch ^Mozilla/4\.0[678] no-gzip
BrowserMatch \bMSIE !no-gzip !gzip-only-text/html
SetEnvIfNoCase Request_URI \.(?:gif|jpe?g|png)$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.(?:exe|t?gz|zip|bz2|sit|rar)$ no-gzip dont-vary
SetEnvIfNoCase Request_URI \.(?:pdf|mov|avi|mp3|mp4|rm)$ no-gzip dont-vary

# Tell proxies to cache separate variants per user agent
Header append Vary User-Agent env=!dont-vary
</Location>

The following are two sites for testing whether gzip is working:

http://www.whatsmyip.org/mod_gzip_test/

http://www.gidnetwork.com/tools/gzip-test.php

Test data for CSS
Original Size: 44 KB
Gzipped Size: 10 KB
Data Savings: 77.27%

Test data for JS
Original Size: 6 KB
Gzipped Size: 2 KB
Data Savings: 66.67%

Test data for PHP
Original Size: 62 KB
Gzipped Size: 15 KB
Data Savings: 75.81%

These are just a few sample measurements, but they show that files are much smaller after gzip compression.
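Savings figures like the ones above are easy to reproduce locally. The sketch below uses hypothetical sample data and the standard-library gzip module only; it compresses a block of repetitive CSS-like text at level 3 (the DeflateCompressionLevel used above) and reports the percentage saved:

```python
import gzip

# Repetitive text compresses well, as real CSS/JS/HTML does.
sample = b".box { margin: 0; padding: 4px; color: #333; }\n" * 1000

compressed = gzip.compress(sample, compresslevel=3)
savings = (1 - len(compressed) / len(sample)) * 100
print(f"Original: {len(sample)} bytes")
print(f"Gzipped:  {len(compressed)} bytes")
print(f"Savings:  {savings:.2f}%")
```

Highly repetitive input like this compresses far better than typical production files, so treat the number it prints as an upper bound, not a prediction.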

In addition, a note on how Squid handles gzip: Squid keeps only one cached copy per URL. If browsers with different capabilities (some supporting compression, some not) alternate frequently in requesting the same cached object, the cache can be forced to refresh repeatedly: an HTTP/1.0 request may cause Squid to replace its cached copy, and a subsequent HTTP/1.1 request causes it to be replaced again. The cached data is then updated constantly, which greatly reduces the cache hit rate.
Fortunately, very few browsers in real-world use lack compression support, so the reduction in hit rate is quite limited.
