Performance Optimization Overview


When planning performance optimization:

1. Analyze each service individually.

2. Understand the business model behind the traffic.

3. Finally, weigh performance against security.

Stress testing tool

[root@web01 conf.d]# yum install httpd-tools -y

# Configure nginx
[root@lb01 conf.d]# cat try.conf 
server {
    listen 80;
    server_name try.haoda.com;

    location / {
        root /code;
        try_files $uri $uri/ @java;
        index index.jsp index.html;
    }

    location @java {
        proxy_pass http://172.16.1.8:8080;
    }
}

# Create the static page served by nginx
[root@lb01 conf.d]# echo "nginx ab" > /code/ad.html

# Install Tomcat and create the static page it will serve
[root@web02 ~]# wget http://mirrors.hust.edu.cn/apache/tomcat/tomcat-9/v9.0.16/bin/apache-tomcat-9.0.16.tar.gz
[root@web02 ~]# tar xf apache-tomcat-9.0.16.tar.gz
[root@web02 ~]# cd /usr/share/tomcat/webapps/ROOT
[root@web02 ROOT]# echo "tomcat aaaaa" > tomcat.html

# Use the stress-testing tool against nginx serving the static resource
[root@lb01 conf.d]# ab -n 10000 -c 200 http://try.haoda.com/ad.html

Server Software:        nginx/1.14.2
Server Hostname:        try.haoda.com
Server Port:            80

Document Path:          /ad.html
Document Length:        9 bytes

Concurrency Level:      200
Time taken for tests:   1.078 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      2380000 bytes
HTML transferred:       90000 bytes
Requests per second:    9272.58 [#/sec] (mean)
Time per request:       21.569 [ms] (mean)
Time per request:       0.108 [ms] (mean, across all concurrent requests)
Transfer rate:          2155.15 [Kbytes/sec] received


# Use the stress-testing tool against Tomcat serving the static resource (proxied through nginx)
[root@lb01 conf.d]# ab -n 10000 -c 200 http://try.haoda.com/tomcat.html

Server Software:        nginx/1.14.2
Server Hostname:        try.haoda.com
Server Port:            80

Document Path:          /tomcat.html
Document Length:        13 bytes

Concurrency Level:      200
Time taken for tests:   4.956 seconds
Complete requests:      10000
Failed requests:        0
Write errors:           0
Total transferred:      2510000 bytes
HTML transferred:       130000 bytes
Requests per second:    2017.78 [#/sec] (mean)
Time per request:       99.119 [ms] (mean)
Time per request:       0.496 [ms] (mean, across all concurrent requests)
Transfer rate:          494.59 [Kbytes/sec] received

Understand what affects the performance indicators

1. Network
    (1) Network traffic volume
    (2) Whether packets are being dropped
    (3) Both affect HTTP requests and calls
2. System
    (1) Hardware: failed disks, disk throughput
    (2) System load, memory, overall stability
3. Service
    (1) Connection optimization, request optimization
    (2) Service settings matched to the business model
4. Application
    (1) Interface (API) performance
    (2) Processing speed
    (3) Program execution efficiency
5. Database

# Every service is related to the others to some degree. Break the whole architecture into layers, find the weak point of each system or service, and then optimize it.

System performance optimization

File handles: in Linux everything is a file, and a file handle can be thought of as an index to a file. The more calls our processes make, the more file handles they consume. The system's default file-handle limit is finite, and a single process cannot open handles without bound, so we need to limit how many file handles each process and each service may use. Tuning the file-handle limits is therefore necessary.

File handle limits can be adjusted at three levels:
1. System-wide (global).
2. Per user.
3. Per process.

[root@lb01 ~]# vim /etc/security/limits.conf

1. System-wide change
# * means all users
* soft nofile 25535
* hard nofile 25535

2. Per-user change
# For the root user: soft only warns, hard enforces the limit, nofile is the maximum number of open files
root soft nofile 65535
root hard nofile 65535

3. Per-process change
# For the nginx process, via nginx's own directive
worker_rlimit_nofile 30000;
4. Tune kernel parameters: allow connections in the TIME_WAIT state to be reused (port reuse)
[root@web01 ROOT]# vim /etc/sysctl.conf
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_timestamps = 1
[root@web01 ROOT]# sysctl -p    # apply and print the parameters we just added
[root@web01 ROOT]# sysctl -a    # print all kernel parameters
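
To confirm the changes are actually in effect, a couple of quick checks (a sketch; pgrep assumes nginx is running as configured above):

[root@lb01 ~]# ulimit -Sn && ulimit -Hn                                              # soft and hard nofile limits of the current shell
[root@lb01 ~]# grep "open files" /proc/$(pgrep -f "nginx: worker" | head -1)/limits  # limit in effect for a running nginx worker
[root@web01 ROOT]# sysctl net.ipv4.tcp_tw_reuse net.ipv4.tcp_timestamps              # read back the two kernel parameters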

Consider a server handling short-lived, highly concurrent TCP connections: as soon as a request has been processed, the server actively closes the connection. In this scenario a large number of sockets end up in the TIME_WAIT state, and if client concurrency stays high, some clients will find themselves unable to connect. The reason is that whichever side actively closes a normal TCP connection is the side left holding TIME_WAIT.

Why do short-lived, highly concurrent connections matter? Two points: 1. A high-concurrency server can burn through a large number of ports in a short time, and ports only range from 0 to 65535; after excluding those used by the system and other services, even fewer remain. 2. In this scenario, a "short" connection is one whose business-processing time plus data-transfer time is much less than the TIME_WAIT timeout.

"Short" here is relative. Take fetching a web page: an HTTP transaction may be fully processed within a second, but after the connection is closed, the port it used sits in TIME_WAIT for several minutes, during which no new HTTP request can reuse that port. Counting just this one transaction, the ratio of time the server spends doing real work to the time the port (a resource) is tied up and unusable is roughly 1 to several hundred, which is a serious waste of server resources. (As an aside, from the standpoint of performance tuning, services built on long-lived connections do not need to worry about TIME_WAIT; and if you know your business scenarios well, you will find that workloads using long connections generally do not have very high connection concurrency anyway.)
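
To see whether TIME_WAIT sockets are actually piling up on the proxy, one quick check (assuming the ss tool from iproute2 is available):

[root@lb01 ~]# ss -tan | awk 'NR>1 {state[$1]++} END {for (s in state) print s, state[s]}'   # count sockets per TCP state
[root@lb01 ~]# ss -tan state time-wait | wc -l                                               # TIME_WAIT sockets only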

Proxy Service Optimization

Configure the nginx proxy service to use keep-alive (long-lived) upstream connections

upstream http_backend {
    server 127.0.0.1:8080;
    keepalive 16;   # keep-alive (long-lived) connections to the upstream
}

server {
    ...
    location /http/ {
        proxy_pass http://http_backend;
        proxy_http_version 1.1;         # HTTP/1.1 is required for upstream keepalive
        proxy_set_header Connection ""; # clear the Connection header field
        proxy_next_upstream error timeout http_500 http_502 http_503 http_504;  # fail over smoothly to the next upstream on these errors
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_connect_timeout 30s;      # timeout for the proxy to connect to the web server
        proxy_read_timeout 60s;         # timeout waiting for the web server's response
        proxy_send_timeout 60s;         # timeout for transmitting the request to the web server
        proxy_buffering on;             # buffer the upstream response and relay it to the client while receiving
        proxy_buffer_size 32k;          # buffer size for the upstream response headers
        proxy_buffers 4 128k;           # number and size of buffers for the upstream response body
        ...
    }
}
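
A rough way to confirm that upstream keep-alive is working: generate some load through the proxy and watch how many connections stay established to the backend port (8080, matching the upstream above); with keep-alive the count should hover near the keepalive value instead of growing with every request. The hostname below reuses the earlier try.haoda.com example; point it at whichever virtual host uses the keep-alive upstream:

[root@lb01 ~]# ab -n 5000 -c 50 http://try.haoda.com/tomcat.html >/dev/null 2>&1 &   # generate proxied load in the background
[root@lb01 ~]# ss -tn state established '( dport = :8080 )' | wc -l                  # established connections from the proxy to the backend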

For FastCGI servers, keeping upstream connections alive requires setting fastcgi_keep_conn on.

upstream fastcgi_backend {
    server 127.0.0.1:9000;
    keepalive 8;
}

server {
    ...
    location /fastcgi/ {
        fastcgi_pass fastcgi_backend;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        fastcgi_keep_conn on;   
        fastcgi_connect_timeout 60s;
        include fastcgi_params;
        ...
    }
}

keepalive_requests sets the maximum number of requests that can be served over one keep-alive connection; once the limit is reached, the connection is closed.

Syntax: keepalive_requests number;
Default: keepalive_requests 100;
Context: upstream

keepalive_timeout sets how long an idle keep-alive connection to an upstream server remains open.

Syntax: keepalive_timeout timeout;
Default: keepalive_timeout 60s;
Context: upstream

# This directive appeared in version 1.15.3

Note:
1. The scgi and uwsgi protocols have no concept of keep-alive connections.
2. However, the proxy, fastcgi, and uwsgi protocols all have a cache function which, once enabled, can speed up site access (depending on the hardware).

Caching static resources

No browser cache

With browser cache

Revalidation when the browser cache has expired

Browser sends If-None-Match: "9-1550193224000" to ask the web server, which compares it against its ETag "9-1550193224000".
The browser's copy has merely expired and the content has not changed, so the negotiation still ends in a 304.
Browser sends If-Modified-Since: Tue, 29 Jan 2019 02:29:51 GMT to ask the web server, which compares it against Last-Modified: Tue, 29 Jan 2019 02:29:51 GMT.
The browser's copy has merely expired and the content has not changed, so the negotiation still ends in a 304.
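
The same negotiation can be reproduced from the command line. This sketch assumes try.haoda.com resolves to the proxy and reuses the ad.html page created earlier; paste the ETag from the first command into the second, and an unchanged file should come back as 304 Not Modified:

[root@lb01 ~]# curl -sI http://try.haoda.com/ad.html | grep -Ei 'etag|last-modified'
[root@lb01 ~]# curl -sI -H 'If-None-Match: "<etag from above>"' http://try.haoda.com/ad.html | head -1
# expect: HTTP/1.1 304 Not Modified when the ETag still matches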

Configuring a static resource caching scenario

server {
    listen 80;
    server_name static.haoda.com;

    location ~ .*\.(jpg|gif|png)$ {
        expires      7d;
    }
    location ~ .*\.(js|css)$ {
        expires      30d;
    }
}
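
To check that the expires headers are being sent, request any matching file from the host above (test.png is only an illustrative file name; use an image that actually exists under the site root):

[root@lb01 ~]# curl -sI http://static.haoda.com/test.png | grep -Ei 'expires|cache-control'
# a 7d expires shows up as "Cache-Control: max-age=604800" plus an Expires date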

Disabling the cache

        location ~ .*\.(png|jpg|gif|jpeg)$ {
                expires 30d;
                add_header Cache-Control no-store;
                add_header Pragma no-cache;
        }

Static Resource Compression

location ~* .*\.(jpg|gif|png)$ {
    root /code/images;
    gzip on;
    gzip_http_version 1.1;
    gzip_comp_level 2;                          # compression level, up to 9
    gzip_types image/jpeg image/gif image/png;  # MIME types to compress
}

For text-based files

[root@Nginx conf.d]# cat static_server.conf 
server {
    listen 80;
    server_name static.oldboy.com;
    sendfile on;
    location ~ .*\.(txt|xml|html|json|js|css)$ {
        gzip on;
        gzip_http_version 1.1;
        gzip_comp_level 1;
        gzip_types text/plain application/json application/x-javascript text/css application/xml text/javascript;
    }
}
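
To confirm that compression is actually applied, send an Accept-Encoding header and look for Content-Encoding in the response (assumes static.oldboy.com resolves to this server and the requested file exists):

[root@Nginx conf.d]# curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://static.oldboy.com/index.html | grep -i content-encoding
# expect: Content-Encoding: gzip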

Preventing resource hotlinking

Anti-hotlinking means preventing other websites from maliciously embedding (stealing) our resources.

The basic idea of anti-hotlinking is to validate requests by checking some of the header information the client sends, such as the Referer header carried with the request. The advantage is that the rules are simple and easy to configure and use; the disadvantage is that the Referer can be forged, so Referer-based anti-hotlinking is not 100% reliable, but it does block most hotlinking.

Syntax: valid_referers none | blocked | server_names | string ...;
Default: -;
Context: server, location

#none: the Referer request header is absent
#blocked: the Referer header is present but does not begin with http:// or https://
#server_names: the Referer contains the current domain name; regular expression matching is possible

5.4.1 Prepare an HTML page on the hotlinking server that steals my picture

<html>
<head>
    <meta charset="utf-8">
    <title>haoda.com</title>
</head>
<body style="background-color:black;">
    <img src="http://39.104.205.72/picture/niu.jpg"/>
</body>
</html>

5.4.2 View the page in a browser

5.4.3 Configure anti-hotlinking on the server

location ~ .*\.(jpg|png|gif) {
    root /data;
    valid_referers none blocked 39.104.205.72;
    if ( $invalid_referer ) {
        return 403;
    }
}

The configuration above means that only requests coming from 39.104.205.72 (or with no Referer at all) may access this site's pictures. If the Referer domain is not in the list, $invalid_referer is set to 1 and the if block returns 403, so the user sees a 403 page.

5.4.4 If you use rewrite instead of return, hotlinked image requests will get pei.jpg back instead

location ~ .*\.(jpg|png|gif) {
    root /data;
    valid_referers none blocked 39.104.205.72;
    if ( $invalid_referer ) {
        rewrite ^(.*)$ /picture/pei.jpg break;
    }   
}

5.4.5 Allowing certain sites to hotlink

location ~ .*\.(jpg|png|gif) {
    root /data;
    valid_referers none blocked 39.104.205.72 server_names ~\.google\. ~\.baidu\.;
    if ( $invalid_referer ) {
        return 403;
    }   
}

Of course, this is not a 100% guarantee that resources will not be hotlinked, because the Referer can be forged from the command line.

[root@lb01 conf.d]# curl -e "http://www.baidu.com" -I http://39.104.205.72/picture/niu.jpg
[root@lb01 conf.d]# curl -e "http://39.104.205.72" -I http://39.104.205.72/picture/niu.jpg

Production Practice

1. Configure the site

[root@web02 conf.d]# cat static.conf 
server {
    listen 80;
    server_name static.oldboy.com;
    root /code;
    location / {
        index index.html;
    }
}

2. Upload 2 pictures

One picture that will be hotlinked
One advertisement (watermark) picture

Restart the server

[root@web02 code]# systemctl restart nginx

3. Configure the hotlinking server

[root@web01 conf.d]# cat try.conf
server {
    server_name dl.oldboy.com;
    listen 80;
    root /code;
    location / {
        index index.html;
    }
}

Configure the hotlinking page

[root@web01 code]# cat /code/tt.html
<html>

<head>
    <meta charset="utf-8">
    <title>oldboyedu.com</title>
</head>

<body style="background-color:red;">
    <img src="http://static.oldboy.com/smg.jpg"/>   #根据情况修改你的服务器地址
</body>
</html>

4. Add anti-hotlinking configuration on web02

location ~* \.(gif|jpg|png|bmp)$ {
  valid_referers none blocked *.xuliangwei.com server_names ~\.google\.;
  if ($invalid_referer) {
      return 403;                           # either return 403 directly
      rewrite ^(.*)$ /ggw.png break;        # or return a watermarked image as an advertisement for the company
  }
}

Allow cross-domain access

1. Configure site A

[root@Nginx ~]# cat /code/http_origin.html 
<html lang="en">
<head>
        <meta charset="UTF-8" />
        <title>Test AJAX and cross-origin access</title>
        <script src="http://libs.baidu.com/jquery/2.1.4/jquery.min.js"></script>
</head>
<script type="text/javascript">
$(document).ready(function(){
        $.ajax({
        type: "GET",
        url: "http://naonao.lq.com/1.jpg",
        success: function(data) {
                alert("sucess!!!");
        },
        error: function() {
                alert("fail!!,请刷新再试!");
        }
        });
});
</script>
        <body>
                <h1>Test cross-origin access</h1>
        </body>
</html>

2. Configure site B

3. Test cross-origin access through a browser

4. On site B, allow cross-origin access from site A

server {
        server_name  naonao.lq.com;
        listen 80;
        root /code;

        location / {
                index index.html;
        }

        location ~* \.(gif|jpg|png|bmp)$ {
                add_header Access-Control-Allow-Origin *;
                add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;
        }
}
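
One way to check the CORS response headers without a browser (the Origin value is arbitrary; 1.jpg is the image the test page requests):

[root@Nginx ~]# curl -s -o /dev/null -D - -H 'Origin: http://test.example.com' http://naonao.lq.com/1.jpg | grep -i access-control
# expect both Access-Control-Allow-Origin and Access-Control-Allow-Methods in the output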

CPU affinity

1. View the current physical state of the CPU

[root@nginx ~]# lscpu |grep "CPU(s)"
CPU(s):                24                                       # total number of cores
On-line CPU(s) list:   0-23
# which cores each physical CPU uses (this box has 2 physical CPUs)
NUMA node0 CPU(s):     0,2,4,6,8,10,12,14,16,18,20,22
NUMA node1 CPU(s):     1,3,5,7,9,11,13,15,17,19,21,23
# this demo server has two physical CPUs with 12 cores each, 24 cores in total

Set nginx's worker processes and CPU affinity to automatic

worker_processes  auto;
worker_cpu_affinity auto;

[root@web01 ~]# ps -eo pid,args,psr|grep [n]ginx
  1242 nginx: master process /usr/   2
  1243 nginx: worker process         0
  1244 nginx: worker process         1
  1245 nginx: worker process         2
  1246 nginx: worker process         3
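
With worker_cpu_affinity auto, each worker is pinned to its own core; taskset can confirm the CPU list a worker may run on (PID 1243 is taken from the ps output above):

[root@web01 ~]# taskset -cp 1243    # prints the CPU affinity list of that worker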
Manual binding (not recommended):
    # First scheme: one bitmask per worker
    worker_processes 24;
    worker_cpu_affinity 000000000001 000000000010 000000000100 000000001000 000000010000 000000100000 000001000000 000010000000 000100000000 001000000000 010000000000 100000000000;
    # Second scheme (rarely used)
    worker_processes 2;
    worker_cpu_affinity 101010101010 010101010101;

Nginx general configuration

[root@nginx ~]# cat nginx.conf
user www;                   # user that the nginx worker processes run as
worker_processes auto;      # match the number of CPU cores
worker_cpu_affinity auto;   # CPU affinity

error_log /var/log/nginx/error.log warn;    # error log
pid /run/nginx.pid;
worker_rlimit_nofile 35535;     # file descriptors each worker can open; raise above 10k, 20-30k under heavy load

events {
    use epoll;                  # use the efficient epoll event model
    worker_connections 10240;   # connections each worker can handle, 10240 x [CPU cores] in total
}

http {
    include             mime.types;
    default_type        application/octet-stream;
    charset utf-8;      # use the utf-8 character set everywhere

    # log format
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;    # access log

    server_tokens off;          # do not show the nginx version number to clients
    client_max_body_size 200m;  # upload size limit

    # efficient file transfer; recommended for static resource servers
    sendfile            on;
    tcp_nopush          on;
    # send data immediately; recommended for dynamic services, requires keepalive
    tcp_nodelay         on;
    keepalive_timeout   65;

    # gzip compression
    gzip on;
    gzip_disable "MSIE [1-6]\.";
    gzip_http_version 1.1;
    gzip_comp_level 2;
    gzip_buffers 16 8k;
    gzip_min_length 1024;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript image/jpeg;

    # virtual hosts
    include /etc/nginx/conf.d/*.conf;
}
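
After editing nginx.conf, validate the syntax and reload without dropping connections:

[root@nginx ~]# nginx -t && systemctl reload nginx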

Nginx security and optimization summary

1. CPU affinity, the number of worker processes, and the number of files each worker process may open
2. The epoll event model and the maximum number of connections per worker process
3. Efficient file reading with sendfile and tcp_nopush
4. Real-time file transmission with tcp_nodelay
5. Enable TCP long (keep-alive) connections and set the keep-alive timeout
6. Enable gzip compression for file transfers
7. Enable browser caching of static files with expires
8. Hide the nginx version number
9. Forbid access by IP address and malicious domain resolution; allow access only via the domain name
10. Anti-hotlinking configuration and cross-origin access
11. Anti-DDoS / CC attacks: limit concurrent connections and HTTP requests per single IP
12. Friendly (custom) nginx error pages
13. HTTPS with optimized encrypted transmission
14. nginx proxy_cache, fastcgi_cache, uwsgi_cache caching (also squid, varnish)

PHP optimization

1. The PHP configuration file /etc/php.ini: mainly adjust logging, file uploads, disable dangerous functions, hide the version number, and so on.

#;;;;;;;;;;;;;;;;;

Error logging ;  # error log settings

#;;;;;;;;;;;;;;;;;
expose_php = Off                        # hide the PHP version
display_errors = Off                    # do not display errors on screen
error_reporting = E_ALL                 # record every PHP error
log_errors = On                         # enable the error log
error_log = /var/log/php_error.log      # where the error log is written
date.timezone = Asia/Shanghai           # time zone, default is PRC

#;;;;;;;;;;;;;;;

File Uploads ;    # file upload settings

#;;;;;;;;;;;;;;;
file_uploads = On           # allow file uploads
upload_max_filesize = 300M  # maximum size of an uploaded file
post_max_size = 300M        # maximum data a client may send in a single POST request
max_file_uploads = 20       # maximum number of files uploaded at the same time
memory_limit = 128M         # maximum memory each script may use

[Session]       # session sharing
session.save_handler = redis
session.save_path = "tcp://172.16.1.51:6379"    # if Redis requires a password, append it as a query parameter, e.g. "?auth=<password>"

https://blog.csdn.net/unixtech/article/details/53761832
# Disable dangerous PHP functions (depends on the actual situation; agree with the developers)
disable_functions = chown,chmod,pfsockopen,phpinfo
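
After changing php.ini, restart php-fpm and spot-check the values. The sketch below uses the PHP CLI, which may read a different php.ini than FPM, so treat it only as a rough check:

[root@web01 ~]# systemctl restart php-fpm
[root@web01 ~]# php -r 'var_dump(ini_get("expose_php"), ini_get("memory_limit"), ini_get("disable_functions"));'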

2. The php-fpm process management configuration file /etc/php-fpm.conf

# Part 1: included pool configuration
;include=etc/fpm.d/*.conf

# Part 2: global settings
[global]
;pid = /var/log/php-fpm/php-fpm.pid     # where the pid file is stored
;error_log = /var/log/php-fpm/php-fpm.log   # where the error log is stored
;log_level = error  # log level: alert, error, warning, notice, debug
rlimit_files = 65535     # number of files a php-fpm process can open
;events.mechanism = epoll # use the epoll event model to handle requests

# Part 3: process pool definition
[www]       # pool name
user = www  # user the processes run as
group = www # group the processes run as
;listen = /dev/shm/php-fpm.sock # listen on a local socket file
listen = 127.0.0.1:9000         # listen on local TCP port 9000
;listen.allowed_clients = 127.0.0.1 # IPs allowed to reach the FastCGI processes; any = unrestricted

pm = dynamic                    # adjust the number of php-fpm processes dynamically
pm.max_children = 512           # maximum number of php-fpm processes
pm.start_servers = 32           # number of php-fpm processes started initially
pm.min_spare_servers = 32       # minimum number of idle php-fpm processes
pm.max_spare_servers = 64       # maximum number of idle php-fpm processes
pm.max_requests = 1500          # number of requests a process serves before being recycled
pm.process_idle_timeout = 15s;
pm.status_path = /phpfpm_status # enable the PHP status page

# Part 4: logging
php_flag[display_errors] = off
php_admin_value[error_log] = /var/log/phpfpm_error.log
php_admin_flag[log_errors] = on

# Slow log
request_slowlog_timeout = 5s    # PHP scripts that run for more than 5s
slowlog = /var/log/php_slow.log # are recorded in this file
Example slow-log entry:
[21-Nov-2013 14:30:38] [pool www] pid 11877
script_filename = /usr/local/lnmp/nginx/html/www.quancha.cn/www/fyzb.php
[0xb70fb88c] file_get_contents() /usr/local/lnmp/nginx/html/www.quancha.cn/www/fyzb.php:2
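
The pm.max_children value above should be sized against the machine's memory. A rough sketch to estimate the average memory of a php-fpm process on a running server (available memory divided by this number gives an upper bound for max_children):

[root@nginx ~]# ps -o rss= -C php-fpm | awk '{sum+=$1; n++} END {if (n) printf "avg %.0f MB per php-fpm process (%d processes)\n", sum/n/1024, n}'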

3. PHP status page: pm.status_path = /phpfpm_status enables php-fpm's built-in status page.

Querying the status page:

[root@nginx ~]# curl http://127.0.0.1/phpfpm_status
pool:                 www           # fpm pool name, usually www
process manager:      dynamic       # php-fpm processes are managed dynamically
start time:           05/Jul/2016   # start time; changes if php-fpm is restarted
start since:          409           # how long php-fpm has been running
accepted conn:        22            # connections accepted by this pool
listen queue:         0     # requests waiting in the queue; if this is not 0, increase the number of FPM processes
max listen queue:     0     # largest number of requests ever waiting in the queue
listen queue len:     128   # length of the listen queue
idle processes:       4     # number of idle php-fpm processes
active processes:     1     # number of active php-fpm processes
total processes:      5     # total number of php-fpm processes
max active processes: 2     # maximum number of active processes since FPM started
max children reached: 0     # how many times the process limit was hit; if not 0, pm.max_children is too small and should be raised

4. PHP-FPM production profile (4-core 16G / 4-core 32G)

[root@nginx ~]# cat /etc/php-fpm.d/www.conf
[global]
pid = /var/run/php-fpm.pid

error_log = /var/log/php-fpm.log
log_level = warning
rlimit_files = 655350
events.mechanism = epoll

[www]
user = nginx
group = nginx
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1

pm = dynamic
pm.max_children = 512
pm.start_servers = 32
pm.min_spare_servers = 32
pm.max_spare_servers = 64
pm.process_idle_timeout = 15s;
pm.max_requests = 2048
pm.status_path = /phpfpm_status

# error log for the php www pool
php_flag[display_errors] = off
php_admin_value[error_log] = /var/log/php/php-www.log
php_admin_flag[log_errors] = on

# PHP slow-request log
request_slowlog_timeout = 5s
slowlog = /var/log/php-slow.log
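
After adjusting the pool, test the configuration and reload (php-fpm -t only validates syntax; the service name may differ between distributions):

[root@nginx ~]# php-fpm -t && systemctl reload php-fpm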

Summary

nginx

    Hardware level     proxying mainly consumes CPU and memory; static serving mainly consumes disk I/O
    Network level      bandwidth, transfer rate, whether packets are dropped
    System level       tune the file descriptors; reuse TIME_WAIT connections
    Application level  nginx as a proxy: keepalive long connections
    Service level      nginx serving static content: browser caching, file transfer, compression, anti-hotlinking, cross-origin access, CPU affinity
                       nginx as a cache: proxy_cache, fastcgi_cache, uwsgi_cache
                       nginx for security: nginx + lua to build a WAF firewall

php

    php.ini        error logging, upload size limits, session-sharing configuration, disabling unnecessary functions (agree with the developers)
    php-fpm        listen address, dynamic process tuning, logging
    PHP status     php-fpm's own monitoring status information
    PHP slow log   when, which process, which file, which function and which line hit the timeout


Origin www.cnblogs.com/1naonao/p/11470403.html