Java Web, Part: Tomcat & Nginx

The Java Web series index is posted here: Java Web Knowledge Summary


Tomcat

Tomcat's top architecture


Tomcat has two core components: the Connector and the Container.

One or more Connectors together with a Container form a Service, and the entire lifecycle of a Service is controlled by the Tomcat Server.

The Connector is responsible for accepting requests, the Container handles those requests, and the Service associates the two and initializes the components beneath it. The lifecycle of every component is controlled through the Lifecycle interface.

The Server provides an interface that lets other programs access the collection of Services, and it maintains the lifecycle of every Service it contains: how each is initialized, how it is shut down, and how external callers find the Service they want to access.

Connector Architecture Analysis

The Connector uses a ProtocolHandler to process requests; different connection types are represented by different ProtocolHandler implementations. For example, Http11Protocol handles connections over ordinary Sockets, while Http11NioProtocol handles connections over NIO Sockets.

A ProtocolHandler contains three components: Endpoint, Processor, and Adapter.

(1) The Endpoint handles the underlying network Socket connection; the Processor wraps the Socket data received by the Endpoint into a Request; and the Adapter hands the Request to the Container for concrete processing.

(2) Because the Endpoint handles the underlying network Socket connection, it is the piece that implements the TCP/IP layer, while the Processor implements the HTTP protocol and the Adapter adapts the request to the Servlet container for concrete processing.

(3) The abstract Endpoint implementation AbstractEndpoint defines two inner classes, Acceptor and AsyncTimeout, and a Handler interface. The Acceptor listens for requests, AsyncTimeout checks asynchronous requests for timeouts, and the Handler processes the received Socket, internally calling the Processor to do the work.
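The Endpoint → Processor → Adapter hand-off can be sketched in plain Java. This is only an illustrative model, not Tomcat's real API; the class shapes and method names here are assumptions made for brevity:

```java
// Minimal sketch of the Connector pipeline: the Endpoint accepts raw input,
// the Processor parses it into a Request, and the Adapter hands it to the
// Container. All names are illustrative; real Tomcat is far more involved.
final class Request {
    final String uri;
    Request(String uri) { this.uri = uri; }
}

interface Adapter {                 // bridges the Connector and the Container
    String service(Request req);
}

final class Processor {             // parses protocol bytes into a Request
    private final Adapter adapter;
    Processor(Adapter adapter) { this.adapter = adapter; }

    String process(String rawRequestLine) {
        // e.g. "GET /index.html HTTP/1.1" -> uri "/index.html"
        String uri = rawRequestLine.split(" ")[1];
        return adapter.service(new Request(uri));
    }
}

final class Endpoint {              // owns the "socket"; delegates to Processor
    private final Processor processor;
    Endpoint(Processor processor) { this.processor = processor; }

    String receive(String rawRequestLine) {
        return processor.process(rawRequestLine);
    }
}
```

Wiring the three together, `new Endpoint(new Processor(adapter))` lets the Endpoint receive a raw request line and get back the Container's response through the Adapter.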

Container Architecture Analysis


The roles of the four child containers are:

(1) Engine: the engine, used to manage multiple sites; a Service can have at most one Engine;
(2) Host: represents a site, also called a virtual host; new sites can be added by configuring a Host;
(3) Context: represents an application, i.e. a normally developed web application, corresponding to a WEB-INF directory and the web.xml file beneath it;
(4) Wrapper: each Wrapper encloses one Servlet;
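To make the containment concrete, here is a toy Java model of routing from Engine to Host down to the Servlet; Context and Wrapper are collapsed into a path-to-servlet map for brevity, and none of this is Tomcat's actual API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of Engine -> Host -> Context -> Wrapper containment: routing picks
// a virtual host by name, then a web app by context path, and finally the
// Servlet. Purely illustrative; all names are made up.
final class Engine {
    private final Map<String, Host> hosts = new HashMap<>();
    void addHost(String name, Host h) { hosts.put(name, h); }

    String route(String hostName, String path) {
        Host host = hosts.get(hostName);            // pick the virtual host
        return host == null ? "404" : host.route(path);
    }
}

final class Host {
    // contextPath -> servlet name (stand-in for Context + Wrapper)
    private final Map<String, String> contexts = new HashMap<>();
    void addContext(String contextPath, String servletName) {
        contexts.put(contextPath, servletName);
    }
    String route(String path) {
        for (Map.Entry<String, String> e : contexts.entrySet())
            if (path.startsWith(e.getKey()))
                return e.getValue();                // the matched Servlet
        return "404";
    }
}
```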

Reference:
Four diagrams to understand the Tomcat system architecture
Introduction to Tomcat's system architecture and workflow
How Servlets work


Nginx

What is the C10K problem

Origin of the C10K problem

As the Internet spread, application user bases grew geometrically, and server performance problems appeared. The original servers were built on the process/thread model: every new TCP connection required allocating a process or thread. With ten thousand concurrent connections (C10K) you would need ten thousand processes, which a single machine clearly cannot bear. How to break through single-machine performance limits is therefore a problem every high-performance network program must face; these limits and problems came to be called the C10K problem. Dan Kegel was the first to collect and summarize them, analyze the system-level causes, and propose solutions.

The essence of the C10K problem

The C10K problem is essentially an operating-system problem. In the Web 1.0/2.0 era, the traditional synchronous blocking I/O model handled each request in its own process or thread. As the number of processes or threads grows, data is copied frequently (with buffered I/O the kernel copies data into user-process space), and blocking plus process/thread context switching consume ever more resources, eventually overwhelming the operating system. That is the essence of the C10K problem.
Clearly, the key to solving the C10K problem is to minimize the CPU resources consumed per connection.

Solutions to the C10K problem

From the network-programming point of view, there are two main approaches:

  • Assign each connection a separate thread/process
  • Handle multiple connections with a single thread/process
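The second idea, one thread handling many connections, is what epoll/kqueue-based servers such as Nginx use. A minimal sketch with Java NIO's `Selector` (which sits on top of epoll on Linux); this echo server is an illustration, not production code:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

// One thread multiplexing many connections with java.nio.Selector --
// the "single thread, many connections" answer to C10K.
final class EchoServer {
    static Selector start(int port) throws IOException {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress("127.0.0.1", port));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        return selector;
    }

    // Process whatever is ready; returns after one selection round.
    static void poll(Selector selector) throws IOException {
        selector.select(200);                    // wait up to 200 ms
        Iterator<SelectionKey> it = selector.selectedKeys().iterator();
        while (it.hasNext()) {
            SelectionKey key = it.next();
            it.remove();
            if (key.isAcceptable()) {            // new connection: register it
                SocketChannel c = ((ServerSocketChannel) key.channel()).accept();
                c.configureBlocking(false);
                c.register(selector, SelectionKey.OP_READ);
            } else if (key.isReadable()) {       // data ready: echo it back
                SocketChannel c = (SocketChannel) key.channel();
                ByteBuffer buf = ByteBuffer.allocate(1024);
                int n = c.read(buf);
                if (n < 0) { c.close(); continue; }
                buf.flip();
                while (buf.hasRemaining()) c.write(buf);
            }
        }
    }
}
```

A single `poll` call handles every connection that is ready, so one thread can serve thousands of sockets instead of dedicating one thread per socket.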

Excerpt:
C10K problem

Forward Proxy and Reverse Proxy

Forward proxy:
With a forward proxy, the user actually wants data from server B but cannot reach it directly, so the request goes through a proxy: user A accesses only the proxy server, and the proxy fetches the data from server B on the user's behalf. The key point is that the user knows exactly which server they want to visit. The most typical real-life example is "climbing over the wall": a blocked site is reached by routing the visit through a proxy server.
Reverse proxy:
With a reverse proxy, the client accesses what it believes is the server, without knowing which actual server will handle the request; to the client it feels like talking directly to the proxy. The proxy acts as a gateway: when it receives the user's request, it forwards it to one of the backend servers chosen by some algorithm. From the user's perspective, only the proxy server was ever visited. The typical example is load balancing.


Reference:
A discussion of forward and reverse proxies

Several commonly used Nginx load-balancing strategies

Strategy                  Definition
round-robin (default)     requests are distributed to the backends in turn
weight                    weighted distribution
ip_hash                   distribution by client IP
least_conn                least connections
fair (third party)        distribution by response time
url_hash (third party)    distribution by requested URL
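For illustration, the strategies above map onto nginx configuration roughly as follows. This is a sketch: the upstream name and backend addresses are placeholders, not taken from the article.

```nginx
# Weighted round-robin: 192.168.0.1 receives about twice the traffic.
upstream backend {
    # ip_hash;                       # uncomment to pin each client IP to one backend
    server 192.168.0.1:8080 weight=2;
    server 192.168.0.2:8080 weight=1;
    server 192.168.0.3:8080 backup;  # used only when the others are unavailable
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;   # hand the request to the upstream group
    }
}
```

`least_conn` and the third-party `fair`/`url_hash` modules are enabled the same way, by naming the strategy inside the upstream block.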

Reference:
Six Nginx server load-balancing strategies

Extension: Web load-balancing strategies

Several load balancing methods

  • HTTP redirect load balancing
  • DNS load balancing
  • Reverse proxy load balancing

Load-balancing components

  • apache
  • nginx
  • lvs
  • HAProxy
  • keepalived

Several common load balancing algorithms

  • 1. Round-robin
  • 2. Weighted round-robin
  • 3. Random
  • 4. Weighted random
  • 5. Hash method: compute a hash from the client's IP or a request "key", then take it modulo the number of nodes
  • 6. Least connections
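Two of these algorithms are small enough to sketch in Java: smooth weighted round-robin (the variant nginx uses for `weight`) and the hash method from item 5. The node names and class shapes are illustrative:

```java
import java.util.List;

// Smooth weighted round-robin plus simple hash-mod selection.
// Node names are illustrative.
final class WeightedRoundRobin {
    private final List<String> nodes;
    private final int[] weight;     // configured weights
    private final int[] current;    // running "current weight" per node
    private final int total;

    WeightedRoundRobin(List<String> nodes, int[] weight) {
        this.nodes = nodes;
        this.weight = weight;
        this.current = new int[weight.length];
        int t = 0;
        for (int w : weight) t += w;
        this.total = t;
    }

    String next() {
        int best = 0;
        for (int i = 0; i < weight.length; i++) {
            current[i] += weight[i];             // every node gains its weight
            if (current[i] > current[best]) best = i;
        }
        current[best] -= total;                  // the winner pays the total
        return nodes.get(best);
    }

    // Hash method: hash the client key, then take it modulo the node count.
    static String byHash(List<String> nodes, String clientKey) {
        int idx = Math.floorMod(clientKey.hashCode(), nodes.size());
        return nodes.get(idx);
    }
}
```

With weights {5, 1, 1} the smooth variant yields the sequence a, a, b, a, c, a, a: the heavy node is spread out rather than hit five times in a row.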

More:
A solution for high concurrency: load balancing
A summary of load balancing, clustering, and high-availability (HA) solutions for web applications
Everything about load balancing: summary and thoughts

Nginx architecture and how it works

Nginx architecture

In general, the Nginx architecture works like this:
1. After Nginx starts, there is one master process; the master performs a series of setup steps and then spawns one or more worker processes.
2. When handling client requests for dynamic sites, Nginx also communicates with backend servers: it forwards the received web request to a backend through a proxy, and the backend processes the data and builds the response.
3. To respond to requests more efficiently and reduce network pressure, Nginx uses a caching mechanism that stores historical response data locally, guaranteeing fast access to cached files.

Worker Process

The worker process's main tasks are:

  • Accept client requests;
  • Pass the request through each module for filtering and processing;
  • Make I/O calls to fetch the response data;
  • Communicate with backend servers and receive their processing results;
  • Cache data;
  • Respond to the client;

Process interaction

When Nginx runs in the master-worker model, there are two kinds of interaction: between the master process and the worker processes, and between worker processes. Both rely on the pipe mechanism.
1. Master-worker interaction
This pipe is different from an ordinary pipe: it is a one-way channel from the master process to a worker process, carrying the instructions the master sends, the worker process's ID, and so on. The master itself communicates with the outside world through signals.
2. Worker-worker interaction
This interaction is essentially the same as master-worker interaction, but it goes through the master process. Worker processes are isolated from one another, so when worker W1 needs to send an instruction to worker W2, it first finds W2's process ID and then writes the instruction into the channel pointing to W2; W2 takes the appropriate action when it receives the signal.
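As an in-process analogy for this pipe mechanism (nginx itself uses operating-system pipes between separate processes), `java.nio.channels.Pipe` gives the same one-way sink-to-source shape; the "reload" command below is made up for the demo:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.charset.StandardCharsets;

// One-way channel analogous to the master->worker pipe: the sender owns the
// sink end, the receiver owns the source end. In-process analogy only;
// nginx uses real OS pipes between the master and worker processes.
final class PipeDemo {
    static String sendAndReceive(String command) throws IOException {
        Pipe pipe = Pipe.open();

        // "Master" writes an instruction into the sink end.
        pipe.sink().write(ByteBuffer.wrap(command.getBytes(StandardCharsets.UTF_8)));

        // "Worker" reads the instruction from the source end.
        ByteBuffer buf = ByteBuffer.allocate(256);
        int n = pipe.source().read(buf);
        pipe.sink().close();
        pipe.source().close();
        return new String(buf.array(), 0, n, StandardCharsets.UTF_8);
    }
}
```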

Nginx modules

  • (1) Core modules:
    the core modules are indispensable while the Nginx server is running, much like an operating-system kernel. They provide Nginx's most basic core services, such as process management, permission control, and error logging.

  • (2) Standard HTTP modules:
    the standard HTTP modules support standard HTTP functionality.

  • (3) Optional HTTP modules:
    the optional HTTP modules mainly extend the standard HTTP functionality so that Nginx can handle special services.

  • (4) Mail service modules:
    the mail service modules mainly support Nginx's mail services.

  • (5) Third-party modules:
    third-party modules extend Nginx so that application developers can add whatever functionality they need.

Reference:
A brief overview of the Nginx server architecture
An introduction to the Nginx architecture

Nginx configuration

# run as user
user nobody;
# number of worker processes; usually set equal to the number of CPU cores
worker_processes  1;

# global error log and PID file
#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

# event model and connection limit
events {
    # epoll is one form of multiplexed I/O (I/O multiplexing);
    # it is only available on Linux kernels 2.6 and later, and greatly improves nginx performance
    use   epoll;

    # maximum number of concurrent connections per worker process
    worker_connections  1024;
 
    # total concurrency is the product of worker_processes and worker_connections,
    # i.e. max_clients = worker_processes * worker_connections
    # with a reverse proxy configured, max_clients = worker_processes * worker_connections / 4
    # dividing by 4 in the reverse-proxy case is essentially a rule of thumb
    # under these conditions, a normal Nginx server can handle at most 4 * 8000 = 32000 connections
    # the right worker_connections value also depends on the amount of physical memory:
    # since concurrency is I/O bound, max_clients must stay below the maximum number of files the system can open,
    # and that maximum scales with memory; a machine with 1 GB of RAM can open roughly 100,000 files
    # let's check how many file handles a VPS with 360 MB of RAM can open:
    # $ cat /proc/sys/fs/file-max
    # output: 34336
    # 32000 < 34336, so the total number of concurrent connections stays below the system's file-handle limit, within what the OS can bear
    # therefore, set worker_connections according to the number of worker_processes and the system's maximum open files,
    # keeping the total concurrency below the OS's open-file limit
    # in essence this means configuring according to the host's physical CPUs and memory
    # of course, the theoretical total may differ from reality, since other processes on the host also consume resources
    # ulimit -SHn 65535
 
}
 
 
http {
    # MIME types, defined in the mime.types file
    include    mime.types;
    default_type  application/octet-stream;
    # log format
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
 
    access_log  logs/access.log  main;
 
    # the sendfile directive controls whether nginx uses the sendfile() syscall (zero copy) to send files;
    # for typical applications set it to on;
    # for heavy disk-I/O workloads such as downloads, it can be set to off
    # to balance disk and network I/O speed and reduce system load.
    sendfile     on;
    #tcp_nopush     on;
 
    # keep-alive timeout
    #keepalive_timeout  0;
    keepalive_timeout  65;
    tcp_nodelay     on;
 
    # enable gzip compression
    gzip  on;
    gzip_disable "MSIE [1-6].";
 
    # request buffers
    client_header_buffer_size    128k;
    large_client_header_buffers  4 128k;
 
 
    # virtual host configuration
    server {
        # listen on port 80
        listen    80;
        # serve requests addressed to www.nginx.cn
        server_name  www.nginx.cn;
 
        # default document root for this server
        root html;
 
        # access log for this virtual host
        access_log  logs/nginx.access.log  main;
 
        # default location
        location / {
            
            # names of the index files
            index index.php index.html index.htm;   
 
        }
 
        # error pages
        error_page   500 502 503 504 /50x.html;
        location = /50x.html {
        }
 
        # static files, served by nginx itself
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            
            # expire after 30 days; static files change rarely, so the expiry can be large,
            # or set it smaller if they are updated frequently.
            expires 30d;
        }
 
        # forward all PHP script requests to FastCGI, using the default FastCGI configuration
        location ~ .php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
 
        # deny access to .ht* files
        location ~ /.ht {
            deny all;
        }
 
    }
}

Reference:
Basic nginx.conf configuration and optimization
Nginx configuration in detail


Origin blog.csdn.net/zangdaiyang1991/article/details/92139088