Nginx reverse proxy + Go service practice

Foreword
In back-end development we often expose API interfaces to a front end or to another platform. This article walks through putting Nginx in front of such a back-end service as a reverse proxy.

Nginx:
Nginx is a high-performance HTTP and reverse proxy server that also provides IMAP/POP3/SMTP services. It can act as a reverse proxy, a forward proxy, a static file server, and so on.

Load balancing algorithms:
upstream supports four load-balancing scheduling algorithms:

weight: requests are handed to the back-end servers one by one in turn (plain round-robin is the default), and a server with a higher weight receives proportionally more requests; a small Go sketch of this idea follows the list
url_hash: requests are distributed according to a hash of the requested URL, so the same URL always reaches the same back-end server
ip_hash: requests are distributed according to a hash of the client IP, so a client with a fixed IP always reaches the same back-end server, which also helps keep a session on one node
fair: requests are distributed based on page size and load time, giving priority to the server with the shortest response time
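
To make the round-robin/weight idea concrete, here is a minimal Go sketch of a simple weighted round-robin selector over the three servers used later in this article. It only illustrates the scheduling idea; Nginx itself uses a smoother weighted algorithm, so this is not its actual implementation.

package main

import "fmt"

// backend is one upstream server together with its round-robin weight.
type backend struct {
	addr   string
	weight int
}

// newRing expands each backend into as many slots as its weight,
// a naive form of weighted round-robin (Nginx uses a smoother variant).
func newRing(backends []backend) []string {
	var ring []string
	for _, b := range backends {
		for i := 0; i < b.weight; i++ {
			ring = append(ring, b.addr)
		}
	}
	return ring
}

func main() {
	ring := newRing([]backend{
		{addr: "192.168.0.1:10080", weight: 1},
		{addr: "192.168.0.2:10080", weight: 1},
		{addr: "192.168.0.3:10080", weight: 1},
	})

	// six incoming requests cycle through the three servers in turn
	for i := 0; i < 6; i++ {
		fmt.Println("request", i, "->", ring[i%len(ring)])
	}
}
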
Reverse proxy:
client -> proxy <-> server

For example:

Renting an apartment in Beijing: we are the client, the real-estate agency (for instance 我爱我家) is the proxy, and we may never actually see the owner, who plays the role of the server.

Throughout the rental process we know who the agent is, but we do not necessarily know who the owner is.

Three servers:

server 192.168.0.1
server 192.168.0.2
server 192.168.0.3

webApi service:
The webApi service is an HTTP service written in Go with the gin framework; once started it listens on port 10080.

/usr/local/brand/bin/webApi --config-dir=/usr/local/webApi/config api
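
The article does not show the service code itself, so here is a minimal sketch of what such a gin service might look like. The route /d/a and the page/page_size parameters mirror the request made later in the article; the handler body and the returned fields are illustrative assumptions, not the real API.

package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()

	// /d/a mirrors the endpoint queried later in the article;
	// the response fields here are placeholders, not the real API
	r.GET("/d/a", func(c *gin.Context) {
		page := c.DefaultQuery("page", "1")
		pageSize := c.DefaultQuery("page_size", "20")
		c.JSON(http.StatusOK, gin.H{
			"page":      page,
			"page_size": pageSize,
			"data":      []string{},
		})
	})

	// listen on port 10080, the port the Nginx upstream forwards to
	r.Run(":10080")
}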

Nginx configuration:


# number of worker processes, usually set to the number of CPU cores
worker_processes  24;

events {
    use epoll;
    # maximum number of concurrent connections per worker process
    worker_connections  65535;
}

http {
    # other configuration omitted
    include vhost/api.test.com.conf;
}

api.test.com.conf configuration:
server_name is api.test.com, it listens on port 80, and it forwards all requests to http://192.168.0.*:10080.

cat /usr/local/nginx/conf/vhost/api.test.com.conf

# load balancing configuration; requests are distributed to the servers round-robin
upstream api.test.com {
    # each server entry is a back-end node to proxy to; weight is its round-robin weight
    server 192.168.0.1:10080 weight=1;
    server 192.168.0.2:10080 weight=1;
    server 192.168.0.3:10080 weight=1;
}

server {
    listen       80;
    server_name  api.test.com;

    access_log  /data/log/nginx/api.test.com.access.log;
    error_log  /data/log/nginx/api.test.com.error.log;

    location / {
        # note for proxy_pass: if the URL ends with "/", it is treated as an absolute path; otherwise it is treated as a relative path
        proxy_pass http://api.test.com;
    }
}

After sending the request http://api.test.com/d/a?page=1&page_size=20, the response comes back with 5 records, which shows that the request went through the proxy successfully.
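
If api.test.com is not yet in DNS, a quick way to verify the proxy is to send the request to the Nginx machine directly and set the Host header yourself. Below is a minimal Go client sketch of that check; the Nginx host IP used here is only an assumed placeholder.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// 192.168.0.100 stands in for the Nginx host (an assumption for illustration);
	// replace it with the real proxy address, or use http://api.test.com/... once DNS is set up
	req, err := http.NewRequest("GET", "http://192.168.0.100/d/a?page=1&page_size=20", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "api.test.com" // matched against server_name in the vhost

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}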

The Nginx access log is as follows:

==> /data/log/nginx/api.test.com.access.log <==
192.168.0.123 - - [22/Apr/2020:22:01:01 +0800] "GET /d/a?page=1&page_size=20 HTTP/1.1" 200 656 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.92 Safari/537.36"

We have successfully accessed our Nginx + Go service.

Summary
This article walked through putting Nginx in front of back-end Go services as a reverse proxy; the setup is simple and practical.

Origin www.cnblogs.com/guichenglin/p/12760698.html