Hands-on Docker: build an nginx reverse proxy for Tomcat, learning link and docker-compose

Nginx achieves load balancing by configuring nginx.conf. The entire content of nginx.conf is as follows:

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    #include /etc/nginx/conf.d/*.conf;

    upstream tomcat_client {
         server t01:8080 weight=1;
         server t02:8080 weight=1;
    }

    server {
        server_name "";
        listen 80 default_server;
        listen [::]:80 default_server ipv6only=on;

        location / {
            proxy_pass http://tomcat_client;
            proxy_redirect default;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

As you can see, the server block is what makes nginx listen on port 80, and "proxy_pass http://tomcat_client;" hands request processing over to tomcat_client, which is configured as follows:

upstream tomcat_client {
    server t01:8080 weight=1;
    server t02:8080 weight=1;
}

That is, requests are handed over to the two servers at t01:8080 and t02:8080, each with a weight of 1. (What exactly are t01:8080 and t02:8080? They are aliases, which correspond to the aliases given in the --link parameter; we will come back to this when we use --link.)
That covers the nginx configuration. Next, let's see how to build the nginx image, i.e. the content of the Dockerfile:

# First docker file from lz
# VERSION 0.0.1
# Author: lz

#base image
FROM nginx

#author
MAINTAINER Lizhong <[email protected]>

#define working directory
ENV WORK_PATH /etc/nginx

#define conf file name
ENV CONF_FILE_NAME nginx.conf

#delete the original configuration file
RUN rm $WORK_PATH/$CONF_FILE_NAME

#Copy the new configuration file
COPY ./$CONF_FILE_NAME $WORK_PATH/

#Assign read permission to the shell file
RUN chmod a+r $WORK_PATH/$CONF_FILE_NAME

As shown above, the Dockerfile is very simple: it deletes the nginx.conf that ships with the base image and replaces it with the file we just wrote.

Now create a new directory named image_nginx; it contains only two files, nginx.conf and the Dockerfile, as shown below:
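
If you want to recreate that layout from the terminal rather than from a screenshot, a minimal sketch (just commands, nothing beyond the two files named above) looks like this:

mkdir image_nginx
cd image_nginx
# copy the nginx.conf and Dockerfile shown above into this directory, then check:
ls
# expected output: Dockerfile  nginx.conf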

Open a terminal, enter this directory, and run:

docker build -t nginx:0.0.1 .

Once the build finishes, run docker images and you will see the new image, as shown below:
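
If you prefer checking from the command line, a quick way to confirm the build is to list only the images in the nginx repository (the new 0.0.1 tag should appear alongside the base image):

docker images nginx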

For the tomcat image, use the Tomcat 8 image customized with a Dockerfile in the previous article, "Hands-on Docker: write a Dockerfile to customize a tomcat8 image and deploy web applications online". The benefit of that image is that, when verifying load balancing after deployment, the application package can be deployed directly into the tomcat container through the maven plugin.

OK, now that we have the nginx and tomcat images, the next step is to run one nginx container and two tomcat containers to achieve load balancing. There are two ways to run these three containers:
1. Execute three docker run commands to start the three containers one by one;
2. Use docker-compose to start multiple containers in one batch.

Let's try the first method first: 
1. Start the first tomcat container, name it tomcat001, and enter in the terminal:

docker run --name=tomcat001 -p 8081:8080 -e TOMCAT_SERVER_ID=tomcat_server_001 -idt tomcat:0.0.1

2. Start the second tomcat container, name it tomcat002, and enter in the terminal:

docker run --name=tomcat002 -p 8082:8080 -e TOMCAT_SERVER_ID=tomcat_server_002 -idt tomcat:0.0.1

3. Run docker ps and you can see that both containers have started. Each tomcat container exposes its own port 8080, mapped to ports 8081 and 8082 on the host, which means we can reach the Tomcat in each container at 192.168.1.129:8081 and 192.168.1.129:8082. Open a browser and try it:
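
If you would rather check from the terminal than the browser, a couple of curl requests against the mapped ports do the same job (192.168.1.129 is the host ip used throughout this walkthrough; substitute your own):

curl -I http://192.168.1.129:8081/
curl -I http://192.168.1.129:8082/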

 

Now deploy the war package to Tomcat through maven by adjusting the deployment parameters in pom.xml, as shown below:
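
The original screenshot is not reproduced here, but the relevant part of pom.xml is the tomcat7-maven-plugin configuration. A minimal sketch follows; the url points at the Tomcat manager's text interface on the mapped port, while the username, password and context path are placeholders that must match your own tomcat-users.xml and application:

<plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
    <version>2.2</version>
    <configuration>
        <!-- deployment target: Tomcat manager text interface on the mapped host port -->
        <url>http://192.168.1.129:8081/manager/text</url>
        <!-- placeholder credentials: must match a manager-script user in tomcat-users.xml -->
        <username>tomcat</username>
        <password>tomcat</password>
        <!-- context path of the demo application -->
        <path>/loadbalancedemo</path>
    </configuration>
</plugin>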

After changing the port in the <url> above to 8081, execute the following in the directory where pom.xml is located:

mvn clean package -U -Dmaven.test.skip=true tomcat7:redeploy

Then change the port to 8082 and run the same command, so that the war package is deployed to both tomcat containers. Now open http://localhost:8081/loadbalancedemo/hello in a browser and you will see the page below:

The effect of visiting http://localhost:8082/loadbalancedemo/hello is as follows:

Enter the following command in the terminal to start nginx:

docker run --name=ngx001 --link=tomcat001:t01 --link=tomcat002:t02 -p 80:80 -idt nginx:0.0.1

Here, let's focus on the parameter --link=tomcat001:t01. --link means that the container started by this command (ngx001) needs to establish a connection to another container named tomcat001, and the "t01" in "tomcat001:t01" is the alias that tomcat001 goes by once the connection is established. You can also understand it as: after ngx001 starts, a record is added to its /etc/hosts file whose ip is tomcat001's ip and whose name is t01.

Enter docker exec -it ngx001 /bin/bash to get a shell inside ngx001, then run cat /etc/hosts and you will see the following:

root@a4bedc55e938:/# cat /etc/hosts
127.0.0.1    localhost
::1    localhost ip6-localhost ip6-loopback
fe00::0    ip6-localnet
ff00::0    ip6-mcastprefix
ff02::1    ip6-allnodes
ff02::2    ip6-allrouters
172.18.0.2        t01    c4gfhf5hjmfd tomcat001
172.18.0.3        t02 f34d4563sd6j tomcat002
172.18.0.4        a4bedc55e938

That is to say, inside the ngx001 container, any access to t01 actually goes to the ip of tomcat001. With that clear, look back at the nginx.conf we configured earlier:

 

upstream tomcat_client {
    server t01:8080 weight=1;
    server t02:8080 weight=1;
}

The t01 and t02 here correspond to the t01 and t02 aliases in the --link parameters, so when nginx forwards requests using t01 and t02 as host names, the requests reach tomcat001 and tomcat002.
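
To see the round-robin in action from the host, a few repeated requests against nginx on port 80 should alternate between the two Tomcats (this assumes the demo page from the previous article prints something server-specific, such as the TOMCAT_SERVER_ID value):

for i in 1 2 3 4; do
    curl http://localhost/loadbalancedemo/hello
    echo
done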

So far we have achieved load balancing with nginx and tomcat, but starting three containers by running three separate commands feels tedious. Is there a way to run them in one batch? Write a shell script and put all the commands in it? That would work, but what about other batch operations, such as stopping, restarting, building images, or viewing information? Using compose is the better choice. Compose is a tool for defining and running complex Docker applications, and it can operate on multiple containers in batches. Here we only give it a small try without going deep.

Let's go straight to the code: create a new docker-compose.yml file with the following content:

version: '2'
services:
  nginx001:
    image: nginx:0.0.1
    links:
      - tomcat001:t01 
      - tomcat002:t02
    ports:
      - "80:80" 
    restart: always
  tomcat001:
    image: tomcat:0.0.1
    ports:
      - "8081:8080"
    environment:
      TOMCAT_SERVER_ID: tomcat_server_001
    restart: always
  tomcat002:
    image: tomcat:0.0.1
    ports:
      - "8082:8080"
    environment:
      TOMCAT_SERVER_ID: tomcat_server_002
    restart: always

Each container is defined as a service node with its own key-value parameters; all of the options we previously passed to docker run are represented in it.

Now you can try running docker-compose. Before doing so, execute the following command to stop and delete the three containers we started earlier:

docker stop tomcat001 tomcat002 ngx001;docker rm tomcat001 tomcat002 ngx001

Then enter the directory where the docker-compose.yml file is located and execute the following command:

docker-compose up -d

After the execution is complete, execute docker ps to view the current container information: 

The containers started in one go, but their names seem to have changed: a prefix has been added, derived from the name of the current directory. Now enter localhost in the browser and the Tomcat welcome page opens successfully:
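
This is also where compose pays off for the batch operations mentioned earlier. Run from the directory containing docker-compose.yml, for example:

docker-compose ps       # list the containers defined in this compose file
docker-compose stop     # stop all of them
docker-compose start    # start them again
docker-compose logs     # view their combined logs
docker-compose down     # stop and remove the containers (and the default network)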

Follow the earlier steps to deploy the war package onto the two Tomcats with "mvn clean package -U -Dmaven.test.skip=true tomcat7:redeploy", then visit http://192.168.1.129/loadbalancedemo/hello to verify that nginx is distributing requests to different Tomcats.

That concludes the hands-on exercise of deploying server-side load balancing with link and docker-compose. You may have noticed that deploying war packages each time is troublesome. Besides this approach, we could add a command to the tomcat Dockerfile that copies the war package into the image, or mount a directory on the host onto tomcat's webapps directory with the -v parameter of docker run.
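
As a sketch of the volume-mount alternative (assuming the custom tomcat:0.0.1 image keeps its webapps directory in the same place as the official image, /usr/local/tomcat/webapps, and using a made-up host directory /home/lz/wars):

docker run --name=tomcat001 -p 8081:8080 \
    -e TOMCAT_SERVER_ID=tomcat_server_001 \
    -v /home/lz/wars:/usr/local/tomcat/webapps \
    -idt tomcat:0.0.1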

 

Reference blog: Hands-on Docker: build an nginx reverse proxy for Tomcat, learning link and docker-compose
