Online installation
Precautions:
1. Recommended sizing: 4 vCPUs and 8 GB of RAM for the server, 2 vCPUs and 4 GB for the agent; anything less tends to run unstably. Rancher also officially recommends that, in production, the server and the agent not be installed on the same machine, otherwise the containers fail to start because of port conflicts. If you do need to put them on one machine, the fix is simple: just change the host-side ports in the -p mappings when starting rancher/server. How to do this is covered later.
2. After the image download in step 17, it is a good idea to watch the rancher/server container's startup through its Docker logs: docker logs <container ID>. To follow the logs in real time, add the -f flag: docker logs -f <container ID>.
1. Prepare two minimally installed virtual machines. This walkthrough uses CentOS 7.6; Rancher officially recommends CentOS 7.5 or later. (Host machine: Windows 10, 4 cores / 8 threads, 24 GB RAM.)
The VM resources here are limited, and this configuration was not perfectly stable during testing, most likely because the server VM was short on CPU. If you can, allocate more: Rancher 2 is fairly resource-hungry.
192.168.44.100 server 2 cores 4 threads 8G memory 50G storage
192.168.44.110 agent 2 cores 4 threads 8G memory 50G storage
2. Both machines are freshly installed VMs. Configure the network first, then connect with Xshell 5 to carry out the remaining steps.
The steps are identical on both machines; only the IP addresses differ. On CentOS 7.6 the NIC configuration file is /etc/sysconfig/network-scripts/ifcfg-ens33.
Set a static IP
sed -i 's/BOOTPROTO=dhcp/BOOTPROTO=static/g' /etc/sysconfig/network-scripts/ifcfg-ens33
Enable the NIC at boot
sed -i 's/ONBOOT=no/ONBOOT=yes/g' /etc/sysconfig/network-scripts/ifcfg-ens33
Append the IP address, netmask, gateway, and DNS
cat >> /etc/sysconfig/network-scripts/ifcfg-ens33 <<-EOF
IPADDR=192.168.44.100
NETMASK=255.255.255.0
GATEWAY=192.168.44.2
DNS1=114.114.114.114
EOF
Restart the network service
systemctl restart network
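If you want to check the edits before applying them to the real NIC file, the same sed and append steps can be dry-run on a scratch copy first (a sketch using this guide's example addresses; substitute your own):

```shell
# Dry-run of the sed edits and the append on a scratch file, so a
# pattern typo is caught before the real ifcfg-ens33 is touched.
# The addresses below are this guide's example network.
cfg=$(mktemp)
printf 'BOOTPROTO=dhcp\nONBOOT=no\n' > "$cfg"

sed -i 's/BOOTPROTO=dhcp/BOOTPROTO=static/g' "$cfg"
sed -i 's/ONBOOT=no/ONBOOT=yes/g' "$cfg"
cat >> "$cfg" <<-EOF
IPADDR=192.168.44.100
NETMASK=255.255.255.0
GATEWAY=192.168.44.2
DNS1=114.114.114.114
EOF

grep '^BOOTPROTO' "$cfg"   # BOOTPROTO=static
```

On the real machine, ip addr show ens33 after the restart confirms the address actually took effect.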
3. Add hosts records and set the host names on both machines
Modify the /etc/hosts file
cat >> /etc/hosts <<-EOF
192.168.44.100 server
192.168.44.110 agent
EOF
Set the host name; host names must not repeat
server virtual machine: hostnamectl set-hostname server
agent virtual machine: hostnamectl set-hostname agent
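To confirm the two entries resolve as intended, getent hosts server should print 192.168.44.100 on either VM. The same lookup logic can be sketched offline against a scratch copy of the entries (hypothetical file, same contents as above):

```shell
# Check that each hosts entry maps the name to the expected address.
# Run against a scratch copy here; on the VMs, inspect /etc/hosts
# itself or use `getent hosts server`.
hosts=$(mktemp)
cat >> "$hosts" <<-EOF
192.168.44.100 server
192.168.44.110 agent
EOF
awk '$2 == "agent" {print $1}' "$hosts"   # 192.168.44.110
```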
4. Temporarily disable SELinux
sudo setenforce 0
5. Permanently disable SELinux by modifying /etc/selinux/config
sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
6. Stop the firewall and disable it at boot
sudo systemctl stop firewalld && sudo systemctl disable firewalld
7. Install wget
sudo yum -y install wget
8. Back up the existing yum repo file first (note: sudo cd does not work, since cd is a shell builtin; apply sudo to the mv instead)
cd /etc/yum.repos.d/ && sudo mv CentOS-Base.repo CentOS-Base.repo_bak
9. Download the Alibaba Cloud yum repo file
sudo wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
10. Clean the yum cache and rebuild it
sudo yum clean all && sudo yum makecache
11. Install the necessary toolkit (provide yum-config-manager command)
sudo yum -y install yum-utils
12. Add the docker-ce repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo && sudo yum makecache fast
13. List the installable docker-ce versions
sudo yum list docker-ce --showduplicates | sort -r
14. Install the latest version of docker-ce
sudo yum -y install docker-ce
Check the Docker installation
docker --version, or docker info for more detail
15. Apply for and configure a free Alibaba Cloud personal image accelerator, and add some Docker tuning options
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"],
"log-driver":"json-file",
"log-opts": {"max-size":"100m", "max-file":"3"},
"max-concurrent-downloads": 10,
"max-concurrent-uploads": 10,
"storage-driver": "overlay2",
"oom-score-adjust": -1000
}
EOF
Replace https://xxxxxx.mirror.aliyuncs.com with your own accelerator address
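Docker will refuse to start if daemon.json contains a JSON typo (a stray trailing comma is enough), so it is worth validating the file before restarting the daemon in the next step. A sketch, shown against a scratch copy; on the VM, point json.tool at /etc/docker/daemon.json instead:

```shell
# Validate daemon.json before restarting Docker. python 2's json.tool
# ships with CentOS 7; python3 is tried as a fallback.
f=$(mktemp)
cat > "$f" <<-'EOF'
{
  "registry-mirrors": ["https://xxxxxx.mirror.aliyuncs.com"],
  "storage-driver": "overlay2"
}
EOF
if python -m json.tool "$f" > /dev/null 2>&1 \
   || python3 -m json.tool "$f" > /dev/null 2>&1; then
  echo "daemon.json: valid JSON"
else
  echo "daemon.json: INVALID JSON"
fi
```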
16. Reload the configuration, (re)start Docker, and enable it at boot
sudo systemctl daemon-reload && sudo systemctl restart docker && sudo systemctl enable docker
17. Install Rancher 2.5.2 with Docker (server)
The following installs with Rancher's default self-signed certificate. If you need to use your own certificate, consult the official Rancher 2 documentation, which covers it in detail.
If rancher/server and rancher/agent are installed on different machines, run the following on the server VM to install the latest stable rancher/rancher:stable (2.5.2 at the time of writing):
sudo docker run -d --restart=unless-stopped --privileged -p 80:80 -p 443:443 \
-v /docker_volume/rancher_home/rancher:/var/lib/rancher \
-v /docker_volume/rancher_home/auditlog:/var/log/auditlog \
--name rancher rancher/rancher:stable
If rancher/server and rancher/agent are installed on the same machine, run this instead. The -p flags change the host-side ports: 80 inside the container is mapped to 8080, and 443 to 8443.
sudo docker run -d --restart=unless-stopped --privileged -p 8080:80 -p 8443:443 \
-v /docker_volume/rancher_home/rancher:/var/lib/rancher \
-v /docker_volume/rancher_home/auditlog:/var/log/auditlog \
--name rancher2 rancher/rancher:stable
After the command runs, Docker pulls the images needed to create the rancher/rancher:stable container from the registry. Once the pull finishes, it takes a few more minutes for Rancher to fully start.
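Rather than refreshing the browser, the "few minutes to fully start" can be waited out with a small polling helper (a sketch; the URL and retry count below are examples, and -k is needed because of Rancher's self-signed default certificate):

```shell
# Poll the Rancher URL until HTTPS answers; exits non-zero if it
# never comes up within the given number of tries.
wait_for_rancher() {
  url=$1; tries=$2; i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -k -s -o /dev/null "$url"; then
      echo "rancher is up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 2
  done
  echo "rancher not ready yet: $url"
  return 1
}
# On the server VM: wait_for_rancher https://192.168.44.100 90
```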
18. Log in to Rancher
Access address: https://192.168.44.100 (default install) or https://192.168.44.100:8443 (same-machine install). Rancher 2 serves the UI over HTTPS by default; HTTP requests to port 80 are redirected to HTTPS.
The figure below shows the login screen on subsequent visits. The first login looks different: you only need to set the admin password (no restrictions on it), agree to the terms of use, and save. After logging in you will be shown a welcome dialog in English; if you are not interested, just close it, as I did. The display language can be switched to Simplified Chinese in the lower-right corner.
19. Add a cluster
Rancher 2.5.2 creates a "local" cluster by default. The 2.4.x versions apparently did not, and I have not tracked down exactly which version introduced it; if you do not see one, just create your own cluster. Incidentally, the lower-left corner shows my version as 2.5.3, even though I clearly remember the latest stable tag being 2.5.2 when I installed. Presumably the stable tag moved to 2.5.3 somewhere along the way. If you do not see this, skip it :)
sssss is a cluster I created in advance. Creating one is simple: click [Add Cluster] - [Custom], fill in a cluster name (the only special character the name seems to accept is "-", e.g. rancher-server), keep the other defaults (they can be adjusted later), and click Next. You will then see the following picture:
Since I prepared only one VM for the agent, I selected all three roles: etcd, Control Plane, and Worker. After selecting, click Copy to the right of the black command box and run the command in an SSH session on the agent host. This pulls all the images required for the agent container; note that once the pull completes, the container deployment begins. At this point, switch to an SSH session on the server host to watch the deployment log: many readers report that an agent host failed to register, and the cause can only be seen in the server-side Docker log.
Command to view the logs: docker logs -f <rancher/rancher:stable container ID>
For example, mine is shown in the picture below, so the command is docker logs -f 2ab920764f64. There are other ways to view the logs; explore them yourself. If there are no errors, congratulations. If there are... well, no problem, read on.
20. Deploy workload
As mentioned earlier, deploying the agent host may report errors. Here are the ones I encountered, for reference only:
A "read-only ... too long" style error. It did not prevent the deployment from finishing, but I never found the cause.
A lost GET request / failed connection. The agent host appears to run a watchdog that retries every 15 s (visible in the agent host's log), so it usually reconnects to the server and finishes the deployment by itself; just wait a few more minutes. If it never recovers, wipe the Rancher data from all agent hosts (the official docs include a detailed node-cleanup script) and redeploy. That generally fixes it. I have deployed successfully many times; I kept restoring VM snapshots and redeploying only because I wanted to tune and retest, nothing more.
Back to the topic: deploy a workload, as shown in the following figure. Click 1, 2, and 3 to enter the default namespace of the sssss cluster. As the next figure shows, it is empty; click [Workload] - [Deploy].
The deployment service interface is as follows:
Taking the deployment of MySQL 5.7 as an example, fill in the red boxes, keep the other defaults, and launch.
When the status turns a solid green "Active", you can connect to this MySQL instance with a database client.
The port is the mapped NodePort 30001, the user name is root, and the password is the MYSQL_ROOT_PASSWORD value set earlier. Hovering over the mysql service shows its IP. The login test succeeded.
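If the client cannot connect, first check whether the NodePort is reachable at all. A quick probe can be sketched with bash's /dev/tcp redirection (the IP and port are this walkthrough's example values):

```shell
# Probe a TCP port: prints "open" if a connection succeeds within
# 3 seconds, "closed" otherwise. Requires bash and coreutils timeout.
check_port() {
  if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo "port $2 open on $1"
  else
    echo "port $2 closed on $1"
  fi
}
# check_port 192.168.44.100 30001
```

If the port is open but the login still fails, the problem is on the MySQL side (credentials or the MYSQL_ROOT_PASSWORD value), not the network.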
Conclusion:
Not every step in this document is strictly necessary; use your own judgment. If your foundation is weak, it is safer to follow the article as written, then expand and optimize once you are familiar with Rancher. The Rancher Chinese documentation is very thorough; read it carefully when you have time, as it will spare you many errors. Rancher really is excellent: compared with plain Docker, raw k8s, and the like, it greatly improves management and deployment efficiency, and the UI is very friendly. No wonder it has become so popular recently. Of course, this document is only an entry-level deployment attempt, narrow and shallow in scope. I look forward to more interesting discoveries later.