Build a k8s cluster on Ubuntu 18 and deploy a simple nginx application (2019/10/9)

Three nodes were built on local virtual machines, all running Ubuntu. This is mainly a record for my own convenience and later reference.
Personally tested and working.

Initialize the build environment, initialize the master node, and add the other nodes to the cluster *****
Docker and k8s are configured on the master node and the other nodes alike, but only the master node is initialized, and a network plugin is added there.
Before starting, you need to turn off the firewall, turn off swap, turn off SELinux, and make a few other preparations.

1: Turn off the firewall
sudo ufw disable

2: close swap

 sudo swapoff -a     # only turns swap off temporarily; it comes back after a reboot
 sudo vim /etc/fstab   # edit the config file to turn it off permanently
# comment out the swapfile line
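kubelet refuses to run while swap is on, and swap silently comes back after a reboot, so it is worth double-checking. A small sketch (the sample file and its contents are made up for illustration): count the data rows in /proc/swaps; zero rows means swap is fully off.

```shell
# /proc/swaps always has a header line; any further line is an active swap device.
swap_rows() { tail -n +2 "$1" | wc -l; }

# Demonstrate on a sample file that mimics /proc/swaps with one swapfile still active
printf 'Filename\tType\tSize\tUsed\tPriority\n/swapfile\tfile\t2097148\t0\t-2\n' > /tmp/swaps.sample
swap_rows /tmp/swaps.sample    # prints 1: one swap device is still active

# On a real node, swap_rows /proc/swaps should print 0 before running kubeadm
```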

3: Close selinux

 sudo vim /etc/selinux/config   # edit the config file (note: stock Ubuntu ships AppArmor rather than SELinux, so this file may not exist; if so, skip this step)
 SELINUX=disabled

4: Change the hostname to something recognizable. You can set it from the command line or in the GUI. Optional.
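For example (the hostnames and IPs below are placeholders I made up for illustration; substitute your own):

```shell
# On each machine, set its own name (run the matching command on each node)
sudo hostnamectl set-hostname k8s-master   # k8s-node1, k8s-node2 on the others

# Then append the cluster's addresses to /etc/hosts on every node so the
# names resolve (replace the example IPs with your VMs' real addresses):
# 192.168.172.134 k8s-master
# 192.168.172.135 k8s-node1
# 192.168.172.136 k8s-node2
```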

5: Install some basic tools, such as: vim, curl, etc.

sudo apt update && \
sudo apt -y upgrade && \
sudo apt install -y vim \
curl \
apt-transport-https \
ca-certificates \
software-properties-common

6: Change the DNS service (provisional; you can skip this for now)

$ sudo apt install -y unbound
$ sudo systemctl stop systemd-resolved
$ sudo systemctl disable systemd-resolved
$ sudo rm -rf /etc/resolv.conf
$ sudo vim /etc/NetworkManager/NetworkManager.conf
 # add the following under [main]
 dns=unbound
# reboot for the change to take effect
$ reboot

7: Configure packet forwarding (provisional; you can skip this for now)

sudo vim /etc/sysctl.conf    # edit the config file and add the following lines
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

Make the configuration take effect (this does not survive a reboot):

 sudo modprobe br_netfilter
 sudo sysctl -p

Set to automatically load br_netfilter at boot

 sudo vim /etc/init.d/load_br_netfilter.sh
  #!/bin/bash
  ### BEGIN INIT INFO
  # Provides:       load_br_netfilter.sh
  # Required-Start: $local_fs $remote_fs $network $syslog
  # Required-Stop:  $local_fs $remote_fs $network $syslog
  # Default-Start:  2 3 4 5
  # Default-Stop:   0 1 6
  # Short-Description: loads the br_netfilter module at boot
  # Description:       loads br_netfilter so bridged traffic passes through iptables
  ### END INIT INFO
  modprobe br_netfilter   # runs as root at boot, so sudo is not needed here

 sudo chmod 775 /etc/init.d/load_br_netfilter.sh
 sudo update-rc.d load_br_netfilter.sh defaults 90
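On releases that use systemd (Ubuntu 16.04 and later), a simpler alternative to the init script is a one-line config fragment that systemd-modules-load reads at boot:

```shell
# Create /etc/modules-load.d/k8s.conf; each line names a module to load at boot
echo 'br_netfilter' | sudo tee /etc/modules-load.d/k8s.conf
```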

If you want to stop loading the module automatically at boot:

$ sudo update-rc.d -f load_br_netfilter.sh remove

Step 1: Install Docker
1: First uninstall any old versions that may exist

 sudo apt-get remove docker docker-engine docker-ce docker.io

2: Update the apt package index:

sudo apt-get update

3: Install the following packages so that apt can use the repository via HTTPS

 sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common

4: Add the official Docker GPG key:

 curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

5: Use the following command to set up the stable repository:

 sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

6: Update the apt package index:

 sudo apt-get update

7: List available versions:

 apt-cache madison docker-ce

8: Install the specified version

sudo apt-get install docker-ce=18.06.3~ce~3-0~ubuntu

9: Check whether docker is running

systemctl status docker

If it is not running, start it

sudo systemctl start docker

Stop docker

sudo systemctl stop docker 

Set docker to start automatically at boot

sudo systemctl enable docker.service

10: Verify that it is normal

sudo docker run hello-world

Step 2: Build k8s environment
1: Configure source

 curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
 sudo tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF

Update index

$ sudo apt update

View available versions

apt-cache madison kubeadm

2: Install the specified version

sudo apt install -y kubelet=1.15.0-00 kubeadm=1.15.0-00 kubectl=1.15.0-00
sudo apt-mark hold kubelet kubeadm kubectl

3: Set the boot to start automatically

sudo systemctl enable kubelet && sudo systemctl start kubelet

4: Initialize the master node

This usually works:

sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0 --pod-network-cidr=10.244.0.0/16

If that fails, try this:

sudo kubeadm init --kubernetes-version=1.15.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

5: Regenerate the token. This is usually run when the original token (valid for 24 hours by default) has expired.

kubeadm token create --print-join-command

6: Install the flannel CNI network plugin. Without a network plugin, the master node's status stays NotReady.
Before installing the network plugin you must set up kubeconfig, otherwise kubectl reports: The connection to the server localhost:8080 was refused

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

If this step is skipped, kubectl get nodes will also fail with an error

This usually works:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml

If that URL is unreachable, try this one:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

So far the k8s environment is basically set up and the master node has been initialized. The next step is to let the other nodes join the cluster.

After a node restarts, be sure to turn swap off again and run the reset command before rejoining the cluster.

7: On the other nodes, run the join command that the master node generated.

Note: it is not only the master node that needs docker, kubectl, kubeadm, and kubelet installed. The other nodes need them too, at the same versions.
For example, run this on the other nodes:

kubeadm join 192.168.172.134:6443 --token 1ly6ta.omqv3bkurfomxw0j     --discovery-token-ca-cert-hash sha256:d65642726224dfdf52ccd8af1e9fc1ca30ad201e4b03a14ebd327d7f60abf1c8

8: If a node that previously joined the cluster wants to rejoin later, it must first run the
kubeadm reset command, otherwise joining fails with an error
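So a minimal rejoin sequence on such a node looks like this (the token and hash placeholders stand for the values printed by your own master):

```shell
sudo swapoff -a      # swap comes back after every reboot unless /etc/fstab was edited
sudo kubeadm reset   # wipe this node's old cluster state
sudo kubeadm join 192.168.172.134:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```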

Note: keep a record of common errors and their solutions. When something fails, learn to read the error printed on the console!
1) A newly added node's status is NotReady. The solution:
restart the daemon, docker, and kubelet on each affected node

 sudo systemctl daemon-reload
 sudo systemctl restart docker
 sudo systemctl restart kubelet

Then run

kubeadm reset

Finally, rejoin the cluster.
Problem solved!

2) k8s deletes a node

kubectl delete node nodename

In fact, this command is general-purpose and can delete any resource type:

kubectl delete type typename

type is the resource type (node, pod, rs, rc, deployment, service, etc.); typename is the name of the resource.

Note: whenever any node restarts, on the master check whether docker and kubelet are running and run kubectl get nodes to
display the node status; on the other nodes, check whether docker is running. If a node has a problem and needs to rejoin the cluster, it must run kubeadm reset first

Some common terminal operations for editing configuration files in Ubuntu:
1: Open a configuration file with vim; press i to insert. To save and exit, press Esc, then type :wq.
2: Language, time, and lock-screen settings can all be changed in Settings.

Deploy a simple application in the cluster ******
A simple nginx deployment
1: Create a deployment on the master node

kubectl create deployment nginx --image=nginx

View

kubectl get deployments

2: Create a service

kubectl create service nodeport nginx --tcp 80:80

View

kubectl get svc

3: Run the following command on a slave node to verify that nginx was deployed successfully:
curl localhost:30601
# the port number here depends on the actual NodePort assigned in your deployment

Or
curl kube-slave:30601
# here kube-slave is the hostname of the node (node1, node2, ...)
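Rather than reading the port off the service output by hand, you can ask kubectl for the NodePort it assigned; a sketch, assuming the service is named nginx as above:

```shell
# Print the NodePort assigned to the nginx service
kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'

# Or use it directly in the verification step
curl localhost:$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
```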

Open a browser and enter localhost:<port> or <node hostname>:<port> to reach the nginx welcome page.

Origin blog.csdn.net/qq_31152023/article/details/102455601