Kubernetes cluster deployment and practice

Link to this blog: https://security.blog.csdn.net/article/details/128840528

1. Summary

Deploying a Kubernetes cluster this way requires at least three servers: at least one must be a master node, at least one must be a worker (node) node, and each node's hostname must be unique.

When the cluster has only one master node and that node fails, the Kubernetes control plane fails completely. To keep the cluster highly available, multiple masters can be set up, so that when some of them fail, the remaining masters can still manage the entire cluster.

We therefore use three servers here. Two layouts are possible, 2 masters + 1 node or 1 master + 2 nodes; this guide uses the latter.

The three servers:

master: 192.168.153.145
node1: 192.168.153.146
node2: 192.168.153.147

2. Deployment

Modify the hostnames:

# Run on the master machine
hostnamectl set-hostname master

# Run on the node1 machine
hostnamectl set-hostname node1

# Run on the node2 machine
hostnamectl set-hostname node2

Turn off the firewall:

# Run on all 3 machines
systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

# Run on all 3 machines
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

Disable the swap partition:

# Run on all 3 machines
vim /etc/fstab
# Comment out this line: /dev/mapper/centos-swap
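
Alternatively, a minimal non-interactive sketch that turns swap off for the current session and comments out the swap entry in /etc/fstab (assuming the default CentOS layout shown above):

# Run on all 3 machines
swapoff -a                                            # disable swap immediately
sed -ri 's|^/dev/mapper/centos-swap|#&|' /etc/fstab   # comment out the swap entry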

Edit the /etc/hosts file and add the following:

# Run on all 3 machines
192.168.153.145 master
192.168.153.146 node1
192.168.153.147 node2

Create and edit /etc/sysctl.d/k8s.conf and add the following:

# This makes bridged IPv4 traffic visible to iptables
# Run only on the master machine (the node machines set this by hand later, before joining)
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

Run the command:

# Run only on the master machine
sysctl --system
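
A quick check that the new values are active (the bridge keys only appear once the br_netfilter module is loaded):

# Run only on the master machine
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward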

Configure time synchronization:

# Run on all 3 machines
yum -y install chrony

Edit the /etc/chrony.conf file and add the following content:

# Run on all 3 machines
pool time1.aliyun.com iburst

Run the command:

# Run on all 3 machines
systemctl enable --now chronyd
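
Once chronyd is running, you can confirm that the Aliyun time source is reachable and selected:

# Run on all 3 machines
chronyc sources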

Set up passwordless SSH authentication:

# Run only on the master machine
ssh-keygen -t rsa
ssh-copy-id master
ssh-copy-id node1
ssh-copy-id node2
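
A quick way to confirm that key-based login works from the master (the hostnames resolve via the /etc/hosts entries added earlier):

# Run only on the master machine
ssh node1 hostname   # should print "node1" without asking for a password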

Reboot the machine:

# Run on all 3 machines
reboot

Install Docker:

# Run on all 3 machines
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
systemctl start docker
systemctl enable docker
docker -v

Create and edit /etc/yum.repos.d/kubernetes.repo, add the following content:

# Add the Kubernetes package repository (Aliyun mirror)
# Run on all 3 machines
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Install kubeadm, kubelet, and kubectl:
Note: Kubernetes 1.24 and later removed dockershim and dropped direct Docker support; installing 1.24+ here would make kubeadm report errors during initialization.

# Run on all 3 machines
yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
systemctl enable kubelet
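
To confirm that the pinned 1.23.6 packages were installed:

# Run on all 3 machines
kubeadm version
kubectl version --client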

Pre-pull the CoreDNS image (the Aliyun mirror may lack this tag, so pull it from Docker Hub and re-tag it):

# Run only on the master machine
docker pull coredns/coredns:1.8.5
docker tag coredns/coredns:1.8.5 registry.aliyuncs.com/google_containers/coredns:v1.8.5

Create and edit /etc/docker/daemon.json, add the following content:

# Run on all 3 machines
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}

Run the commands:

# Run on all 3 machines
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
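
Docker should now report the systemd cgroup driver, matching the kubelet default:

# Run on all 3 machines
docker info 2>/dev/null | grep -i 'cgroup driver'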

Initialize the Kubernetes master node:

# Run only on the master machine
kubeadm init \
--apiserver-advertise-address=192.168.153.145 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.6 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

If initialization runs into problems, resolve them with a web search; after a long wait it should complete. Record the kubeadm join command printed at the end of the output:

kubeadm join 192.168.153.145:6443 --token zqlnxn.b8110o37bp5kwinl \
        --discovery-token-ca-cert-hash sha256:69cf2bd1bf87495d1e2e5dc11b3736151feaf00e38a59ea66b276007a163a0aa
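
If this output is lost, or the token expires (by default tokens are valid for 24 hours), a new join command can be generated on the master:

# Run only on the master machine
kubeadm token create --print-join-command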

Configure kubectl access:

# Run only on the master machine
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' > /etc/profile.d/k8s.sh
source /etc/profile.d/k8s.sh

Install the pod network plugin:

# Run only on the master machine
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Or download the manifest first and apply it locally
kubectl apply -f /root/kube-flannel.yml
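
The flannel pods should reach Running before the nodes are joined; depending on the manifest version they land in the kube-system or kube-flannel namespace:

# Run only on the master machine
kubectl get pods -A | grep flannel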

Join the node machines to the Kubernetes cluster:

# Run on both node machines
echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
kubeadm join 192.168.153.145:6443 --token zqlnxn.b8110o37bp5kwinl --discovery-token-ca-cert-hash sha256:69cf2bd1bf87495d1e2e5dc11b3736151feaf00e38a59ea66b276007a163a0aa 

View the node status; all three nodes should eventually show Ready once flannel is running:
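
# Run only on the master machine
kubectl get nodes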

At this point, assuming all errors have been resolved, the deployment is complete.

3. Test the Kubernetes cluster

Run the commands:

# Run only on the master machine
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc

Browser access:

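The NodePort is allocated randomly from the 30000-32767 range; a small sketch of looking it up and testing it from the shell (node1's IP from this guide is assumed, and any node's IP works):

# Run only on the master machine
NODEPORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.153.146:${NODEPORT}

The same http://<node-ip>:<NodePort> URL can be opened in a browser.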

4. Kubernetes practice

The following commands are all executed only on the master machine.

4.1. Deploying a Deployment resource

Create the Nginx Deployment on the cluster. Note that with this kubectl version, kubectl run creates a bare Pod rather than a Deployment, so kubectl create deployment is used here (skip it if the nginx Deployment from section 3 still exists):

kubectl create deployment nginx --image=nginx

View the deployed Nginx application:

kubectl get deployments.apps nginx

View the details and rollout events of the Nginx Deployment:

kubectl describe deploy nginx

View Nginx's ReplicaSet resource:

# DESIRED - desired number of replicas, CURRENT - current replicas, READY - replicas in the Ready state, AGE - time since creation
kubectl get rs

View Nginx Pod resources:

kubectl get pod -o wide

4.2. Viewing Deployment logs

View the application log of Nginx (substitute the Pod name shown by kubectl get pod):

kubectl logs nginx-85b98978db-kxvn6

4.3. Executing commands in Deployment Pods

Enter the container behind the Nginx application with kubectl (again substituting your own Pod name):

kubectl exec -it nginx-85b98978db-kxvn6 -- bash

View the mapping between Services and Pods:

kubectl get endpoints

4.4. Scaling a Deployment resource

Scale the Nginx application to 3 replicas (it originally had 1):

kubectl scale deployment nginx --replicas=3

After scaling, view the Deployment, ReplicaSet, Pod, and Endpoints resources of the Nginx application:

kubectl get deployment.apps nginx
kubectl get rs
kubectl get pod
kubectl get ep

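To watch the new replicas come up in real time, kubectl's -w (watch) flag can be used:

kubectl get pod -w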

4.5. Resource deletion

Delete the Deployment resource created with kubectl create deployment:

kubectl delete deployment nginx

Check the deletion result:

kubectl get pod

Delete a corresponding Service resource (one created from a manifest with kubectl apply):

kubectl delete -f /tmp/nginx.svc.yml

4.6. Troubleshooting

View the application log of Nginx:

kubectl logs nginx-85b98978db-kxvn6

View the details and events of the Pod:

kubectl describe pod nginx-85b98978db-kxvn6
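
Beyond logs and describe, cluster-wide events are often the fastest way to spot scheduling or image-pull failures:

kubectl get events --sort-by='.metadata.creationTimestamp'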
