Using domestic image sources to build a Kubernetes (k8s) cluster

  1. Overview
    As the old saying goes: keep studying and improving yourself, so that you know more and understand more than others.

Back to the topic at hand: we have talked about Docker before. As the business keeps expanding, the number of Docker containers and physical machines keeps growing, and it becomes very tedious to log in to each machine and operate Docker by hand.

At this point we need a tool that manages Docker for us: one that can create, run, adjust, and destroy containers, monitor which containers have gone down, restart them, and so on.

Kubernetes (k8s) is a good choice for this. Today we will start with how to set up a Kubernetes (k8s) cluster.

  2. Scenario description
    Server A IP: 192.168.1.12

Server B IP: 192.168.1.11

Server C IP: 192.168.1.15

Server A hostname: zhuifengren2

Server B hostname: zhuifengren3

Server C hostname: zhuifengren4

Prepare three servers with CentOS7 operating system.

Docker has already been installed on all three servers. For installing Docker, please refer to my other article "Quick Introduction to Docker".

Server A serves as the Master node, and Server B and Server C serve as data nodes.
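If the three hostnames cannot already resolve each other, an optional extra step (my own addition, not part of the original setup) is to map them in /etc/hosts on every server, using the IPs and hostnames listed above:

cat >> /etc/hosts <<EOF
192.168.1.12 zhuifengren2
192.168.1.11 zhuifengren3
192.168.1.15 zhuifengren4
EOF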

  3. kubernetes (k8s) installation (CentOS7)
    3.1 Official website address

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

3.2 Server configuration requirements

At least 2 GB of memory

At least 2 CPU cores

At least 20 GB of disk space
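To quickly confirm that a server meets these requirements, you can run a few standard checks (these commands are an optional suggestion, not part of the original steps):

free -h     # total memory should be at least 2 GB
nproc       # number of CPU cores should be at least 2
df -h /     # available disk space should be at least 20 GB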

3.3 Turn off SELinux

Method 1:

setenforce 0

sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Method 2:

vim /etc/sysconfig/selinux

Change SELINUX=enforcing to SELINUX=disabled

Restart the server.
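Whichever method you use, you can confirm the result afterwards (an optional check; Method 1 should report Permissive, and Method 2 reports Disabled after the reboot):

getenforce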

3.4 Let iptables see bridged traffic

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf

br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
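To confirm that the module is loaded and the settings took effect, you can run (optional check):

lsmod | grep br_netfilter                    # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables    # should print 1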

3.5 Disable system swap

swapoff -a

vi /etc/fstab

Comment out the swap entry so it is not mounted automatically.

vi /etc/sysctl.d/k8s.conf

Add the following line:
vm.swappiness=0

sysctl -p /etc/sysctl.d/k8s.conf
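You can confirm that swap is fully disabled (optional check):

free -h     # the Swap line should show 0 total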

3.6 Install and start kubernetes (K8s)

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable kubelet

systemctl restart kubelet
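Optionally, confirm which versions were installed. Note that kubelet may keep restarting until kubeadm init has run; that is expected at this stage.

kubeadm version -o short
kubelet --version
kubectl version --client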

Steps 3.3 through 3.6 must be executed on all three servers.

  4. kubernetes (k8s) cluster construction (CentOS7)
    4.1 Modify Docker configuration

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "data-root": "/data/docker"
}
EOF

systemctl daemon-reload

systemctl restart docker
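To make sure Docker is now using the systemd cgroup driver (the setting kubelet expects with this configuration), you can check (optional):

docker info | grep -i cgroup     # should show: Cgroup Driver: systemd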

4.2 View the required images

kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.22.3

k8s.gcr.io/kube-controller-manager:v1.22.3
k8s.gcr.io/kube-scheduler:v1.22.3
k8s.gcr.io/kube-proxy:v1.22.3
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4

4.3 Pull images from domestic sources

Since k8s.gcr.io is not accessible from within China, we pull the images from a domestic mirror first and then re-tag them.

Execute the following script:
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
docker pull quay.io/coreos/flannel:v0.15.1-amd64

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.3 k8s.gcr.io/kube-apiserver:v1.22.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3 k8s.gcr.io/kube-controller-manager:v1.22.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.3 k8s.gcr.io/kube-scheduler:v1.22.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.3 k8s.gcr.io/kube-proxy:v1.22.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5 k8s.gcr.io/pause:3.5
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0 k8s.gcr.io/etcd:3.5.0-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4

docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.3
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.3
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.3
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.3
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4

Steps 4.1, 4.2, and 4.3 need to be executed on all three servers.
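Before initializing the cluster, it is worth confirming that all the re-tagged images are present on each server (an optional check):

docker images | grep k8s.gcr.io     # should list kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd and coredns
docker images | grep flannel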

4.4 Initialize the cluster

Execute the following on the Master node:

kubeadm init --apiserver-advertise-address=192.168.1.12 --pod-network-cidr=10.244.0.0/16

Here 192.168.1.12 is the IP address of the Master node; adjust it to match your environment.

4.5 Resolving the error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused

If this error is reported during cluster initialization, perform the following steps on the Master node:

vi /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

Add the following line:
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"

cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF

systemctl daemon-reload

systemctl restart kubelet

kubeadm reset -f

4.6 Execute the cluster initialization command again

Execute the following on the Master node:

kubeadm init --apiserver-advertise-address=192.168.1.12 --pod-network-cidr=10.244.0.0/16

The following message appears, indicating that the initialization is successful:
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.12:6443 --token x0u0ou.q6271pyjm7cv5hxl \
    --discovery-token-ca-cert-hash sha256:907ffb03d73f7668b96024c328880f95f4249e98da1be44d1caeb01dd62173da

4.7 Export the config file and set up the network based on the information in the previous step

export KUBECONFIG=/etc/kubernetes/admin.conf

Here we use the flannel network

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

At this point, the Master node setup is complete.
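Before joining the data nodes, you can watch the control-plane components come up (an optional check; the kube-system namespace and pod names are standard for kubeadm-based clusters):

kubectl get pods -n kube-system     # wait until all pods, including coredns, are Running
kubectl get node                    # the Master node should eventually report Ready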

4.8 According to the information in step 4.6, add two data nodes to the cluster

On Server B and Server C, execute the following commands (information derived from step 4.6):

kubeadm join 192.168.1.12:6443 --token x0u0ou.q6271pyjm7cv5hxl \
    --discovery-token-ca-cert-hash sha256:907ffb03d73f7668b96024c328880f95f4249e98da1be44d1caeb01dd62173da

If the execution is unsuccessful, or the data node is always in the NotReady state, refer to step 4.5 to modify the configuration.
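If a node stays NotReady, the kubelet logs on that node and the node description on the Master usually point to the cause (general troubleshooting commands, not from the original article; replace zhuifengren3 with the affected node's hostname):

journalctl -u kubelet -f             # run on the problem node
kubectl describe node zhuifengren3   # run on the Master node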

4.9 View cluster information on the Master node

kubectl get node

If the status is all Ready, the Kubernetes (K8s) cluster is successfully established.
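For reference, the output should look roughly like this (an illustrative example using the hostnames from the scenario above; the AGE and VERSION columns will differ in your environment):

NAME           STATUS   ROLES                  AGE   VERSION
zhuifengren2   Ready    control-plane,master   10m   v1.22.3
zhuifengren3   Ready    <none>                 5m    v1.22.3
zhuifengren4   Ready    <none>                 5m    v1.22.3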

  5. Summary
    Today we covered how to use domestic image sources to build a Kubernetes (k8s) cluster. I hope it helps you in your work.

Appendix: the equivalent pull/tag/clean-up script for Kubernetes v1.21.0:
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0
docker pull quay.io/coreos/flannel:v0.15.1-amd64

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.0 k8s.gcr.io/kube-apiserver:v1.21.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0 k8s.gcr.io/kube-controller-manager:v1.21.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.0 k8s.gcr.io/kube-scheduler:v1.21.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.0 k8s.gcr.io/kube-proxy:v1.21.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1 k8s.gcr.io/pause:3.4.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0

docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.0
