6. Review: Reinstall Cluster

Screencast address (bilibili): 6. Review: reinstall the k8s cluster

1. Prepare the resources (all nodes)

The images for k8s and the flannel network plugin are provided as an offline package, suveng-k8s-image.tar.gz, on Baidu Cloud.

Link: https://pan.baidu.com/s/1lty5BLoz4eSBC7fKpSfj8A  extraction code: eftw

Download the offline package and upload it to the /root directory of every virtual machine.

cd /root
tar -zxvf suveng-k8s-image.tar.gz

# load the images into Docker
docker load -i suveng/k8s.gcr.io-kube-proxy.tar
docker load -i suveng/k8s.gcr.io-kube-apiserver.tar
docker load -i suveng/k8s.gcr.io-kube-controller-manager.tar
docker load -i suveng/k8s.gcr.io-kube-scheduler.tar
docker load -i suveng/k8s.gcr.io-coredns.tar
docker load -i suveng/k8s.gcr.io-etcd.tar
docker load -i suveng/k8s.gcr.io-pause.tar
docker load -i suveng/flannel.tar

# retag the images with the names kubeadm expects
docker tag suveng/k8s.gcr.io-kube-apiserver:v1.15.0 k8s.gcr.io/kube-apiserver:v1.15.0
docker tag suveng/k8s.gcr.io-kube-scheduler:v1.15.0 k8s.gcr.io/kube-scheduler:v1.15.0
docker tag suveng/k8s.gcr.io-kube-controller-manager:v1.15.0 k8s.gcr.io/kube-controller-manager:v1.15.0
docker tag suveng/k8s.gcr.io-kube-proxy:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0
docker tag suveng/k8s.gcr.io-etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag suveng/k8s.gcr.io-pause:3.1 k8s.gcr.io/pause:3.1
docker tag suveng/k8s.gcr.io-coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

# remove the suveng/ tags
docker rmi suveng/k8s.gcr.io-kube-apiserver:v1.15.0
docker rmi suveng/k8s.gcr.io-kube-scheduler:v1.15.0
docker rmi suveng/k8s.gcr.io-kube-controller-manager:v1.15.0
docker rmi suveng/k8s.gcr.io-kube-proxy:v1.15.0
docker rmi suveng/k8s.gcr.io-etcd:3.3.10
docker rmi suveng/k8s.gcr.io-pause:3.1
docker rmi suveng/k8s.gcr.io-coredns:1.3.1
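
A quick way to confirm the load and retag steps worked (this check is my addition, not part of the original package):

# every k8s.gcr.io component plus the flannel image should be listed
docker images | grep -E 'k8s.gcr.io|flannel'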

2. Configure the environment (all nodes)

# install kubelet, kubeadm and kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install kubeadm-1.15.0 kubectl-1.15.0 kubelet-1.15.0 # install the components


# start kubelet and enable it at boot
systemctl start kubelet
systemctl enable kubelet
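
Before continuing, it is worth confirming that all three components really installed at 1.15.0 (a sanity check of mine, not in the original):

# all three should report v1.15.0
kubeadm version -o short
kubelet --version
kubectl version --client --short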


# CentOS 7 users also need to set up routing
yum install -y bridge-utils.x86_64

# load the br_netfilter module (use lsmod to view loaded modules)
modprobe br_netfilter

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# reload all sysctl configuration files
sysctl --system

# k8s requires swap to be disabled
swapoff -a && sysctl -w vm.swappiness=0  # turn off swap now
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab  # stop mounting swap at boot
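
To verify that these settings took effect, the following checks can be used (my suggestion, not from the original):

free -h                                    # the Swap line should show 0B
lsmod | grep br_netfilter                  # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables  # should print '... = 1'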

3. Install the k8s cluster master

# initialize the master node and configure the network; flannel's pod network defaults to 10.244.0.0/16, so specify that CIDR during master init
kubeadm init --apiserver-advertise-address <master_ip> --pod-network-cidr 10.244.0.0/16 --kubernetes-version 1.15.0

# after initialization, set up the kubectl environment
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# save the join command printed below for initializing the worker nodes; copy your own output, not the one from this article

kubeadm join 192.168.0.205:6443 --token fj6m98.tlsh8w89o27ojbqc \
    --discovery-token-ca-cert-hash sha256:e7ae2669a443be902feaf912c115662f3d238c807b41704a803308fdc6625a59


# configure the network by applying flannel; note that the flannel pod network defaults to 10.244.0.0/16, the CIDR specified during master init
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

# check the master node
kubectl get node

# view the kubelet logs
journalctl -fu kubelet
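
The master shows NotReady until the flannel and coredns pods come up; one way to watch for that (my addition, not from the original article):

# wait until the kube-flannel and coredns pods reach Running
kubectl get pods -n kube-system -o wide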

4. Install the k8s cluster workers


# run the join command saved from the master's init output; copy your own, not the one from this article
kubeadm join 192.168.0.205:6443 --token fj6m98.tlsh8w89o27ojbqc \
    --discovery-token-ca-cert-hash sha256:e7ae2669a443be902feaf912c115662f3d238c807b41704a803308fdc6625a59
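
Back on the master you can confirm that the worker registered (again a check of mine; node names depend on your setup):

# run on the master: the new worker appears, NotReady until flannel starts on it
kubectl get nodes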

5. Configure kubectl on the worker nodes

Set the shell global variable KUBECONFIG on the worker to point at a copy of the master node's /etc/kubernetes/admin.conf:

vi /etc/profile

KUBECONFIG='/root/config'
export KUBECONFIG
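
The profile above assumes the kubeconfig already sits at /root/config on the worker; one way to get it there (an assumption on my part, adjust <master_ip> and the paths to taste) is scp, followed by reloading the profile:

# copy the admin kubeconfig from the master to the worker
scp root@<master_ip>:/etc/kubernetes/admin.conf /root/config

# reload /etc/profile in the current shell, then test
source /etc/profile
kubectl get nodes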

6. The systemd cgroup driver is recommended

Observant readers will have noticed that kubeadm reported a warning-level log:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".

Consider switching the cgroup driver used by k8s from cgroupfs to systemd.

For Docker, on a freshly installed cluster you can edit /etc/docker/daemon.json directly and add the attribute:

"exec-opts": [  "native.cgroupdriver=systemd" ]
