Cloud Native: In-Depth Analysis of Deploying a Highly Available Kubernetes 1.24 Environment

1. Release of Kubernetes 1.24 and major changes

  • On May 3, 2022, Kubernetes 1.24 was officially released. The new version shows that Kubernetes, as the de facto standard for container orchestration, continues to mature: 12 features were promoted to stable, and many practical enhancements landed. For example, StatefulSets support batch rolling updates, and NetworkPolicy adds a new NetworkPolicyStatus field to make troubleshooting easier.
  • Kubernetes has officially removed support for dockershim; the long-discussed "abandoning dockershim" finally comes to an end in this version.

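  • Once the cluster built in this guide is up, the runtime in use can be confirmed per node from the CONTAINER-RUNTIME column (a quick check; run it after initialization):
kubectl get nodes -o wide
# The CONTAINER-RUNTIME column should show containerd://..., not docker://...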

2. Basic environment deployment

① Preliminary preparation (all nodes)

  • Set the hostnames:
# Run on 192.168.0.113
hostnamectl set-hostname k8s-master-168-0-113
# Run on 192.168.0.114
hostnamectl set-hostname k8s-node1-168-0-114
# Run on 192.168.0.115
hostnamectl set-hostname k8s-node2-168-0-115
  • Configure hosts:
cat >> /etc/hosts<<EOF
192.168.0.113 k8s-master-168-0-113
192.168.0.114 k8s-node1-168-0-114
192.168.0.115 k8s-node2-168-0-115
EOF
  • Configure ssh mutual trust:
# Just keep pressing Enter through the prompts
ssh-keygen

ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master-168-0-113
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node1-168-0-114
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node2-168-0-115
  • Time synchronization:
yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
chronyc sources
  • Turn off the firewall:
systemctl stop firewalld
systemctl disable firewalld
  • Close swap:
# Disable temporarily; swap is turned off mainly for performance reasons
swapoff -a
# Verify that swap is off
free
# Disable permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
  • Disable SELinux:
# Disable temporarily
setenforce 0
# Disable permanently
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
  • Allow iptables to inspect bridged traffic (optional, all nodes):
    • To load the module explicitly, run sudo modprobe br_netfilter; verify it is loaded with lsmod | grep br_netfilter:
sudo modprobe br_netfilter
lsmod | grep br_netfilter
    • For the Linux node's iptables to see bridged traffic correctly, make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration, for example:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system
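  • A quick sanity check that the parameters took effect (each value should print as 1):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward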

② Install Docker (all nodes)

  • Kubernetes versions prior to v1.24 included a direct integration with Docker Engine via a component called dockershim. That direct integration is no longer part of Kubernetes (its removal was announced as part of the v1.20 release).
# Configure yum repositories
cd /etc/yum.repos.d ; mkdir bak; mv CentOS-Linux-* bak/
# centos7
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# centos8
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo

# Install the yum-config-manager tool
yum -y install yum-utils
# Add the Docker CE repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install docker-ce
yum install -y docker-ce
# Start Docker
systemctl start docker
# Enable at boot
systemctl enable docker
# Show the version number
docker --version
# Show detailed version info
docker version

# Configure a Docker registry mirror
# Edit /etc/docker/daemon.json (create it if it does not exist),
# add the following content, then restart the docker service:
cat >/etc/docker/daemon.json<<EOF
{
   "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
EOF
# Reload
systemctl reload docker

# Check
systemctl status docker containerd
  • dockerd actually calls containerd's API: containerd is the intermediate communication component between dockerd and runC, so starting the docker service also starts the containerd service.
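  • This relationship can be observed directly on a running host (a quick check; output details vary by version):
# docker reports the containerd it talks to
docker info | grep -i containerd
# both services should be active
systemctl is-active docker containerd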

③ Configure k8s yum source (all nodes)

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF
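  • Optionally verify that the repo resolves and see which versions it offers (a quick sanity check; the exact output will vary):
yum list kubeadm --showduplicates --disableexcludes=kubernetes | tail -5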

④ Set the sandbox_image mirror source to Alibaba Cloud google_containers mirror source (all nodes)

# Export the default config; config.toml does not exist by default
containerd config default > /etc/containerd/config.toml
grep sandbox_image  /etc/containerd/config.toml
sed -i "s#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#g"       /etc/containerd/config.toml
grep sandbox_image  /etc/containerd/config.toml


⑤ Configure containerd cgroup driver systemd (all nodes)

  • Since Kubernetes v1.24.0, dockershim is no longer used; containerd serves as the container runtime endpoint instead, so containerd must be present (it was installed automatically alongside Docker above). Here Docker is used only as a client; the actual container engine is containerd.
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
# After applying all changes, restart containerd
systemctl restart containerd

⑥ Start installing kubeadm, kubelet and kubectl (master node)

# If no version is specified, the latest is installed; at the time of writing that is 1.24.1
yum install -y kubelet-1.24.1  kubeadm-1.24.1  kubectl-1.24.1 --disableexcludes=kubernetes
# --disableexcludes=kubernetes: lift exclude rules for the kubernetes repo only
# Enable at boot and start immediately (--now starts the service right away)
systemctl enable --now kubelet

# Check the status; wait a while before checking, startup can be slow
systemctl status kubelet


  • Checking the logs reveals an error like the following:
kubelet.service: Main process exited, code=exited, status=1/FAILURE kubelet.service: Failed with result 'exit-code'.


  • When installing (or reinstalling) k8s, kubelet restarts continuously until kubeadm init or kubeadm join has run; this is normal behavior and resolves itself once init or join executes. The official documentation explains this as well, so the kubelet.service failures can be ignored at this stage.
  • View version:
kubectl version
yum info kubeadm


⑦ Use kubeadm to initialize the cluster (master node)

  • Pull the images first:
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.1
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.1
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.1
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.24.1
docker pull registry.aliyuncs.com/google_containers/pause:3.7
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.3-0
docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.6
  • Cluster initialization:
kubeadm init \
  --apiserver-advertise-address=192.168.0.113 \
  --image-repository registry.aliyuncs.com/google_containers \
  --control-plane-endpoint=cluster-endpoint \
  --kubernetes-version v1.24.1 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --v=5
# --image-repository string:    where to pull images from (available since 1.13). The default is k8s.gcr.io; here it is set to the domestic mirror registry.aliyuncs.com/google_containers
# --kubernetes-version string:  pin the Kubernetes version. The default, stable-1, downloads the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version (v1.24.1) skips that network request.
# --apiserver-advertise-address: which interface/IP on the master to use for communicating with the other cluster nodes. With multiple interfaces, specify it explicitly; otherwise kubeadm picks the interface with the default gateway. Use your master node's IP here.
# --pod-network-cidr:           the Pod network range. Kubernetes supports many network plugins, each with its own requirements for this flag; 10.244.0.0/16 is used because we will deploy flannel, which requires this CIDR.
# --control-plane-endpoint:     cluster-endpoint is a custom DNS name mapped to this IP (add the hosts entry: 192.168.0.113 cluster-endpoint). It lets you pass --control-plane-endpoint=cluster-endpoint to kubeadm init and the same DNS name to kubeadm join. Later you can repoint cluster-endpoint at the load balancer's address in a high availability setup.
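  • On success, the tail of the kubeadm init output looks roughly like the sketch below (the token and hash are placeholders; record the real join commands it prints):
Your Kubernetes control-plane has initialized successfully!
...
kubeadm join cluster-endpoint:6443 --token <token> \
        --discovery-token-ca-cert-hash sha256:<hash>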
  • Note that kubeadm does not support converting a single control-plane cluster created without the --control-plane-endpoint flag into a highly available cluster.
  • Reset and reinitialize:
kubeadm reset
rm -fr ~/.kube/ /etc/kubernetes/* /var/lib/etcd/*
kubeadm init \
  --apiserver-advertise-address=192.168.0.113  \
  --image-repository registry.aliyuncs.com/google_containers \
  --control-plane-endpoint=cluster-endpoint \
  --kubernetes-version v1.24.1 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16 \
  --v=5
# (the flags are explained above)
  • Configure environment variables:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Effective for the current session only (lost after the terminal reconnects)
export KUBECONFIG=/etc/kubernetes/admin.conf
# Permanent (recommended)
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source  ~/.bash_profile
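  • With kubectl configured, a first check is already possible; the master is expected to show NotReady at this point because no CNI plugin is installed yet (illustrative output):
kubectl get nodes
# NAME                   STATUS     ROLES           AGE   VERSION
# k8s-master-168-0-113   NotReady   control-plane   1m    v1.24.1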


  • The node still has a problem; checking the log /var/log/messages shows:
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"


⑧ Install the Pod network plugin (CNI: Container Network Interface) (master)

  • A Pod network plugin implementing the Container Network Interface (CNI) must be deployed so that pods can communicate with each other:
# Preferably pull the image in advance (all nodes)
docker pull quay.io/coreos/flannel:v0.14.0
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • Check the nodes again; they now report Ready, as the quick check below confirms:
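    • A minimal verification (flannel runs as a DaemonSet, so one flannel pod per node is expected):
kubectl get nodes
kubectl get pods -A | grep flannel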


⑨ Join the worker nodes to the cluster

  • Install kubelet first:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Enable at boot and start immediately (--now starts the service right away)
systemctl enable --now kubelet
systemctl status kubelet
  • If you don't have a token, you can get one by running the following command on the control plane node:
kubeadm token list
  • By default, tokens expire after 24 hours. If you want to join a node after the current token has expired, create a new token by running the following command on the control plane node:
kubeadm token create
# Check again
kubeadm token list
  • If you do not have the value for --discovery-token-ca-cert-hash, obtain it by executing the following command chain on the control plane node:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
  • If you did not record the join command printed by kubeadm init, you can regenerate it with the following command (recommended); it saves fetching the token and ca-cert-hash separately as above:
kubeadm token create --print-join-command
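    • The printed command has the following shape (values are illustrative; use the ones printed for your cluster) and is executed on each worker node:
kubeadm join cluster-endpoint:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>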
  • Wait a while here before checking the node status, because kube-proxy and flannel still need to come up on the new nodes:
kubectl get pods -A
kubectl get nodes


⑩ Configure IPVS

  • Load ip_vs related kernel modules:
modprobe -- ip_vs
modprobe -- ip_vs_sh
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
  • All nodes verify that ipvs is enabled:
lsmod |grep ip_vs
  • Install the ipvsadm tool:
yum install ipset ipvsadm -y
  • Edit the kube-proxy configuration file, and change the mode to ipvs:
kubectl edit  configmap -n kube-system  kube-proxy
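    • In the editor, locate the mode field in the config.conf section and change it from "" to ipvs (a sketch of the relevant lines only; leave the rest of the KubeProxyConfiguration unchanged):
    kind: KubeProxyConfiguration
    mode: "ipvs"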


  • Restart kube-proxy:
# Check first
kubectl get pod -n kube-system | grep kube-proxy
# Then delete the pods so they are recreated with the new config
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
# Check again
kubectl get pod -n kube-system | grep kube-proxy


  • View ipvs forwarding rules:
ipvsadm -Ln


⑪ Cluster high availability configuration

  • There are two options for configuring a highly available (HA) Kubernetes cluster:
    • Using stacked control plane nodes, where the etcd members and control plane nodes are co-located (the stacked etcd topology).


    • Using an external etcd cluster, where etcd runs on nodes separate from the control plane.


    • Here a new machine, 192.168.0.116, is added as another master node. Its configuration is the same as the master above, except that the final initialization step is not performed.
  • Modify the host name and configure hosts:
    • All nodes are uniformly configured as follows:
# Run on 192.168.0.113
hostnamectl set-hostname k8s-master-168-0-113
# Run on 192.168.0.114
hostnamectl set-hostname k8s-node1-168-0-114
# Run on 192.168.0.115
hostnamectl set-hostname k8s-node2-168-0-115
# Run on 192.168.0.116
hostnamectl set-hostname k8s-master2-168-0-116
    • Configure hosts:
cat >> /etc/hosts<<EOF
192.168.0.113 k8s-master-168-0-113 cluster-endpoint
192.168.0.114 k8s-node1-168-0-114
192.168.0.115 k8s-node2-168-0-115
192.168.0.116 k8s-master2-168-0-116
EOF
  • Configure ssh mutual trust:
# Just keep pressing Enter through the prompts
ssh-keygen

ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master-168-0-113
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node1-168-0-114
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-node2-168-0-115
ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master2-168-0-116
  • Time synchronization:
yum install chrony -y
systemctl start chronyd
systemctl enable chronyd
chronyc sources
  • Turn off the firewall:
systemctl stop firewalld
systemctl disable firewalld
  • Close swap:
# Disable temporarily; swap is turned off mainly for performance reasons
swapoff -a
# Verify that swap is off
free
# Disable permanently
sed -ri 's/.*swap.*/#&/' /etc/fstab
  • Disable SELinux:
# Disable temporarily
setenforce 0
# Disable permanently
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
  • Allow iptables to inspect bridged traffic (optional, all nodes):
    • To load the module explicitly, run sudo modprobe br_netfilter; verify it is loaded with lsmod | grep br_netfilter:
sudo modprobe br_netfilter
lsmod | grep br_netfilter
    • For the Linux node's iptables to see bridged traffic correctly, make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration, for example:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system
  • Install Docker (all nodes). (dockerd calls containerd's API; containerd is the intermediate communication component between dockerd and runC, so starting the docker service also starts the containerd service):
# Configure yum repositories
cd /etc/yum.repos.d ; mkdir bak; mv CentOS-Linux-* bak/
# centos7
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# centos8
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo

# Install the yum-config-manager tool
yum -y install yum-utils
# Add the Docker CE repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install docker-ce
yum install -y docker-ce
# Start Docker
systemctl start docker
# Enable at boot
systemctl enable docker
# Show the version number
docker --version
# Show detailed version info
docker version

# Configure a Docker registry mirror
# Edit /etc/docker/daemon.json (create it if it does not exist),
# add the following content, then restart the docker service:
cat >/etc/docker/daemon.json<<EOF
{
   "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
EOF
# Reload
systemctl reload docker

# Check
systemctl status docker containerd
  • Configure k8s yum sources (all nodes):
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[k8s]
name=k8s
enabled=1
gpgcheck=0
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
EOF
  • Set the sandbox_image mirror source to Alibaba Cloud google_containers mirror source (all nodes):
# Export the default config; config.toml does not exist by default
containerd config default > /etc/containerd/config.toml
grep sandbox_image  /etc/containerd/config.toml
sed -i "s#k8s.gcr.io/pause#registry.aliyuncs.com/google_containers/pause#g"       /etc/containerd/config.toml
grep sandbox_image  /etc/containerd/config.toml


  • Configure containerd cgroup driver systemd:
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
# After applying all changes, restart containerd
systemctl restart containerd
  • Start installing kubeadm, kubelet and kubectl (master node):
# If no version is specified, the latest is installed; at the time of writing that is 1.24.1
yum install -y kubelet-1.24.1  kubeadm-1.24.1  kubectl-1.24.1 --disableexcludes=kubernetes
# --disableexcludes=kubernetes: lift exclude rules for the kubernetes repo only
# Enable at boot and start immediately (--now starts the service right away)
systemctl enable --now kubelet

# Check the status; wait a while before checking, startup can be slow
systemctl status kubelet

# Check versions

kubectl version
yum info kubeadm
  • Join the new master to the k8s cluster:
# If the certificates have expired, regenerate and upload them with the following command; it prints a certificate key that is used below
kubeadm init phase upload-certs --upload-certs
# You can also specify a custom --certificate-key during init for later use by join. To generate such a key (not executed here; the command above is sufficient):
kubeadm certs certificate-key

kubeadm token create --print-join-command

kubeadm join cluster-endpoint:6443 --token wswrfw.fc81au4yvy6ovmhh --discovery-token-ca-cert-hash sha256:43a3924c25104d4393462105639f6a02b8ce284728775ef9f9c30eed8e0abc0f --control-plane --certificate-key 8d2709697403b74e35d05a420bd2c19fd8c11914eb45f2ff22937b245bed5b68

# --control-plane tells kubeadm join to create a new control plane; this flag is required when joining as a master
# --certificate-key ... downloads the control plane certificates from the kubeadm-certs Secret in the cluster and decrypts them with the given key. The value is the key printed by 'kubeadm init phase upload-certs --upload-certs' above


  • Execute the following commands as prompted:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • Check:
kubectl get nodes
kubectl get pods -A -owide


  • Although there are now two masters, there is still only a single entry point from the outside, so a load balancer is required; if one master fails, traffic automatically switches to the other master node.

⑫ Deploy Nginx+Keepalived high availability load balancer


  • Install Nginx and Keepalived:
# Run on both master nodes
yum install nginx keepalived -y
  • Nginx configuration:
    • Configure on two master nodes:
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
# Layer-4 load balancing for the two master apiserver instances
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;
    upstream k8s-apiserver {
       # Master APISERVER IP:PORT
       server 192.168.0.113:6443;
       # Master2 APISERVER IP:PORT
       server 192.168.0.116:6443;
    }
    server {
       listen 16443;
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  /var/log/nginx/access.log  main;
    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 2048;
    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;
    server {
        listen       80 default_server;
        server_name  _;

        location / {
        }
    }
}
EOF
    • If you only need high availability and do not want k8s-apiserver load balancing, nginx can be skipped, but configuring k8s-apiserver load balancing is recommended.
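    • Before starting nginx, validate the configuration syntax (note that the stream block requires nginx's stream module, which on CentOS may ship in a separate nginx-mod-stream package):
nginx -t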
  • Keepalived configuration (master):
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from fage@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51 # VRRP route ID; must be unique per instance
    priority 100    # Priority; set this to 90 on the backup server
    advert_int 1    # VRRP heartbeat advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP
    virtual_ipaddress {
        192.168.0.120/24
    }
    track_script {
        check_nginx
    }
}
EOF
    • vrrp_script: specifies the script that checks nginx's health (failover is decided based on nginx's status)
    • virtual_ipaddress: the virtual IP (VIP)
    • Check nginx status script:
cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
  • Keepalived configuration (backup):
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from fage@qq.com
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}
vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51 # VRRP route ID; must match the master
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.120/24
    }
    track_script {
        check_nginx
    }
}
EOF
  • The same nginx health-check script, on the backup node:
cat > /etc/keepalived/check_nginx.sh  << "EOF"
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
  • Start the services and enable them at boot:
systemctl daemon-reload
systemctl restart nginx && systemctl enable nginx && systemctl status nginx
systemctl restart keepalived && systemctl enable keepalived && systemctl status keepalived
  • View VIP:
ip a


  • Modify the hosts file (all nodes), changing the IP that cluster-endpoint previously mapped to so that it now points to the VIP:
192.168.0.113 k8s-master-168-0-113
192.168.0.114 k8s-node1-168-0-114
192.168.0.115 k8s-node2-168-0-115
192.168.0.116 k8s-master2-168-0-116
192.168.0.120 cluster-endpoint
  • Test verification:
    • View version (load balancing test verification):
curl -k https://cluster-endpoint:16443/version


    • High availability test: shut down the k8s-master-168-0-113 node, then verify that the cluster still responds:
shutdown -h now
curl -k https://cluster-endpoint:16443/version
kubectl get nodes
kubectl get pods -A
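    • A simple loop helps watch the failover happen in real time while the master shuts down (a rough sketch; stop it with Ctrl-C):
while true; do curl -k -s https://cluster-endpoint:16443/version | grep gitVersion; sleep 1; done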
  • Stacked clusters carry a risk of coupled failure: if one node goes down, both an etcd member and a control plane instance are lost, and redundancy is reduced. This risk can be mitigated by adding more control plane nodes.

3. Deploying the Kubernetes Dashboard Management Platform

① Dashboard deployment

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
kubectl get pods -n kubernetes-dashboard
  • This only allows access from inside the cluster. For external access, either deploy an Ingress or set the Service to the NodePort type; here the Service port is exposed:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.0/aio/deploy/recommended.yaml
  • The modified content is as follows:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.6.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}


  • Redeploy:
kubectl delete -f recommended.yaml
kubectl apply -f recommended.yaml
kubectl get svc,pods -n kubernetes-dashboard


② Create a login user

cat >ServiceAccount.yaml<<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
EOF
kubectl apply -f ServiceAccount.yaml
  • Create and get a login token:
kubectl -n kubernetes-dashboard create token admin-user
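  • The token created this way is short-lived (roughly an hour by default); kubectl create token accepts a --duration flag if a longer-lived token is preferred, e.g.:
kubectl -n kubernetes-dashboard create token admin-user --duration=24h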

③ Configure hosts and log in to the dashboard web UI

192.168.0.120 cluster-endpoint
  • Log in at https://cluster-endpoint:31443.


  • Enter the token created above to log in.


4. Deploying the Harbor Image Registry

① Install Helm

mkdir -p /opt/k8s/helm && cd /opt/k8s/helm
wget https://get.helm.sh/helm-v3.9.0-rc.1-linux-amd64.tar.gz
tar -xf helm-v3.9.0-rc.1-linux-amd64.tar.gz
ln -s /opt/k8s/helm/linux-amd64/helm /usr/bin/helm
helm version
helm help

② Configure hosts

192.168.0.120 myharbor.com

③ Create the TLS certificate

mkdir /opt/k8s/helm/stl && cd /opt/k8s/helm/stl
# Generate the CA private key
openssl genrsa -out ca.key 4096
# Generate the CA certificate
openssl req -x509 -new -nodes -sha512 -days 3650 \
 -subj "/C=CN/ST=Guangdong/L=Shenzhen/O=harbor/OU=harbor/CN=myharbor.com" \
 -key ca.key \
 -out ca.crt
# Create the domain certificate: generate the private key
openssl genrsa -out myharbor.com.key 4096
# Generate the certificate signing request (CSR)
openssl req -sha512 -new \
    -subj "/C=CN/ST=Guangdong/L=Shenzhen/O=harbor/OU=harbor/CN=myharbor.com" \
    -key myharbor.com.key \
    -out myharbor.com.csr
# Generate the x509 v3 extension file
cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=myharbor.com
DNS.2=*.myharbor.com
DNS.3=hostname
EOF
# Sign the Harbor access certificate
openssl x509 -req -sha512 -days 3650 \
    -extfile v3.ext \
    -CA ca.crt -CAkey ca.key -CAcreateserial \
    -in myharbor.com.csr \
    -out myharbor.com.crt
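  • Optionally verify the signed certificate, for example that the SAN entries are in place:
openssl x509 -in myharbor.com.crt -noout -text | grep -A1 'Subject Alternative Name'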

④ Install ingress

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
  • Install via a YAML file (the method used in this article):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
  • If pulling the images fails, change the image addresses as follows and install again:
# You can pull the images first, then install
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.2.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
# Replace the image addresses
sed -i 's@k8s.gcr.io/ingress-nginx/controller:v1.2.0\(.*\)@registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.2.0@' deploy.yaml
sed -i 's@k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1\(.*\)$@registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1@' deploy.yaml

### A few more changes are needed:
# 1. Change kind to DaemonSet and comment out replicas:, since a DaemonSet runs one pod per node
# 2. Add hostNetwork: true
# 3. Change the LoadBalancer Service type to NodePort
# 4. Add - --watch-ingress-without-class=true below --validating-webhook-key
# 5. Make the master nodes schedulable
kubectl taint nodes k8s-master-168-0-113 node-role.kubernetes.io/control-plane:NoSchedule-
kubectl taint nodes k8s-master2-168-0-116 node-role.kubernetes.io/control-plane:NoSchedule-

kubectl apply -f deploy.yaml


⑤ Install NFS

  • Install NFS on all nodes:
yum -y install  nfs-utils rpcbind
  • Create a shared directory on the master node and grant permissions:
mkdir /opt/nfsdata
# Grant permissions on the shared directory
chmod 666 /opt/nfsdata
  • Configure the exports file:
cat > /etc/exports<<EOF
/opt/nfsdata *(rw,no_root_squash,no_all_squash,sync)
EOF
# Apply the configuration
exportfs -r
  • exportfs command options:
-a  export or unexport all directories
-r  re-export all directories
-u  unexport a directory
-v  verbose; list the shared directories (run these on the server side)
  • Start rpc and nfs (the client only needs the rpc service); note the order:
systemctl start rpcbind
systemctl start nfs-server
systemctl enable rpcbind
systemctl enable nfs-server
  • Check:
showmount -e
# VIP
showmount -e 192.168.0.120
  • showmount options:
-e  show the NFS server's export list
-a  show the file resources mounted on this host
-v  show the version number
  • Client setup:
# Install
yum -y install  nfs-utils rpcbind
# Start the rpc service
systemctl start rpcbind
systemctl enable rpcbind
# Create the mount point
mkdir /mnt/nfsdata
# Mount
echo "192.168.0.120:/opt/nfsdata /mnt/nfsdata     nfs    defaults  0 1">> /etc/fstab
mount -a
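  • A quick check that the mount works (a file created on the client should appear on the server):
# On the client
mount | grep nfsdata
touch /mnt/nfsdata/nfs-test
# On the server, the file should now be visible
ls /opt/nfsdata/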
  • rsync data synchronization:
    • rsync install:
# Must be installed on both ends
yum -y install rsync
    • Configuration, add in /etc/rsyncd.conf:
cat >/etc/rsyncd.conf<<EOF
uid = root
gid = root
# Confine rsync to the source directory
use chroot = yes
# Listen address
address = 192.168.0.113
# Listen port tcp/udp 873; check with: cat /etc/services | grep rsync
port 873
# Log file location
log file = /var/log/rsyncd.log
# PID file location
pid file = /var/run/rsyncd.pid
# Client addresses allowed to connect
hosts allow = 192.168.0.0/16
# Shared module name
[nfsdata]
# Actual path of the source directory
path = /opt/nfsdata
comment = Document Root of www.kgc.com
# Whether clients may upload files; defaults to true for all modules
read only = yes
# File types that are not compressed again during sync
dont compress = *.gz *.bz2 *.tgz *.zip *.rar *.z
# Authorized accounts, space-separated; omit for anonymous access; independent of system accounts
auth users = backuper
# Data file holding the account credentials
secrets file = /etc/rsyncd_users.db
EOF
    • Configure rsyncd_users.db:
cat >/etc/rsyncd_users.db<<EOF
backuper:123456
EOF
# As officially recommended, restrict permissions to 600
chmod 600 /etc/rsyncd_users.db
    • Common rsyncd.conf parameters:
uid=root             the user rsync runs as
gid=root             the group rsync runs as
use chroot=no        if true, the daemon chroots to the path before clients transfer files; a safer setting, though on an internal network it matters less
max connections=200  maximum number of connections; default 0 means unlimited, a negative value closes the module
timeout=400          default 0 means no timeout; 300-600 (5-10 minutes) is recommended
pid file             the rsync daemon writes its pid to this file after starting; if the file already exists, rsync terminates rather than overwrite it
lock file            the lock file supporting the "max connections" parameter, so the total connection count never exceeds the limit
log file             if unset or set incorrectly, rsync uses rsyslog for log output
ignore errors        ignore I/O errors
read only=false      whether clients may upload files; defaults to true for all modules
list=false           whether clients may list the available modules; default yes
hosts allow          hostname, IP address, or address range of clients allowed to connect; absent by default, i.e. all may connect
hosts deny           hostname, IP address, or address range of clients denied connection; absent by default
auth users           space- or comma-separated users allowed to use the module; users need not exist on the local system; default is passwordless access for all
secrets file         the file storing usernames and passwords, format username:password; passwords should not exceed 8 characters
[backup]             the module name, in square brackets; any name works, but a meaningful one eases maintenance
path                 the filesystem path the daemon serves for this module; directory permissions must match the configuration, or reads and writes will fail
    • Common rsync command parameters:
rsync --help

rsync [options] SOURCE DESTINATION

Common options:
-r    recursive mode; include all files in directories and subdirectories
-l    copy symbolic links as symbolic links
-v    verbose output during sync
-z    compress while transferring files
-p    preserve file permissions
-a    archive mode; recursive and preserves attributes, equivalent to -rlptgoD
-t    preserve file timestamps
-g    preserve file group (super-user only)
-o    preserve file owner (super-user only)
-H    preserve hard links
-A    preserve ACL attributes
-D    preserve device files and other special files
--delete    delete files present at the destination but not at the source
--checksum  decide whether to skip files based on checksums
    • Start the service (data source machine):
# rsync listens on port 873
# rsync runs in client/server mode
rsync --daemon --config=/etc/rsyncd.conf
netstat -tnlp|grep :873
    • Execute the command to synchronize data:
# Run on the destination machine
# rsync -avz user@source-host:/source-dir  dest-dir
rsync -avz root@192.168.0.113:/opt/nfsdata/* /opt/nfsdata/
    • crontab scheduled synchronization (scheduled sync via crontab is not ideal; rsync+inotify can provide real-time synchronization, which this article does not cover):
# Configure crontab to sync every five minutes; not a great approach
*/5 * * * * rsync -avz root@192.168.0.113:/opt/nfsdata/* /opt/nfsdata/

⑥ Create the NFS provisioner and persistent StorageClass

  • Add helm repository:
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
  • Install the NFS provisioner with Helm. The default image is inaccessible, so the Docker Hub image willdockerhub/nfs-subdir-external-provisioner:v4.0.2 is used instead. Note that StorageClasses are cluster-scoped and usable from every namespace.
helm install nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
  --namespace=nfs-provisioner \
  --create-namespace \
  --set image.repository=willdockerhub/nfs-subdir-external-provisioner \
  --set image.tag=v4.0.2 \
  --set replicaCount=2 \
  --set storageClass.name=nfs-client \
  --set storageClass.defaultClass=true \
  --set nfs.server=192.168.0.120 \
  --set nfs.path=/opt/nfsdata
  • nfs.server above is set to the VIP to achieve high availability.
  • Check:
kubectl get pods,deploy,sc -n nfs-provisioner
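  • A quick way to confirm dynamic provisioning works is to create a small test PVC against the nfs-client StorageClass and check that it becomes Bound (the PVC name here is just an example):
cat > test-pvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-nfs-pvc
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
kubectl apply -f test-pvc.yaml
kubectl get pvc test-nfs-pvc
kubectl delete -f test-pvc.yaml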


⑦ Deploy Harbor (HTTPS mode)

  • Create a namespace:
kubectl create ns harbor
  • Create the certificate secret:
kubectl create secret tls myharbor.com --key myharbor.com.key --cert myharbor.com.crt -n harbor
kubectl get secret myharbor.com -n harbor
  • Add the chart repository:
helm repo add harbor https://helm.goharbor.io
  • Install harbor via helm:
helm install myharbor --namespace harbor harbor/harbor \
  --set expose.ingress.hosts.core=myharbor.com \
  --set expose.ingress.hosts.notary=notary.myharbor.com \
  --set-string expose.ingress.annotations.'nginx\.org/client-max-body-size'="1024m" \
  --set expose.tls.secretName=myharbor.com \
  --set persistence.persistentVolumeClaim.registry.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.jobservice.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.database.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.redis.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.trivy.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.chartmuseum.storageClass=nfs-client \
  --set persistence.enabled=true \
  --set externalURL=https://myharbor.com \
  --set harborAdminPassword=Harbor12345
  • Wait a while, then check the resource status:
kubectl get ingress,svc,pods,pvc -n harbor


  • Fixing the problem of the Ingress having no ADDRESS: the logs report error: endpoints "default-http-backend" not found, so deploy a default backend:
cat << EOF > default-http-backend.yaml
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: harbor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.4
#        image: gcr.io/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---

apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: harbor
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
EOF
kubectl apply -f default-http-backend.yaml
  • Uninstall and redeploy:
# Uninstall
helm uninstall myharbor -n harbor
kubectl get pvc -n harbor | awk 'NR!=1{print $1}' | xargs kubectl delete pvc -n harbor

# Redeploy
helm install myharbor --namespace harbor harbor/harbor \
  --set expose.ingress.hosts.core=myharbor.com \
  --set expose.ingress.hosts.notary=notary.myharbor.com \
  --set-string expose.ingress.annotations.'nginx\.org/client-max-body-size'="1024m" \
  --set expose.tls.secretName=myharbor.com \
  --set persistence.persistentVolumeClaim.registry.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.jobservice.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.database.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.redis.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.trivy.storageClass=nfs-client \
  --set persistence.persistentVolumeClaim.chartmuseum.storageClass=nfs-client \
  --set persistence.enabled=true \
  --set externalURL=https://myharbor.com \
  --set harborAdminPassword=Harbor12345


  • Visit Harbor at https://myharbor.com. Common Harbor operations:
    • Create the project bigdata.


    • Configure the private registry by adding the following to /etc/docker/daemon.json:
"insecure-registries":["https://myharbor.com"]
    • Restart docker:
systemctl restart docker
    • Log in to harbor on the server:
docker login https://myharbor.com


    • Tag and upload the image to harbor:
docker tag rancher/pause:3.6 myharbor.com/bigdata/pause:3.6
docker push myharbor.com/bigdata/pause:3.6
  • Modify the containerd configuration:
    • With docker-engine, only /etc/docker/daemon.json needed changes, but the new version of k8s uses containerd, so containerd must be configured as well or image pulls will fail. The certificate (ca.crt) can be downloaded from the Harbor web page:


    • Create domain directory:
mkdir /etc/containerd/myharbor.com
cp ca.crt /etc/containerd/myharbor.com/
    • Edit the configuration file /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]
        [plugins."io.containerd.grpc.v1.cri".registry.configs."myharbor.com".tls]
          ca_file = "/etc/containerd/myharbor.com/ca.crt"
        [plugins."io.containerd.grpc.v1.cri".registry.configs."myharbor.com".auth]
          username = "admin"
          password = "Harbor12345"

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."myharbor.com"]
          endpoint = ["https://myharbor.com"]


    • Restart containerd:
# Reload the configuration
systemctl daemon-reload
# Restart containerd
systemctl restart containerd
    • Basic usage:
# Just replace docker with crictl; the commands are much the same
crictl pull myharbor.com/bigdata/mysql:5.7.38
    • Running crictl reports the following error; the solution is below:
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"
    • This error stems from the removed dockershim endpoint, which is not used here, so it does not affect usage, but it is better to fix it; the solution is as follows:
cat <<EOF> /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
    • Pull the image again:
crictl pull myharbor.com/bigdata/mysql:5.7.38


Origin: blog.csdn.net/Forever_wj/article/details/131633366