Deploy a complete set of enterprise-level K8s clusters (kubeadm mode)

Note: This article is from Teacher Li Zhenliang.

"Deploy a Complete Enterprise K8s Cluster"

v1.20, kubeadm method

author information

Li Zhenliang (A Liang), WeChat: xyz12366699

DevOps Practical Academy

http://www.aliangedu.cn

Notes

The document has a navigation pane for easy reading. If it is not displayed on the left, check whether Word's navigation pane is enabled.

Please credit the author when reprinting, and do not engage in unethical copying!

last updated

2021-04-21

Table of contents

1. Prerequisites
1.1 Two ways to deploy Kubernetes clusters in the production environment
1.2 Prepare the environment
1.3 Operating system initialization configuration
2. Deploy Nginx+Keepalived High Availability Load Balancer
2.1 Install the software packages (master/backup)
2.2 Nginx configuration file (same on master and backup)
2.3 keepalived configuration file (Nginx Master)
2.4 keepalived configuration file (Nginx Backup)
2.5 Start the services and enable them at boot
2.6 Check keepalived working status
2.7 Nginx+Keepalived High Availability Test
3. Deploy Etcd cluster
3.1 Prepare the cfssl certificate generation tool
3.2 Generate Etcd certificate
3.3 Download binaries from Github
3.4 Deploy Etcd cluster
4. Install Docker/kubeadm/kubelet [all nodes]
4.1 Install Docker
4.2 Add Alibaba Cloud YUM software source
4.3 Install kubeadm, kubelet and kubectl
5. Deploy Kubernetes Master
5.1 Initialize Master1
5.2 Initialize Master2
5.3 Access load balancer test
6. Join Kubernetes Node
7. Deploy network components
8. Deploy Dashboard

1. Prerequisites

1.1 Two ways to deploy Kubernetes clusters in the production environment

Currently, there are two main ways to deploy Kubernetes clusters in production:

  • kubeadm

Kubeadm is a K8s deployment tool that provides kubeadm init and kubeadm join for rapid deployment of Kubernetes clusters.

  • binary package

Download the distribution's binary package from github, and manually deploy each component to form a Kubernetes cluster.

Here, kubeadm is used to build a cluster.

kubeadm tool function:

  • kubeadm init: Initialize a Master node
  • kubeadm join: join the working node to the cluster
  • kubeadm upgrade: upgrade K8s version
  • kubeadm token: manage the token used by kubeadm join
  • kubeadm reset: clears any changes made to the host by kubeadm init or kubeadm join
  • kubeadm version: print kubeadm version
  • kubeadm alpha: preview available new features

1.2 Prepare the environment

Server requirements:

  • Recommended minimum hardware configuration: 2-core CPU, 2G memory, 30G hard disk
  • The servers should preferably have Internet access, since images need to be pulled online; if a server cannot reach the Internet, download the required images in advance and import them onto the node

Software Environment:

Software            Version
Operating system    CentOS 7.8_x64 (mini)
Docker              19-ce
Kubernetes          1.20

Overall server plan:

Role                         IP                     Individually installed components
k8s-master1                  192.168.16.80          docker, etcd, nginx, keepalived
k8s-master2                  192.168.16.81          docker, etcd, nginx, keepalived
k8s-node1                    192.168.16.82          docker, etcd
Load balancer external IP    192.168.16.88 (VIP)

Architecture diagram:

1.3 Operating system initialization configuration

#Close firewall 

systemctl stop firewalld 

systemctl disable firewalld 



# Close selinux 

sed -i 's/enforcing/disabled/' /etc/selinux/config # Permanent 

setenforce 0 # Temporary 



# Close swap 

swapoff -a # Temporary 

sed -ri 's/.*swap.*/#&/' /etc/fstab # Permanent 



# Set the hostname according to the plan 

hostnamectl set-hostname <hostname> 



# Add hosts 

cat >> /etc/hosts << EOF 

192.168.16.80 k8s-master01
192.168.16.81 k8s-master02
192.168.16.82 k8s-node01
192.168.16.83 k8s-node02
192.168.16.88 k8s-vip 

EOF 



#Transfer bridged IPv4 traffic to the iptables chain 

cat > /etc/sysctl.d/k8s.conf << EOF 

net.bridge.bridge-nf-call-ip6tables = 1 

net.bridge.bridge-nf-call-iptables = 1 

EOF 

sysctl --system # apply the settings 



# time synchronization 

yum install ntpdate -y 

ntpdate time.windows.com

# Or use another time source:
ntpdate cn.pool.ntp.org

# Add a cron job (crontab -e) so time stays synchronized:
0 5 * * * /usr/sbin/ntpdate -u cn.pool.ntp.org

# And run ntpdate once at boot as well:
systemctl enable ntpdate.service

2. Deploy Nginx+Keepalived High Availability Load Balancer

As a container cluster system, Kubernetes provides Pod self-healing through health checks and restart policies, distributes Pods across nodes through its scheduling algorithm, maintains the expected number of replicas, and automatically reschedules Pods onto other Nodes when a Node fails, achieving high availability at the application layer.

For the Kubernetes cluster itself, high availability involves two more aspects: high availability of the Etcd database and high availability of the Kubernetes Master components. In a cluster built with kubeadm, Etcd runs as a single instance by default, which is a single point of failure, so here we build an independent Etcd cluster.

The Master node acts as the control center and keeps the whole cluster in a healthy working state by continuously communicating with the kubelet and kube-proxy on the worker nodes. If the Master node fails, you cannot use the kubectl tool or the API for any cluster management.

The Master node runs three main services: kube-apiserver, kube-controller-manager, and kube-scheduler. kube-controller-manager and kube-scheduler already achieve high availability through leader election, so Master high availability mainly concerns the kube-apiserver component. Since kube-apiserver exposes an HTTP API, its high availability is similar to that of a web server: put a load balancer in front of it to spread the load, and it can be scaled horizontally.

kube-apiserver high availability architecture diagram:

  • Nginx is a mainstream web server and reverse proxy. Here it is used as a Layer 4 (TCP) load balancer for the apiserver.
  • Keepalived is mainstream high-availability software that provides server hot standby by binding a VIP. In the topology above, Keepalived decides whether to fail over (move the VIP) based on the running state of Nginx: when the Nginx master node goes down, the VIP is automatically bound to the Nginx backup node, so the VIP stays reachable and Nginx remains highly available.

Note: To save machines, the load balancer is co-located on the K8s master nodes here. It can also be deployed separately from the K8s cluster, as long as nginx can reach the apiserver.

2.1 Install the software packages (master/backup)

 yum install epel-release -y

 yum install nginx keepalived -y

2.2 Nginx configuration file (same on master and backup)

cat > /etc/nginx/nginx.conf << "EOF" 

user nginx; 

worker_processes auto; 

error_log /var/log/nginx/error.log; 

pid /run/nginx.pid; 



include /usr/share/nginx/modules/*.conf; 



events { 

    worker_connections 1024; 

} 



# Four-layer load balancing, providing load balancing for the two Master apiserver components
stream { 

    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent'; 

    access_log /var/log/nginx/k8s-access.log main; 

    upstream k8s-apiserver { 
       server 192.168.16.80:6443; # Master1 APISERVER IP:PORT 
       server 192.168.16.81:6443; # Master2 APISERVER IP:PORT 
    } 

    server {
       listen 16443; # Since nginx is reused on the master node, it cannot listen on 6443, otherwise it would conflict with apiserver 
       proxy_pass k8s-apiserver; 
    } 
} 



http { 

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"'; 



    access_log /var/log/nginx/access.log main; 



    sendfile on; 

    tcp_nopush on; 

    tcp_nodelay on; 

    keepalive_timeout 65; 

    types_hash_max_size 2048; 



    include /etc/nginx/mime.types; 

    default_type application/octet-stream;



    server {

        listen       80 default_server;

        server_name  _;



        location / {

        }

    }

}

EOF

2.3 keepalived configuration file (Nginx Master)

cat > /etc/keepalived/keepalived.conf << EOF 

global_defs { 

   notification_email { 

     [email protected] 

     [email protected] 

     [email protected] 

   } 

   notification_email_from [email protected]   

   smtp_server 127.0.0.1 

   smtp_connect_timeout 30 

   router_id NGINX_MASTER 

} 



vrrp_script check_nginx { 

    script "/etc/keepalived/check_nginx.sh" 

} 



vrrp_instance VI_1 { 

    state MASTER 

    interface ens33 #Modify to the actual network card name 

    virtual_router_id 51 # VRRP routing ID instance, each instance is unique 

    priority 100 # Priority, standby server set 90

    advert_int 1 # Specify the VRRP heartbeat packet notification interval, the default is 1 second 

    authentication { 

        auth_type PASS       

        auth_pass 1111 

    }   

    # virtual IP 

    virtual_ipaddress { 

        192.168.16.88/24 

    } 

    track_script { 

        check_nginx 

    } 

} 

EOF
  • vrrp_script: specifies the script to check the working status of nginx (judging whether to failover according to the status of nginx)
  • virtual_ipaddress: Virtual IP (VIP)

Prepare the script to check the running status of nginx in the above configuration file:

cat > /etc/keepalived/check_nginx.sh  << "EOF"

#!/bin/bash

count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")



if [ "$count" -eq 0 ];then

    exit 1

else

    exit 0

fi

EOF

chmod +x /etc/keepalived/check_nginx.sh

2.4 keepalived configuration file (Nginx Backup)

cat > /etc/keepalived/keepalived.conf << EOF 

global_defs { 

   notification_email { 

     [email protected] 

     [email protected] 

     [email protected] 

   } 

   notification_email_from [email protected]   

   smtp_server 127.0.0.1 

   smtp_connect_timeout 30 

   router_id NGINX_BACKUP 

} 



vrrp_script check_nginx { 

    script "/etc/keepalived/check_nginx.sh" 

} 



vrrp_instance VI_1 { 

    state BACKUP 

    interface ens33 

    virtual_router_id 51 # VRRP routing ID instance, each instance is unique 

    priority 90 

    advert_int 1 

    authentication {

        auth_type PASS      

        auth_pass 1111 

    }  

    virtual_ipaddress { 

        192.168.16.88/24

    } 

    track_script {

        check_nginx

    } 

}

EOF

Prepare the script to check the running status of nginx in the above configuration file:

cat > /etc/keepalived/check_nginx.sh  << "EOF"

#!/bin/bash

count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")



if [ "$count" -eq 0 ];then

    exit 1

else

    exit 0

fi

EOF

chmod +x /etc/keepalived/check_nginx.sh

Note: keepalived decides whether to fail over based on the exit code returned by the script (0 means nginx is working normally, non-zero means it is abnormal).
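The script can be sanity-checked by hand before relying on it (a quick check, not part of the original steps):

bash /etc/keepalived/check_nginx.sh; echo $?   # prints 0 while something is listening on 16443, 1 otherwise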

2.5 Start the services and enable them at boot

systemctl daemon-reload

systemctl start nginx           # note: nginx did not start successfully here (see the troubleshooting note below)

systemctl start keepalived

systemctl enable nginx

systemctl enable keepalived
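If nginx fails to start at this step with an error such as unknown directive "stream", a likely cause is that the stream module is not installed; on the EPEL build of nginx it is packaged separately (this troubleshooting note is an addition, not part of the original document):

yum install -y nginx-mod-stream      # provides the ngx_stream module for the EPEL nginx build
nginx -t                             # verify the configuration now parses
systemctl restart nginx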

2.6 Check keepalived working status

ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host 

       valid_lft forever preferred_lft forever

2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 00:0c:29:04:f7:2c brd ff:ff:ff:ff:ff:ff

    inet 192.168.31.80/24 brd 192.168.31.255 scope global noprefixroute ens33

       valid_lft forever preferred_lft forever

    inet 192.168.31.88/24 scope global secondary ens33

       valid_lft forever preferred_lft forever

    inet6 fe80::20c:29ff:fe04:f72c/64 scope link 

       valid_lft forever preferred_lft forever

As shown, the virtual IP (192.168.31.88 in this sample output) is bound to the ens33 network card, which indicates keepalived is working normally.

2.7 Nginx+Keepalived High Availability Test

Shut down Nginx on the active node, and test whether the VIP has drifted to the server on the standby node.

Run pkill nginx on the Nginx Master, then run ip addr on the Nginx Backup and confirm that the VIP is now bound there.
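A minimal test sequence, using the VIP from the plan above (192.168.16.88):

# On the Nginx Master node: stop nginx so the keepalived check script starts failing
pkill nginx

# On the Nginx Backup node: the VIP should appear on ens33 within a few seconds
ip addr show ens33 | grep 192.168.16.88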

3. Deploy Etcd cluster

If you encounter problems in the study or the document is wrong, you can contact A Liang~ Wechat: xyz12366699

Etcd is a distributed key-value storage system, and Kubernetes uses Etcd for data storage. A cluster built with kubeadm starts only one Etcd Pod by default, which is a single point of failure and is strongly discouraged in production. Here we use 3 servers to form a cluster that tolerates 1 machine failure; you can also use 5 machines to tolerate 2 failures.
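In general, an etcd cluster of n members needs a quorum of floor(n/2)+1 members to remain available, so it tolerates floor((n-1)/2) member failures: 3 nodes tolerate 1 failure, 5 tolerate 2, and 7 tolerate 3.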

Node name    IP
etcd-1       192.168.16.80
etcd-2       192.168.16.81
etcd-3       192.168.16.82

Note: To save machines, etcd is co-located with the K8s node machines here. It can also be deployed separately from the K8s cluster, as long as the apiserver can connect to it.

3.1 Prepare the cfssl certificate generation tool

cfssl is an open source certificate management tool that uses json files to generate certificates, which is more convenient to use than openssl.

Any server can be used for this operation; here the Master node is used.

[root@k8s-master01 ~]# cd /opt/

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl

mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

3.2 Generate Etcd certificate

1. Self-signed certificate authority (CA)

Create a working directory:

mkdir -p ~/etcd_tls

cd ~/etcd_tls

Self-signed CA:

cat > ca-config.json << EOF

{

  "signing": {

    "default": {

      "expiry": "87600h"

    },

    "profiles": {

      "www": {

         "expiry": "87600h",

         "usages": [

            "signing",

            "key encipherment",

            "server auth",

            "client auth"

        ]

      }

    }

  }

}

EOF



cat > ca-csr.json << EOF

{

    "CN": "etcd CA",

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

        {

            "C": "CN",

            "L": "Beijing",

            "ST": "Beijing"

        }

    ]

}

EOF

Generate the certificate:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

The ca.pem and ca-key.pem files will be generated.

2. Use self-signed CA to issue Etcd HTTPS certificate

Create a certificate request file:

cat > server-csr.json << EOF

{

    "CN": "etcd",

    "hosts": [

    "192.168.16.80",

    "192.168.16.81",

    "192.168.16.82"

    ],

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

        {

            "C": "CN",

            "L": "BeiJing",

            "ST": "BeiJing"

        }

    ]

}

EOF

Note: The IP in the hosts field of the above file is the cluster internal communication IP of all etcd nodes, and none of them can be missing! In order to facilitate later expansion, you can write a few more reserved IPs.

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

The server.pem and server-key.pem files will be generated.
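Optionally, confirm that the issued certificate contains all etcd node IPs in its Subject Alternative Names (a quick check, not part of the original steps):

openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"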

3.3 Download binaries from Github

Download link: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

3.4 Deploy Etcd cluster

The following operations are performed on node 1. To simplify the operation, all files generated by node 1 will be copied to node 2 and node 3 later.

1. Create a working directory and unzip the binary package

mkdir /opt/etcd/{bin,cfg,ssl} -p

tar zxvf etcd-v3.4.9-linux-amd64.tar.gz

mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

2. Create etcd configuration file

cat > /opt/etcd/cfg/etcd.conf << EOF

#[Member]

ETCD_NAME="etcd-1"

ETCD_DATA_DIR="/var/lib/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.16.80:2380"

ETCD_LISTEN_CLIENT_URLS="https://192.168.16.80:2379"



#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.16.80:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.16.80:2379"

ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.16.80:2380,etcd-2=https://192.168.16.81:2380,etcd-3=https://192.168.16.82:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

EOF
  • ETCD_NAME: node name, unique within the cluster
  • ETCD_DATA_DIR: data directory
  • ETCD_LISTEN_PEER_URLS: cluster communication listening address
  • ETCD_LISTEN_CLIENT_URLS: client access listening address
  • ETCD_INITIAL_ADVERTISE_PEER_URLS: cluster advertise address
  • ETCD_ADVERTISE_CLIENT_URLS: client advertise address
  • ETCD_INITIAL_CLUSTER: cluster node addresses
  • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
  • ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" when joining an existing cluster

3. systemd management etcd

cat > /usr/lib/systemd/system/etcd.service << EOF

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target



[Service]

Type=notify

EnvironmentFile=/opt/etcd/cfg/etcd.conf

ExecStart=/opt/etcd/bin/etcd \

--cert-file=/opt/etcd/ssl/server.pem \

--key-file=/opt/etcd/ssl/server-key.pem \

--peer-cert-file=/opt/etcd/ssl/server.pem \

--peer-key-file=/opt/etcd/ssl/server-key.pem \

--trusted-ca-file=/opt/etcd/ssl/ca.pem \

--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \

--logger=zap

Restart=on-failure

LimitNOFILE=65536



[Install]

WantedBy=multi-user.target

EOF

4. Copy the certificate just generated

Copy the newly generated certificate to the path in the configuration file:

cp ~/etcd_tls/ca*pem ~/etcd_tls/server*pem /opt/etcd/ssl/

5. Start etcd and enable it at boot

systemctl daemon-reload

systemctl start etcd   

systemctl enable etcd

6. Copy all the files generated by node 1 above to node 2 and node 3

scp -r /opt/etcd/ [email protected]:/opt/

scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/

scp -r /opt/etcd/ [email protected]:/opt/

scp /usr/lib/systemd/system/etcd.service [email protected]:/usr/lib/systemd/system/

Then modify the node name and current server IP in the etcd.conf configuration file on node 2 and node 3 respectively:

vi /opt/etcd/cfg/etcd.conf 

#[Member] 
ETCD_NAME="etcd-1"   # Modify here: node 2 uses etcd-2, node 3 uses etcd-3 
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" 
ETCD_LISTEN_PEER_URLS="https://192.168.16.80:2380"   # Modify here to the current server IP 
ETCD_LISTEN_CLIENT_URLS="https://192.168.16.80:2379"   # Modify here to the current server IP 

#[Clustering] 
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.16.80:2380"   # Modify here to the current server IP 
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.16.80:2379"   # Modify here to the current server IP 
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.16.80:2380,etcd-2=https://192.168.16.81:2380,etcd-3=https://192.168.16.82:2380" 
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Finally, start etcd and set it to start at boot, as above.

7. Check the cluster status

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.16.80:2379,https://192.168.16.81:2379,https://192.168.16.82:2379" endpoint health --write-out=table



+----------------------------+--------+-------------+-------+

|          ENDPOINT    | HEALTH |    TOOK     | ERROR |

+----------------------------+--------+-------------+-------+

| https://192.168.31.61:2379 |   true | 10.301506ms |    |

| https://192.168.31.63:2379 |   true | 12.87467ms |     |

| https://192.168.31.62:2379 |   true | 13.225954ms |    |

+----------------------------+--------+-------------+-------+
 
  

If a member is unhealthy, check the cause in the system log: tailf -n 10 /var/log/messages

A typical error is: request sent was ignored by remote peer due to cluster ID mismatch

Problem solved (workaround for a brand-new cluster only): stop the etcd service on all three nodes, delete the etcd data on all three nodes, then reload and start again; all three etcd services should then start normally.
[root@k8s-master01 ~]# systemctl stop etcd
 
  
[root@k8s-master01 ~]# cd /var/lib/etcd/default.etcd
[root@k8s-master01 default.etcd]# rm -rf member/
[root@k8s-master01 default.etcd]# systemctl daemon-reload
[root@k8s-master01 default.etcd]# systemctl start etcd
 
  
Analysis: the reason the third etcd service previously could not start (its status stayed at "activating") was that that server's clock was not synchronized with the other two.

ntpdate cn.pool.ntp.org

Besides installing ntpdate and running it once against a time server, also set the ntpdate service to start automatically at boot.

If the endpoint health table above reports HEALTH true for every member, the etcd cluster deployment is successful.

If there is a problem, the first step is always to check the logs: /var/log/messages or journalctl -u etcd.
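For example, on any etcd node:

journalctl -u etcd --no-pager -n 50    # last 50 lines from the etcd unit
tail -n 50 /var/log/messages           # or check the system log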

4. Install Docker/kubeadm/kubelet [all nodes]

Here Docker is used as the container engine; it could also be replaced with another one, such as containerd.

4.1 Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

yum -y install docker-ce

systemctl enable docker && systemctl start docker

Configure the mirror download accelerator:

cat > /etc/docker/daemon.json << EOF

{

  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]

}

EOF



systemctl restart docker

docker info

4.2 Add Alibaba Cloud YUM software source

cat > /etc/yum.repos.d/kubernetes.repo << EOF

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

4.3 Install kubeadm, kubelet and kubectl

Because versions are updated frequently, specific version numbers are pinned here:

yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0

 systemctl enable kubelet
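Optionally verify that the expected versions were installed:

kubeadm version -o short      # should print v1.20.0
kubelet --version
kubectl version --client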

5. Deploy Kubernetes Master

If you encounter problems in the study or the document is wrong, you can contact A Liang~ Wechat: xyz12366699

5.1 Initialize Master1

Generate an initialization configuration file:

cat > kubeadm-config.yaml << EOF

apiVersion: kubeadm.k8s.io/v1beta2

bootstrapTokens:

- groups:

  - system:bootstrappers:kubeadm:default-node-token

  token: 9037x2.tcaqnpaqkra9vsbw

  ttl: 24h0m0s

  usages:

  - signing

  - authentication

kind: InitConfiguration

localAPIEndpoint:

  advertiseAddress: 192.168.16.80

  bindPort: 6443

nodeRegistration:

  criSocket: /var/run/dockershim.sock

  name: k8s-master01

  taints:

  - effect: NoSchedule

    key: node-role.kubernetes.io/master

---

apiServer:

  certSANs:  # include all Master/LB/VIP IPs, none can be missing; to ease future expansion, add a few reserved IPs as well 

  - k8s-master01 

  - k8s-master02 

  - 192.168.16.80 

  - 192.168.16.81 

  - 192.168.16.82 

  - 127.0.0.1 

  extraArgs: 

    authorization-mode: Node,RBAC 

  timeoutForControlPlane: 4m0s 

apiVersion: kubeadm.k8s.io/v1beta2 

certificatesDir: /etc/kubernetes/pki 

clusterName: kubernetes 

controlPlaneEndpoint: 192.168.16.88:16443 # load balancer virtual IP (VIP) and port 

controllerManager: {} 

dns: 

  type: CoreDNS 

etcd: 

  external: # use external etcd 

    endpoints: 

    - https://192.168.16.80:2379 # etcd cluster with 3 nodes 

    - https://192.168.16.81:2379

    - https://192.168.16.82:2379 

    caFile: /opt/etcd/ssl/ca.pem # Certificate required to connect etcd 

    certFile: /opt/etcd/ssl/server.pem 

    keyFile: /opt/etcd/ssl/server-key.pem 

imageRepository: registry.aliyuncs.com/google_containers # The default registry k8s.gcr.io is not reachable from China, so the Alibaba Cloud mirror registry is used instead 

kind: ClusterConfiguration 

kubernetesVersion: v1.20.0 # K8s version, consistent with the packages installed above 

networking: 

  dnsDomain: cluster.local 

  podSubnet: 10.244.0.0/16 # Pod network, must match the CNI network component yaml deployed later 

  serviceSubnet: 10.96.0.0/12 # Cluster-internal virtual network, the unified access entry for Pods 

scheduler: {} 

EOF

Then bootstrap the cluster with the configuration file:

kubeadm init --config kubeadm-config.yaml

...

Your Kubernetes control-plane has initialized successfully!



To start using your cluster, you need to run the following as a regular user:



  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config



Alternatively, if you are the root user, you can run:



  export KUBECONFIG=/etc/kubernetes/admin.conf



You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/



You can now join any number of control-plane nodes by copying certificate authorities

and service account keys on each node and then running the following as root:



  kubeadm join 192.168.31.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \

    --discovery-token-ca-cert-hash sha256:b1e726042cdd5df3ce62e60a2f86168cd2e64bff856e061e465df10cd36295b8 \

    --control-plane 



Then you can join any number of worker nodes by running the following on each as root:



kubeadm join 192.168.31.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \

    --discovery-token-ca-cert-hash sha256:b1e726042cdd5df3ce62e60a2f86168cd2e64bff856e061e465df10cd36295b8 

After initialization completes, the output contains two join commands: the one with --control-plane is used to join additional master nodes (forming a multi-master cluster), and the one without it is used to join worker nodes.

Copy the kubeconfig file used by kubectl to connect to the cluster into the default path:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get node

NAME          STATUS     ROLES                  AGE     VERSION

k8s-master1   NotReady   control-plane,master   6m42s   v1.20.0

5.2 Initialize Master2

Copy the certificate generated by the Master1 node to Master2:

 scp -r /etc/kubernetes/pki/ 192.168.16.81:/etc/kubernetes/

Copy the control-plane join command (the one with --control-plane) from the kubeadm init output and execute it on Master2:

  kubeadm join 192.168.31.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \

    --discovery-token-ca-cert-hash sha256:b1e726042cdd5df3ce62e60a2f86168cd2e64bff856e061e465df10cd36295b8 \

    --control-plane 

Copy the kubeconfig file used by kubectl into the default path:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
 
  
kubectl get node

NAME          STATUS     ROLES                  AGE     VERSION

k8s-master1   NotReady   control-plane,master   28m     v1.20.0

k8s-master2   NotReady   control-plane,master   2m12s   v1.20.0

Note: the nodes show NotReady because the network plugin has not been deployed yet.

5.3 Access load balancer test

On any node in the K8s cluster, use curl against the VIP to query the K8s version:

curl -k https://192.168.16.88:16443/version

{

  "major": "1",

  "minor": "20",

  "gitVersion": "v1.20.0",

  "gitCommit": "e87da0bd6e03ec3fea7933c4b5263d151aafd07c",

  "gitTreeState": "clean",

  "buildDate": "2021-02-18T16:03:00Z",

  "goVersion": "go1.15.8",

  "compiler": "gc",

  "platform": "linux/amd64"
}

The K8s version information is returned correctly, which means the load balancer is working properly. The request flow is: curl -> VIP (nginx) -> apiserver

You can also see which apiserver IP the request was forwarded to by looking at the Nginx log:

tail /var/log/nginx/k8s-access.log -f

192.168.31.71 192.168.31.71:6443 - [02/Apr/2021:19:17:57 +0800] 200 423

192.168.31.71 192.168.31.72:6443 - [02/Apr/2021:19:18:50 +0800] 200 423

6. Join Kubernetes Node

Execute the following on 192.168.16.82 (the Node).

To add a new node to the cluster, execute the kubeadm join command output by kubeadm init:

kubeadm join 192.168.16.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \

    --discovery-token-ca-cert-hash sha256:e6a724bb7ef8bb363762fbaa088f6eb5975e0c654db038560199a7063735a697 

Other nodes are added later in the same way.

Note: the default token is valid for 24 hours. Once it expires it can no longer be used and a new one must be created, which can be done directly with: kubeadm token create --print-join-command
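For example, on a master node:

kubeadm token list                          # inspect existing tokens and their expiry
kubeadm token create --print-join-command   # create a new token and print the full worker join command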

7. Deploy network components

Calico is a pure Layer 3 data center networking solution and is currently one of the mainstream network options for Kubernetes.
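The calico.yaml manifest used below is assumed to be available locally. One way to obtain it and align the Pod CIDR with the podSubnet (10.244.0.0/16) configured earlier (a sketch; the exact Calico version is not specified in the original document):

wget https://docs.projectcalico.org/manifests/calico.yaml
# In calico.yaml, uncomment CALICO_IPV4POOL_CIDR and set its value to "10.244.0.0/16"
# so that it matches the podSubnet in kubeadm-config.yaml.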

Deploy Calico:

kubectl apply -f calico.yaml

kubectl get pods -n kube-system

Once the Calico Pods are all Running, the nodes become Ready:

kubectl get node

NAME          STATUS   ROLES                  AGE   VERSION

k8s-master1    Ready    control-plane,master   50m   v1.20.0

k8s-master2    Ready    control-plane,master   24m   v1.20.0

k8s-node1     Ready    <none>            20m   v1.20.0

8. Deploy Dashboard

Dashboard is an official UI that can be used for basic management of K8s resources.
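The kubernetes-dashboard.yaml applied below is assumed to have been prepared in advance. One way to prepare it, exposing the Dashboard on NodePort 30001 as used later in this section (a sketch; the Dashboard version shown is an assumption, pick one compatible with K8s 1.20):

# Download the upstream Dashboard manifest (version is an assumption)
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml -O kubernetes-dashboard.yaml

# Edit the kubernetes-dashboard Service in the file: set spec.type to NodePort and ports[0].nodePort to 30001,
# or patch the Service after applying the manifest:
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30001}]}}'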

kubectl apply -f kubernetes-dashboard.yaml 

#View deployment 

kubectl get pods -n kubernetes-dashboard

Access address: https://NodeIP:30001

Create a service account and bind the default cluster-admin administrator cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Use the output token to log in to Dashboard.

Origin: blog.csdn.net/Wemesun/article/details/126385710