Use kubeasz to deploy a Kubernetes 1.20.1 cluster on SUSE 12

1. Summary

  • kubeasz is a Kubernetes cluster deployment tool hosted on GitHub
  • When using kubeasz to deploy on SUSE 12, a few things need to be prepared in advance:
    • The chrony role currently only supports Debian, CentOS and Red Hat, so on SUSE the chrony time synchronization service has to be set up manually
    • The iptables path in the docker service startup file needs to be modified
    • The br_netfilter and ip_conntrack kernel modules need to be loaded
    • ~/.bashrc needs to be created manually

2. Environment preparation

2.1. Introduction to the environment

IP               HOSTNAME   SERVICE
192.168.10.175   k8s-01     master&node
192.168.10.176   k8s-02     master&node
192.168.10.177   k8s-03     master&node
  • Official suggestion: a master node needs at least 2C2G
  • The Linux kernel needs to be 4.x or higher (a quick check sketch for these requirements follows the version output below)
  • If you create the virtual machines yourself, just give each disk 100G straight away; VMware only consumes what is actually written, so the disk file stays small, and you avoid the awkward situation of running out of disk halfway through your own experiments later
# Distribution
linux-oz6w:~ # cat /etc/os-release
NAME="SLES"
VERSION="12-SP3"
VERSION_ID="12.3"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
ID="sles"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:12:sp3"
# Kernel
linux-oz6w:~ # uname -r
4.4.73-5-default
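  • If you want to double-check these requirements, the following is a minimal sketch of my own (the 2G memory threshold is approximated as 2,000,000 kB); run it on each host
#!/usr/bin/env bash
# rough prerequisite check: >= 2 CPU cores, >= 2G RAM, kernel >= 4.x
cpu=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
kernel_major=$(uname -r | cut -d. -f1)
[ "${cpu}" -ge 2 ]          || echo "WARN: only ${cpu} CPU core(s), 2 recommended"
[ "${mem_kb}" -ge 2000000 ] || echo "WARN: only ${mem_kb} kB memory, 2G recommended"
[ "${kernel_major}" -ge 4 ] || echo "WARN: kernel $(uname -r) is older than 4.x"
echo "prerequisite check finished"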

2.2. Configure a static network

linux-oz6w:~ # cp /etc/sysconfig/network/ifcfg-eth0{,.bak}    # back it up first; if something goes wrong you can still restore it
# Configure the IP address
linux-oz6w:~ # cat > /etc/sysconfig/network/ifcfg-eth0  <<EOF  
BOOTPROTO='static'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='192.168.10.175/24'
MTU=''
NAME=''
NETMASK=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'
DHCLIENT_SET_DEFAULT_ROUTE='yes'
EOF
# Configure the default gateway
linux-oz6w:~ # cat > /etc/sysconfig/network/ifroute-eth0  <<EOF 
default 192.168.10.2 - eth0
EOF
# Configure DNS
linux-oz6w:~ # cat >> /etc/resolv.conf   <<EOF                   
nameserver 192.168.10.2
EOF
# Restart the network so the configuration takes effect
linux-oz6w:~ # systemctl restart network && ping www.baidu.com -w 3
PING www.a.shifen.com (180.101.49.12) 56(84) bytes of data.
64 bytes from 180.101.49.12: icmp_seq=1 ttl=128 time=13.0 ms
64 bytes from 180.101.49.12: icmp_seq=2 ttl=128 time=11.8 ms
64 bytes from 180.101.49.12: icmp_seq=3 ttl=128 time=10.5 ms

--- www.a.shifen.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms
rtt min/avg/max/mdev = 10.533/11.819/13.077/1.042 ms
# Add hosts entries
linux-oz6w:~ # cat >> /etc/hosts <<EOF
192.168.10.175 k8s-01
192.168.10.176 k8s-02
192.168.10.177 k8s-03
EOF
# Change the hostname
linux-oz6w:~ # hostnamectl set-hostname --static k8s-01
# Disconnect the terminal and reconnect; the hostname will then be updated
  • The remaining two machines are configured in the same way
  • All of the following operations only need to be performed on the k8s-01 machine

2.3. Configure passwordless SSH

# The root password of my machines is 123.com; remember to change it to yours
#!/usr/bin/env bash
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
for host in k8s-01 k8s-02 k8s-03
do
    expect -c "
    spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$host
        expect {
                \"*yes/no*\" {send \"yes\r\"; exp_continue}
                \"*Password*\" {send \"123.com\r\"; exp_continue}     
                \"*Password*\" {send \"123.com\r\";}
               }"
done
  • Use ssh root@k8s-01 to verify that passwordless login and hosts resolution work
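  • To check all three nodes in one go, a small loop like the sketch below also works (BatchMode makes ssh fail instead of prompting if key authentication is broken); each line should print the node's hostname
#!/usr/bin/env bash
for host in k8s-01 k8s-02 k8s-03
do
    ssh -o BatchMode=yes root@${host} hostname
done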

2.4. Load the kernel modules and create files in batch

#!/usr/bin/env bash
for host in k8s-01 k8s-02 k8s-03
do
  ssh root@${host}  "modprobe br_netfilter"
  ssh root@${host}  "modprobe ip_conntrack"
  ssh root@${host}  "touch ~/.bashrc"
done
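  • Note that modprobe only loads the modules for the current boot. If you want them to load again after a reboot, one option (a sketch, assuming the nodes rely on systemd's modules-load mechanism, which SLES 12 does) is to drop a file under /etc/modules-load.d
#!/usr/bin/env bash
# make br_netfilter and ip_conntrack load automatically at boot
for host in k8s-01 k8s-02 k8s-03
do
  ssh root@${host} "printf 'br_netfilter\nip_conntrack\n' > /etc/modules-load.d/k8s.conf"
done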

2.5. Install ansible

  • Here we choose to install ansible with pip
2.5.1. Install pip
k8s-01:~ # wget https://pypi.python.org/packages/source/s/setuptools/setuptools-11.3.tar.gz
k8s-01:~ # tar xf setuptools-11.3.tar.gz
k8s-01:~ # python setuptools-11.3/setup.py install
k8s-01:~ # easy_install https://mirrors.aliyun.com/pypi/packages/0b/f5/be8e741434a4bf4ce5dbc235aa28ed0666178ea8986ddc10d035023744e6/pip-20.2.4.tar.gz#sha256=85c99a857ea0fb0aedf23833d9be5c40cf253fe24443f0829c7b472e23c364a1
2.5.2. Install ansible
k8s-01:~ # pip install ansible -i https://mirrors.aliyun.com/pypi/simple/
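  • After pip finishes, confirm that ansible is usable (the exact version reported depends on what pip resolves at install time)
k8s-01:~ # ansible --version
k8s-01:~ # ansible localhost -m ping      # quick local sanity check; it should reply "pong"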

2.6. Download kubeasz

  • kubeasz on GitHub has been updated since then, so it no longer matches what I wrote in earlier posts. I kept a local copy of kubeasz and have now uploaded it to Baidu Cloud. I have not worked through the new version yet; given the version differences, it will probably still run into quite a few problems on SUSE
  • Link: https://pan.baidu.com/s/1rFscCCLHhD4O3os_9yKqEQ
    Extraction code: o1bs
After downloading it from Baidu Cloud, extract it locally and upload everything under the kubeasz directory to /etc/ansible on the server, so that it looks like this:
k8s-01:~ # mkdir /etc/ansible
k8s-01:~ # cd /etc/ansible/
k8s-01:/etc/ansible # ll
total 88
-rw-r--r-- 1 root root   414 Feb 12 16:13 .gitignore
-rw-r--r-- 1 root root   395 Feb 12 16:13 01.prepare.yml
-rw-r--r-- 1 root root    58 Feb 12 16:13 02.etcd.yml
-rw-r--r-- 1 root root   149 Feb 12 16:13 03.containerd.yml
-rw-r--r-- 1 root root   137 Feb 12 16:13 03.docker.yml
-rw-r--r-- 1 root root   470 Feb 12 16:13 04.kube-master.yml
-rw-r--r-- 1 root root   140 Feb 12 16:13 05.kube-node.yml
-rw-r--r-- 1 root root   408 Feb 12 16:13 06.network.yml
-rw-r--r-- 1 root root    77 Feb 12 16:13 07.cluster-addon.yml
-rw-r--r-- 1 root root  3686 Feb 12 16:13 11.harbor.yml
-rw-r--r-- 1 root root   431 Feb 12 16:13 22.upgrade.yml
-rw-r--r-- 1 root root  2119 Feb 12 16:13 23.backup.yml
-rw-r--r-- 1 root root   113 Feb 12 16:13 24.restore.yml
-rw-r--r-- 1 root root  1752 Feb 12 16:13 90.setup.yml
-rw-r--r-- 1 root root  1127 Feb 12 16:13 91.start.yml
-rw-r--r-- 1 root root  1120 Feb 12 16:13 92.stop.yml
-rw-r--r-- 1 root root   337 Feb 12 16:13 99.clean.yml
-rw-r--r-- 1 root root  5654 Feb 12 16:13 README.md
-rw-r--r-- 1 root root 10283 Feb 12 16:13 ansible.cfg
drwxr-xr-x 1 root root   534 Feb 12 16:13 bin
drwxr-xr-x 1 root root    18 Feb 12 16:13 dockerfiles
drwxr-xr-x 1 root root    76 Feb 12 16:12 docs
drwxr-xr-x 1 root root   432 Feb 12 16:13 down
drwxr-xr-x 1 root root    60 Feb 12 16:13 example
drwxr-xr-x 1 root root   232 Feb 12 16:12 manifests
drwxr-xr-x 1 root root   424 Feb 12 16:13 pics
drwxr-xr-x 1 root root   338 Feb 12 16:12 roles
drwxr-xr-x 1 root root   386 Feb 12 16:13 tools

2.7. Configure chrony time synchronization

k8s-01:~ # zypper in -y chrony
k8s-01:~ # cp /etc/chrony.conf{,.bak}       # keep a backup, just in case
k8s-01:~ # vim /etc/chrony.conf
# Configuration on k8s-01
server ntp.aliyun.com iburst
server ntp1-7.aliyun.com iburst
makestep 1.0 3
rtcsync
allow 192.168.10.0/16
local stratum 10
k8s-01:~ # systemctl enable chronyd.service --now
# Configuration on k8s-02 and k8s-03
server 192.168.10.175 iburst
k8s-02:~ # systemctl enable chronyd.service --now
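  • Once chronyd is running on all three nodes, chronyc can be used to confirm synchronization (the exact output depends on which upstream servers are reachable)
# on k8s-01, check the upstream NTP sources
k8s-01:~ # chronyc sources -v
# on k8s-02 and k8s-03, the only source listed should be 192.168.10.175
k8s-02:~ # chronyc sources -v
k8s-02:~ # chronyc tracking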

2.8. Modify the docker.service.j2 file

k8s-01:~ # cd /etc/ansible/
k8s-01:/etc/ansible # vim roles/docker/templates/docker.service.j2
ExecStartPost=/usr/sbin/iptables -I FORWARD -s 0.0.0.0/0 -j ACCEPT   # change the path of the iptables command (on SUSE it is /usr/sbin/iptables)
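  • The same edit can be made non-interactively; the sketch below first confirms where iptables lives and then rewrites whatever path the template currently uses (check the grep output before trusting the sed)
k8s-01:/etc/ansible # command -v iptables       # should print /usr/sbin/iptables on SLES 12
k8s-01:/etc/ansible # grep iptables roles/docker/templates/docker.service.j2
k8s-01:/etc/ansible # sed -i 's#ExecStartPost=.*iptables#ExecStartPost=/usr/sbin/iptables#' roles/docker/templates/docker.service.j2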

2.9. Configure ansible host inventory file

k8s-01:/etc/ansible # cp example/hosts.multi-node ./hosts   # note the path: copy it into the /etc/ansible directory
k8s-01:/etc/ansible # vim hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
# variable 'NODE_NAME' is the distinct name of a member in 'etcd' cluster
[etcd]
192.168.10.175 NODE_NAME=etcd1
192.168.10.176 NODE_NAME=etcd2
192.168.10.177 NODE_NAME=etcd3

# master node(s)
[kube-master]
192.168.10.175
192.168.10.176
192.168.10.177

# work node(s)
[kube-node]
192.168.10.175
192.168.10.176
192.168.10.177

# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'yes' to install a harbor server; 'no' to integrate with existed one
# 'SELF_SIGNED_CERT': 'no' you need put files of certificates named harbor.pem and harbor-key.pem in directory 'down'
[harbor]
#192.168.1.8 HARBOR_DOMAIN="harbor.yourdomain.com" NEW_INSTALL=no SELF_SIGNED_CERT=yes

# [optional] loadbalance for accessing k8s from outside
[ex-lb]
#192.168.1.6 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443
#192.168.1.7 LB_ROLE=master EX_APISERVER_VIP=192.168.1.250 EX_APISERVER_PORT=8443

# [optional] ntp server for the cluster
[chrony]
#192.168.1.1

[all:vars]
# --------- Main Variables ---------------
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="flannel"

# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.68.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="172.20.0.0/16"

# NodePort Range
NODE_PORT_RANGE="20000-40000"

# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="cluster.local."

# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
bin_dir="/opt/kube/bin"

# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"

# Deploy Directory (kubeasz workspace)
base_dir="/etc/ansible"
-------------------------------------------------------------------------------------
# My machines do not have enough memory, so the 3 masters and 3 nodes share the same three hosts. For now only the Kubernetes cluster is deployed; harbor and ex-lb are not. If you are interested, try them yourself
# If you need harbor, uncomment the line under [harbor] and change the IP and the domain name you want to use
# If you need high availability, just configure the IPs under [ex-lb]
  • After the configuration is complete, verify that ansible can connect to all nodes
k8s-01:/etc/ansible # ansible all -m ping
192.168.10.177 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.10.176 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
192.168.10.175 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

3. Install and verify the Kubernetes cluster

3.1. Install the Kubernetes cluster

  • Any of the yml files in this directory can be executed on its own. Running 90.setup.yml deploys everything according to the configuration in the host inventory file; if you need to add harbor or ex-lb later, you only need to run the corresponding yml file separately (a step-by-step sketch follows the full run below)
01.prepare.yml
02.etcd.yml
03.containerd.yml
03.docker.yml
04.kube-master.yml
05.kube-node.yml
06.network.yml
07.cluster-addon.yml
11.harbor.yml
22.upgrade.yml
23.backup.yml
24.restore.yml
90.setup.yml
91.start.yml
92.stop.yml
99.clean.yml
k8s-01:/etc/ansible # ansible-playbook 90.setup.yml
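  • If you would rather run the stages one at a time instead of the all-in-one 90.setup.yml, the numbered playbooks can be executed in order; a sketch (03.containerd.yml is skipped here because this deployment uses docker)
k8s-01:/etc/ansible # ansible-playbook 01.prepare.yml
k8s-01:/etc/ansible # ansible-playbook 02.etcd.yml
k8s-01:/etc/ansible # ansible-playbook 03.docker.yml
k8s-01:/etc/ansible # ansible-playbook 04.kube-master.yml
k8s-01:/etc/ansible # ansible-playbook 05.kube-node.yml
k8s-01:/etc/ansible # ansible-playbook 06.network.yml
k8s-01:/etc/ansible # ansible-playbook 07.cluster-addon.yml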

3.2. Verify the Kubernetes cluster

  • After the deployment is complete, disconnect the terminal and reconnect so that kubectl completion takes effect

  • Check whether each node of k8s is ready

k8s-01:~ # kubectl get nodes -o wide
NAME             STATUS   ROLES    AGE     VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                              KERNEL-VERSION     CONTAINER-RUNTIME
192.168.10.175   Ready    master   3m18s   v1.20.1   192.168.10.175   <none>        SUSE Linux Enterprise Server 12 SP3   4.4.73-5-default   docker://19.3.14
192.168.10.176   Ready    master   3m18s   v1.20.1   192.168.10.176   <none>        SUSE Linux Enterprise Server 12 SP3   4.4.73-5-default   docker://19.3.14
192.168.10.177   Ready    master   3m18s   v1.20.1   192.168.10.177   <none>        SUSE Linux Enterprise Server 12 SP3   4.4.73-5-default   docker://19.3.14
  • Check which pods are in the Kubernetes cluster and which nodes they are running on
k8s-01:~ # kubectl get pod -A -o wide
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE     IP               NODE             NOMINATED NODE   READINESS GATES
kube-system   coredns-5787695b7f-9wzv8                     1/1     Running   0          8m54s   172.20.1.2       192.168.10.176   <none>           <none>
kube-system   dashboard-metrics-scraper-79c5968bdc-r8md7   1/1     Running   0          8m16s   172.20.1.3       192.168.10.176   <none>           <none>
kube-system   kube-flannel-ds-amd64-j5dnz                  1/1     Running   0          9m35s   192.168.10.176   192.168.10.176   <none>           <none>
kube-system   kube-flannel-ds-amd64-r7kgh                  1/1     Running   0          9m35s   192.168.10.177   192.168.10.177   <none>           <none>
kube-system   kube-flannel-ds-amd64-vnnzc                  1/1     Running   0          9m35s   192.168.10.175   192.168.10.175   <none>           <none>
kube-system   kubernetes-dashboard-c4c6566d6-n8hs9         1/1     Running   0          8m16s   172.20.2.2       192.168.10.177   <none>           <none>
kube-system   metrics-server-8568cf894b-gxnvr              1/1     Running   0          8m22s   172.20.0.2       192.168.10.175   <none>           <none>
  • View all services
# As you can see, the dashboard has also been deployed successfully; just visit https://192.168.10.175:23292
k8s-01:~ # kubectl get svc -A
NAMESPACE     NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes                  ClusterIP   10.68.0.1       <none>        443/TCP                  6m28s
kube-system   dashboard-metrics-scraper   ClusterIP   10.68.114.225   <none>        8000/TCP                 2m54s
kube-system   kube-dns                    ClusterIP   10.68.0.2       <none>        53/UDP,53/TCP,9153/TCP   3m33s
kube-system   kubernetes-dashboard        NodePort    10.68.146.89    <none>        443:23292/TCP            2m55s
kube-system   metrics-server              ClusterIP   10.68.18.87     <none>        443/TCP                  3m
How to get the dashboard login token:
k8s-01:~ # kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-pw96q
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 7e8b9c7e-f1a1-4dc7-acb1-ec72ccfd2192

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1350 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlROSUt0NWV5Q045SlJ5WXdmSXZyRmRYU3RiZklLQkp5bEh6b2ZXYlRmTGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXB3OTZxIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3ZThiOWM3ZS1mMWExLTRkYzctYWNiMS1lYzcyY2NmZDIxOTIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.HVBPYED-m12WOf_4G81JGeYXduWYF3j-94GLvgUCHMxcbnPDWX2WTvIrQp4tyDVCfge6HkgCFeIZoBLNa5Xc_rDRjwzqVp9VVcKEGK0i6aEPWz2dHCfzJ8XG_jC8J87nK4wG6ZT-N-VOF2kljdfBh2mS_nx7G9LEanJELcK65177MG-cWJ9RLiieOSBu4L0elCeuqzI5cdeq67YoQuJ_0LAHdix27oiHBBfi9GKauLQv9Po4QEjhtsHsOMKsYLM_pe1cvUwGtXAz46PeHdTvmrzbaACz6HKD2b3OTZ33633BGy7UgByGw9TNlXa81nGFRBwTg_nkijqhIYmZk8iBmg
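If you only need the token string itself (for pasting into the dashboard login page), a jsonpath query avoids parsing the describe output; a sketch, assuming the secret is still named admin-user-token-*:
k8s-01:~ # kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d; echo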
  • At this point, the entire Kubernetes cluster has been deployed and you can start learning
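  • As an optional smoke test, you can run a throwaway workload and then clean it up; a sketch (the name test-nginx is arbitrary)
k8s-01:~ # kubectl create deployment test-nginx --image=nginx
k8s-01:~ # kubectl expose deployment test-nginx --port=80 --type=NodePort
k8s-01:~ # kubectl get pod,svc -o wide | grep test-nginx
k8s-01:~ # kubectl delete svc,deployment test-nginx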
