Installing a Highly Available Kubernetes 1.19 Cluster with kubeadm (External etcd)


Decoupling the control plane from etcd lowers cluster risk: losing a single master or a single etcd node has little impact on the cluster, and an external etcd is easier to maintain and restore.

Cluster plan

| Host     | IP              | Role             |
| -------- | --------------- | ---------------- |
| vip      | 192.168.100.240 | virtual IP (VIP) |
| master01 | 192.168.100.241 | etcd1, master    |
| master02 | 192.168.100.242 | etcd2, master    |
| node01   | 192.168.100.243 | etcd3, node      |

Initial setup (required on master01, master02, and node01)

## Install the OS (CentOS 7)
Linux node02 3.10.0-1062.el7.x86_64 #1 SMP Wed Aug 7 18:08:02 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

## Configure the OS
###### Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

###### Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

###### Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab

###### Set up yum and EPEL repos
yum install wget telnet -y
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache

###### Kernel parameters (/etc/sysctl.d/k8s.conf)
modprobe br_netfilter
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
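Note that modprobe br_netfilter only loads the module for the current boot. If you also want it loaded automatically after a reboot, one option (not part of the original steps) is a modules-load.d entry:

# load br_netfilter automatically at boot via systemd-modules-load
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf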

###### Enable IPVS
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Add the Docker yum repository
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install and start Docker
yum install -y docker-ce-18.09.8 docker-ce-cli-18.09.8 containerd.io
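The heading above says "install and start Docker", but no start command is shown. A minimal sketch to enable and start it follows; the daemon.json with the systemd cgroup driver is an optional extra that kubelet documentation recommends, not something the original article configures:

systemctl enable docker
systemctl start docker

# optional (assumption, not in the original article): switch Docker to the systemd cgroup driver
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i "cgroup driver"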

# Install IPVS-related packages
yum install -y nfs-utils ipset ipvsadm

Installation steps

I. Install keepalived and create the VIP

1. Install keepalived
Run on both master01 and master02:

yum install -y keepalived

2. Configuration
master01 configuration:
vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
 
vrrp_instance VI_1 {
    state MASTER             # change to BACKUP on the backup node
    interface eth0           # primary network interface
    virtual_router_id 44     # virtual router ID, must match on master and backup
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111       # authentication password, must match on master and backup
    }
    virtual_ipaddress {      # virtual IP (VIP)
        192.168.100.240
    }
}

virtual_server 192.168.100.240 6443 {    # externally exposed virtual IP
    delay_loop 6             # real-server health-check interval, in seconds
    lb_algo rr               # scheduling algorithm, rr = round-robin
    lb_kind DR               # LVS forwarding mode (DR = direct routing)
    protocol TCP             # check real-server state over TCP

    real_server 192.168.100.241 6443 {   # first node
        weight 3             # node weight
        TCP_CHECK {          # health-check method
            connect_timeout 3      # connection timeout
            nb_get_retry 3         # number of retries
            delay_before_retry 3   # delay between retries, in seconds
        }
    }

    real_server 192.168.100.242 6443 {   # second node
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

master02 configuration:
vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
 
vrrp_instance VI_1 {
    state BACKUP             # BACKUP on the backup node
    interface eth0
    virtual_router_id 44     # virtual router ID, must match on master and backup
    priority 80              # priority, lower than the MASTER
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111       # authentication password, must match on master and backup
    }
    virtual_ipaddress {      # virtual IP (VIP)
        192.168.100.240
    }
}

virtual_server 192.168.100.240 6443 {    # externally exposed virtual IP
    delay_loop 6             # real-server health-check interval, in seconds
    lb_algo rr               # scheduling algorithm, rr = round-robin
    lb_kind DR               # LVS forwarding mode (DR = direct routing)
    protocol TCP             # check real-server state over TCP

    real_server 192.168.100.241 6443 {   # first node
        weight 3             # node weight
        TCP_CHECK {          # health-check method
            connect_timeout 3      # connection timeout
            nb_get_retry 3         # number of retries
            delay_before_retry 3   # delay between retries, in seconds
        }
    }

    real_server 192.168.100.242 6443 {   # second node
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

3. Start the service

systemctl start keepalived
systemctl enable keepalived
systemctl status keepalived

4. Verify

[root@master01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.100.240:6443 rr
# At this point only the VIP is listed; the rr real-server entries appear once the cluster is up and port 6443 is listening
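You can also confirm on master01 that keepalived has bound the VIP to the interface (eth0 here, matching the keepalived config); the VIP should appear on the MASTER node only:

ip addr show eth0 | grep 192.168.100.240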

II. Build the highly available etcd cluster

1. Install cfssl on master01

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
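A quick sanity check that the binary is usable:

cfssl version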

2. Install the etcd binaries

# Create the directory
mkdir -p /data/etcd/bin
# Download
cd /tmp
wget https://storage.googleapis.com/etcd/v3.3.25/etcd-v3.3.25-linux-amd64.tar.gz
tar zxf etcd-v3.3.25-linux-amd64.tar.gz
cd etcd-v3.3.25-linux-amd64
mv etcd etcdctl /data/etcd/bin/
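Optionally verify the binary:

/data/etcd/bin/etcd --version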

3. Create the CA certificate plus the client, server, and peer certificates
etcd is the server and etcdctl is the client; the two communicate over HTTP(S).

  • CA certificate: a self-signed authority certificate used to sign the other certificates
  • server certificate: the certificate used by etcd itself
  • client certificate: used by clients such as etcdctl
  • peer certificate: used for node-to-node communication

1) Create the directory

mkdir -p /data/etcd/ssl
cd /data/etcd/ssl

2) Create the CA certificate
Create ca-config.json (vim ca-config.json):

{
    "signing": {
        "default": {
            "expiry": "438000h"
        },
        "profiles": {
            "server": {
                "expiry": "438000h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            },
            "client": {
                "expiry": "438000h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "438000h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

server auth means a client can use this CA to verify the certificate presented by a server
client auth means a server can use this CA to verify the certificate presented by a client
Create the certificate signing request ca-csr.json:
vim ca-csr.json

{
    "CN": "etcd",
    "key": {
        "algo": "rsa",
        "size": 2048
    }
}

Generate the CA certificate and private key:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
# ls ca*
# ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
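If you want to inspect the freshly generated CA before signing anything with it, openssl can print its subject and validity period (just a verification step, not required):

openssl x509 -in ca.pem -noout -subject -dates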

3) Generate the client certificate
vim client.json

{
    "CN": "client",
    "key": {
        "algo": "ecdsa",
        "size": 256
    }
}

Generate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json  | cfssljson -bare client -
# ls *.pem *.json *.csr
# ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem client-key.pem client.pem

4) Generate the server and peer certificates
Create the configuration (vim etcd.json):

{
    "CN": "etcd",
    "hosts": [
        "192.168.100.241",
        "192.168.100.242",
        "192.168.100.243"
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "CN",
            "L": "BJ",
            "ST": "BJ"
        }
    ]
}

Generate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server etcd.json | cfssljson -bare server

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd.json | cfssljson -bare peer

5) Sync master01's /data/etcd directory (binaries and certificates) to master02 and node01

scp -r /data/etcd 192.168.100.242:/data/etcd
scp -r /data/etcd 192.168.100.243:/data/etcd

4. systemd unit file
vim /usr/lib/systemd/system/etcd.service
The configuration differs on each of the three hosts; it is best to delete the inline comments before using the file.

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/data/etcd/
ExecStart=/data/etcd/bin/etcd \
  --name=etcd1 \      # change per host: etcd1 / etcd2 / etcd3
  --cert-file=/data/etcd/ssl/server.pem \
  --key-file=/data/etcd/ssl/server-key.pem \
  --peer-cert-file=/data/etcd/ssl/peer.pem \
  --peer-key-file=/data/etcd/ssl/peer-key.pem \
  --trusted-ca-file=/data/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/data/etcd/ssl/ca.pem \
  --initial-advertise-peer-urls=https://192.168.100.241:2380 \  # change to this host's IP
  --listen-peer-urls=https://192.168.100.241:2380 \  # change to this host's IP
  --listen-client-urls=https://192.168.100.241:2379 \  # change to this host's IP
  --advertise-client-urls=https://192.168.100.241:2379 \  # change to this host's IP
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=etcd1=https://192.168.100.241:2380,etcd2=https://192.168.100.242:2380,etcd3=https://192.168.100.243:2380 \
  --initial-cluster-state=new \
  --data-dir=/data/etcd \
  --snapshot-count=50000 \
  --auto-compaction-retention=1 \
  --max-request-bytes=10485760 \
  --quota-backend-bytes=8589934592
Restart=always
RestartSec=15
LimitNOFILE=65536
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

5. Start etcd

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

6. Verify the cluster

cd /data/etcd/ssl
# Check cluster health
../bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.241:2379" cluster-health
# List cluster members
../bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.100.241:2379" member list
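etcd 3.3's etcdctl defaults to the v2 API, which is why the v2-style --ca-file/--cert-file flags work above. If you prefer the v3 API, the equivalent checks look roughly like this (note the different flag names):

ETCDCTL_API=3 ../bin/etcdctl \
  --cacert=ca.pem --cert=server.pem --key=server-key.pem \
  --endpoints="https://192.168.100.241:2379,https://192.168.100.242:2379,https://192.168.100.243:2379" \
  endpoint health

ETCDCTL_API=3 ../bin/etcdctl \
  --cacert=ca.pem --cert=server.pem --key=server-key.pem \
  --endpoints="https://192.168.100.241:2379" member list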

III. Install kubeadm, kubelet, and kubectl

Install kubeadm and kubelet on all nodes. kubectl is optional: you can install it on every machine, or only on master01.

1. Add the Aliyun (domestic mirror) Kubernetes yum repo

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install the pinned version

yum install -y kubelet-1.19.2 kubeadm-1.19.2 kubectl-1.19.2

3. On every node where kubelet is installed, enable it to start at boot

systemctl enable kubelet 

IV. Initialize the masters

1) On master01, copy the CA and client certificates generated while building etcd to the locations kubeadm expects, renaming them as follows:

[root@master01] ~$ mkdir -p /etc/kubernetes/pki/etcd/

# CA certificate of the etcd cluster
[root@master01] ~$ cp /data/etcd/ssl/ca.pem /etc/kubernetes/pki/etcd/

# client certificate of the etcd cluster, used by the apiserver to access etcd
[root@master01] ~$ cp /data/etcd/ssl/client.pem /etc/kubernetes/pki/apiserver-etcd-client.pem

# client private key of the etcd cluster
[root@master01] ~$ cp /data/etcd/ssl/client-key.pem /etc/kubernetes/pki/apiserver-etcd-client-key.pem

# Verify the layout
[root@master01] ~$ tree /etc/kubernetes/pki/
/etc/kubernetes/pki/
├── apiserver-etcd-client-key.pem
├── apiserver-etcd-client.pem
└── etcd
    └── ca.pem

1 directory, 3 files

2) Create the init configuration file
Generate the default configuration:

kubeadm config print init-defaults > kubeadm-init.yaml

The final file after editing:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.241  # this host's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01  # this host's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
#  local:
#    dataDir: /var/lib/etcd  # replaced by the external etcd cluster below
  external:
    endpoints:
    - https://192.168.100.241:2379
    - https://192.168.100.242:2379
    - https://192.168.100.243:2379
    caFile: /etc/kubernetes/pki/etcd/ca.pem  # CA certificate generated when building the etcd cluster
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.pem   # client certificate generated when building the etcd cluster
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client-key.pem  # client key generated when building the etcd cluster
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
controlPlaneEndpoint: 192.168.100.240  # VIP address
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

3) Run the initialization

kubeadm init --config=kubeadm-init.yaml

4) Configure kubectl
To manage and operate the cluster with kubectl, do the following:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Test that kubectl works. Note that master01 showing NotReady at this point is normal, because the flannel network plugin has not been deployed yet.

[root@master01] # kubectl get node
NAME       STATUS     ROLES    AGE   VERSION
master01   NotReady   master   66s   v1.19.2

Add master02
1) First scp the cluster-wide CA certificates generated on master01 to the other master machine.

scp -r  /etc/kubernetes/pki/* 192.168.100.242:/etc/kubernetes/pki/

2) Copy the init configuration file to master02

scp kubeadm-init.yaml 192.168.100.242:/root/

3) Initialize master02
Edit the configuration first, changing the values marked in the annotated file above (advertiseAddress and the node name), then initialize:

kubeadm init --config=kubeadm-init.yaml
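After master02 finishes initializing, configure kubectl on it the same way as on master01 and check that both control-plane nodes are registered (a sketch; the exact output will vary, and both nodes stay NotReady until the network plugin is installed):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes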

V. Join the worker node to the cluster

Generate the join command on master01

kubeadm token create --print-join-command

Run the join command directly on the worker node

kubeadm join 192.168.100.240:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:fb4e252253b55974edff65cb4765e9979f8785cd67a6ed41f87c83c6bcc3ac4a

VI. Install the flannel and metrics-server add-ons

Omitted here; see my earlier posts.

VII. Test the cluster

Node status

kubectl get nodes
NAME           STATUS   ROLES    AGE    VERSION
uat-master01   Ready    master   46h    v1.19.2
uat-master02   Ready    master   45h    v1.19.2
uat-node01     Ready    <none>   46h    v1.19.2

Component status

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}

Service accounts

kubectl get serviceaccount
NAME      SECRETS   AGE
default   1         46h

Cluster info. Note that the API address here is exactly the VIP we set up.

kubectl cluster-info
Kubernetes master is running at https://192.168.100.240:6443
KubeDNS is running at https://192.168.100.240:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://192.168.100.240:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.


Reprinted from blog.csdn.net/lswzw/article/details/109027255