Instructions for this deployment
In the previous article we completed a binary k8s cluster deployment, but a single-master cluster is not suitable for production use: with only one master scheduling and commanding the node servers, its failure paralyzes the entire cluster, so a mature k8s cluster must make the master highly available. Enterprise deployments generally run at least two masters (to add more, repeat the steps below). After adding a master, we will use a keepalived+nginx architecture to achieve a highly available master [you can also use haproxy+keepalived, or keepalived+lvs (not recommended, the steps are too complicated)].
In addition, we will also build the ui management interface of k8s
Architecture components for this deployment
Architecture description:
The kubelet on a node can only connect to the apiserver of one master node at a time; it cannot connect to the apiservers of multiple masters simultaneously. In short, a node can only be led by one master.
kubelet and kube-proxy locate the master through the server parameter in the kubelet.kubeconfig and kube-proxy.kubeconfig files.
Therefore, in an environment with multiple master nodes, an nginx load balancer is needed to distribute node traffic across the apiservers, and keepalived provides high availability (one master, one backup node) so that the failure of a single master does not make the entire k8s cluster unavailable.
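Concretely, once the VIP exists (192.168.73.66 in the keepalived configuration later in this article), each node's two kubeconfig files are repointed at it. A minimal sketch of that edit, assuming the file paths used throughout this series; `repoint_kubeconfig` is a hypothetical helper name:

```shell
# Repoint a kubeconfig's `server:` field at the load-balancer VIP (sketch;
# the paths and addresses below are this series' values -- adjust to taste).
repoint_kubeconfig() {
    local file="$1" vip="$2"
    sed -i "s#server: https://.*:6443#server: https://${vip}:6443#" "$file"
}

# On each node, for example:
# repoint_kubeconfig /opt/kubernetes/cfg/kubelet.kubeconfig 192.168.73.66
# repoint_kubeconfig /opt/kubernetes/cfg/kube-proxy.kubeconfig 192.168.73.66
# systemctl restart kubelet kube-proxy
```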
1. Building a new master node
1.1 Initial configuration of master02
cat >> /etc/sysctl.conf << EOF    #Enable bridge mode, which passes the bridge's traffic to the iptables chains
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
#Disable the ipv6 protocol
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF
sysctl --system
#time synchronization
yum install ntpdate -y
ntpdate ntp.aliyun.com
#Add the time synchronization to a scheduled task so that all nodes stay in sync
crontab -e
*/30 * * * * /usr/sbin/ntpdate ntp.aliyun.com
crontab -l
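A quick way to confirm the entry took effect is to filter the `crontab -l` output. A sketch; `has_ntp_cron` is a hypothetical helper that reads the listing on stdin so it is easy to pipe:

```shell
# has_ntp_cron: reads `crontab -l` output on stdin and checks for the
# every-30-minutes ntpdate entry added above (sketch).
has_ntp_cron() {
    grep -qE '^\*/30 \* \* \* \* .*ntpdate' -
}

# usage: crontab -l | has_ntp_cron && echo "time sync scheduled"
```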
1.2 Migrate the configuration of master01 to master02
##------------ 1. On master01, copy files to master02 -------------------------------
#Copy the certificate files, the configuration files of each master component, and the service unit files from master01 to master02
scp -r /opt/etcd/ root@192.168.73.110:/opt/
scp -r /opt/kubernetes/ root@192.168.73.110:/opt/
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.73.110:/usr/lib/systemd/system/
scp -r /root/.kube/ master02:/root/
##----------- 2. On master02, modify the configuration files and start the services -------------------------
#Modify the IPs in the kube-apiserver configuration file
vim /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \    #log output: true sends standard error to the screen, false sends it to the log file instead
--v=4 \    #log level
--etcd-servers=https://192.168.73.105:2379,https://192.168.73.106:2379,https://192.168.73.107:2379 \    #communication addresses of the etcd nodes
--bind-address=192.168.73.110 \    #modify: the internal IP address this apiserver listens on
--secure-port=6443 \    #port opened for HTTPS
--advertise-address=192.168.73.110 \    #modify: the internal address advertised so the node nodes can communicate with this master
......
#Start each service on master02 and enable it at boot
systemctl enable --now kube-apiserver.service
systemctl enable --now kube-controller-manager.service
systemctl enable --now kube-scheduler.service
#Create soft links so the executables are on the PATH
ln -s /opt/kubernetes/bin/* /usr/local/bin/
#View the node status
kubectl get nodes
kubectl get nodes -o wide    #-o wide: output additional information; for a Pod, this includes the name of the Node it runs on
#At this point the node status seen on master02 is only what was queried from etcd; the nodes have not actually established a communication connection with master02, so a VIP is needed to associate the nodes with the master nodes
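The soft-link step above can be made idempotent with `ln -sf`, so re-running it after an upgrade overwrites stale links instead of failing. A sketch; `link_bins` is a hypothetical helper name:

```shell
# link_bins: symlink every file in a bin directory into a directory on the
# PATH (sketch; -sf overwrites existing links so re-runs are safe).
link_bins() {
    local src="$1" dest="$2" f
    for f in "$src"/*; do
        ln -sf "$f" "$dest/$(basename "$f")"
    done
}

# usage: link_bins /opt/kubernetes/bin /usr/local/bin
```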
2. Deployment of load balancing
#Configure a dual-machine hot-standby load-balancing cluster (nginx provides the load balancing, keepalived provides the hot standby)
#----------------- 1. Configure nginx on both load balancers --------------------------------------
#Configure the official online nginx yum source (a local nginx yum source also works)
cat > /etc/yum.repos.d/nginx.repo << 'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
EOF
yum install nginx -y
#Modify the nginx configuration file: configure a four-layer (stream) reverse proxy that load-balances to the two k8s master nodes on port 6443
vim /etc/nginx/nginx.conf
events {
    worker_connections 1024;
}

#Add
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.73.105:6443;    #master01
        server 192.168.73.110:6443;    #master02
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

http {
......

#Check the configuration file syntax
nginx -t

#Start the nginx service and check that port 6443 is being listened on
systemctl start nginx
systemctl enable nginx
ss -lntp | grep nginx
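To confirm that both apiservers actually receive traffic, the k8s-access.log defined above can be summarized per backend. A sketch; `count_upstreams` is a hypothetical helper, and field 2 corresponds to `$upstream_addr` in that log_format:

```shell
# Count requests per upstream backend from the stream access log (sketch).
# Log format: '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent'
count_upstreams() {
    awk '{ hits[$2]++ } END { for (u in hits) print u, hits[u] }' "$1" | sort
}

# usage: count_upstreams /var/log/nginx/k8s-access.log
```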
#------------------ 2. Configure keepalived on two load balancers ------------------------------
#Deploy the keepalived service
yum install keepalived -y
#Modify the keepalived configuration file
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id nginx_master
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"    #path of the detection script, which acts as a heartbeat check
}

vrrp_instance VI_1 {
    state MASTER                 #this node is the MASTER; 109 is the BACKUP node
    interface ens33
    virtual_router_id 51
    priority 100                 #108 has priority 100 and 109 has priority 90, which determines master and backup roles
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.73.66
    }
    track_script {
        check_nginx              #track the check script's process
    }
}
#Send this file to the standby scheduler and change its settings to the standby scheduler's values (state BACKUP, priority 90)
cd /etc/keepalived/
scp keepalived.conf root@192.168.73.109:`pwd`
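After copying, the backup node's file needs its role fields changed. A sed sketch; the `nginx_backup` router_id is an assumed name, while the state and priority values match the comments in the configuration above:

```shell
# Adapt the copied keepalived.conf for the backup scheduler (sketch):
# state MASTER -> BACKUP, priority 100 -> 90, and an assumed backup router_id.
to_backup() {
    sed -i -e 's/state MASTER/state BACKUP/' \
           -e 's/priority 100/priority 90/' \
           -e 's/router_id nginx_master/router_id nginx_backup/' "$1"
}

# usage (on the backup node): to_backup /etc/keepalived/keepalived.conf
```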
#Create the nginx status check script
vim /etc/nginx/check_nginx.sh
#!/bin/bash
killall -0 nginx &>/dev/null
if [ $? -ne 0 ];then
systemctl stop keepalived
fi
chmod +x /etc/nginx/check_nginx.sh    #make the script executable
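For context on the script above: `killall -0 nginx` sends no signal at all; it only tests whether any process named nginx exists. The same probe can be written with `pgrep -x`, which is sketched here with a hypothetical `proc_alive` helper:

```shell
# proc_alive: returns 0 if any process with the given name is running
# (sketch; equivalent to the script's `killall -0 nginx` probe).
proc_alive() {
    pgrep -x "$1" > /dev/null 2>&1
}

# The check script's logic restated: if ! proc_alive nginx, stop keepalived
# so the VIP drifts to the standby node.
```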
#Send the script to the standby scheduler
cd /etc/nginx
scp check_nginx.sh root@192.168.73.109:`pwd`
#Start the keepalived service on both the active and standby schedulers (the nginx service must be started first, then keepalived)
systemctl start keepalived
systemctl enable keepalived
ip addr    #Check that the VIP has been generated on the master node
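The VIP check can also be scripted. A sketch; `has_vip` is a hypothetical helper that reads `ip -o addr` output on stdin so the parsing is easy to test:

```shell
# has_vip: reads `ip -o addr` output on stdin and checks whether the given
# VIP is bound to any interface (sketch).
has_vip() {
    grep -qF "inet $1/" -
}

# usage: ip -o addr | has_vip 192.168.73.66 && echo "VIP present"
```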
#---------------- 3. Shut down the nginx service of the master node, simulate a failure, and test keepalived-----------------------
#Shut down the nginx service on the master node lb01 to simulate a failure, and observe whether the VIP drifts to the standby node
systemctl stop nginx
ip addr
systemctl status keepalived    #at this point keepalived has been stopped by the check script
#On the standby node, check whether the VIP has been generated
ip addr
#The VIP has now drifted to the standby node lb02
#Restore primary node
systemctl start nginx #Start nginx first
systemctl start keepalived #Restart keepalived
ip addr
3. Construction of k8s web UI interface
//Operate on the master01 node
#Upload the recommended.yaml file to the /opt/k8s directory to deploy the Kubernetes Dashboard
cd /opt/k8s
vim recommended.yaml
#The default Dashboard can only be accessed within the cluster, modify the Service to NodePort type, and expose it to the outside:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001    #Add
  type: NodePort         #Add
  selector:
    k8s-app: kubernetes-dashboard
#Create the resources from the recommended.yaml resource configuration list with kubectl apply; -f specifies the configuration list file
kubectl apply -f recommended.yaml
#Create a service account and bind the default cluster-admin administrator cluster role
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
#Get the token value
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
#Use the output token to log in to the Dashboard via a node's address
https://192.168.73.106:30001
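The awk filter embedded in the `kubectl describe secrets` command above simply picks the first column of the matching row. Shown in isolation on sample output mimicking the `kubectl get secret` table (illustrative lines, not real cluster output); `secret_name` is a hypothetical name:

```shell
# secret_name: reads `kubectl get secret` output on stdin and prints the
# name of the dashboard-admin token secret (sketch).
secret_name() {
    awk '/dashboard-admin/{print $1}'
}

# usage: kubectl -n kube-system get secret | secret_name
```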