Deploying a k8s Cluster with kubeadm

Table of contents

1. Initialization settings

2. Install docker on all nodes

3. Install kubeadm, kubelet and kubectl on the k8s nodes

4. Deploy the K8S cluster 

5. Deploy Dashboard

6. Deploy a Harbor private registry


name      address (spec)                                  components
master    192.168.116.70 (2C/4G; at least 2 CPU cores)    docker, kubeadm, kubelet, kubectl, flannel
node01    192.168.116.60 (2C/2G)                          docker, kubeadm, kubelet, kubectl, flannel
node02    192.168.116.50 (2C/2G)                          docker, kubeadm, kubelet, kubectl, flannel
harbor    192.168.116.50 (hub.abc.com)                    docker, docker-compose, harbor-offline-v1.2.2

1. Initialization settings

#On all nodes: turn off the firewall rules, SELinux, and swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a						#the swap partition must be turned off
sed -ri 's/.*swap.*/#&/' /etc/fstab		#permanently disable swap; in sed, & stands for the previous match
#Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done

#Set the hostname (run the matching command on the corresponding host)
hostnamectl set-hostname master
hostnamectl set-hostname node01
hostnamectl set-hostname node02

#On all nodes, modify the hosts file
vim /etc/hosts
192.168.116.70 master
192.168.116.60 node01
192.168.116.50 node02

#Tune kernel parameters
#Enable bridge mode so bridged traffic is passed to the iptables chains
#Disable the IPv6 protocol
cat > /etc/sysctl.d/kubernetes.conf << EOF
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF

#Apply the parameters
sysctl --system  

2. Install docker on all nodes

yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
#Use the systemd-managed cgroup for resource control; compared with cgroupfs, systemd is simpler and more mature and stable at limiting CPU, memory and other resources.
#Logs are stored as json-file, capped at 100M, under /var/log/containers, which makes collection by log systems such as ELK easier.

systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service 

3. Install kubeadm, kubelet and kubectl on the k8s nodes

#Define the Kubernetes repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15

#Enable kubelet to start on boot
systemctl enable --now kubelet.service

#Components installed via kubeadm all run as Pods, i.e. as containers underneath, so kubelet must be enabled to start on boot

4. Deploy the K8S cluster 

kubeadm config images list --kubernetes-version 1.20.15        #View the images required for initialization

Pull the images listed above (here I load them directly from a pre-packaged image archive instead)

#On the master node, upload the v1.20.15.zip archive to the /opt directory
unzip v1.20.15.zip -d /opt/k8s
cd /opt/k8s/
for i in $(ls *.tar); do docker load -i $i; done

#Copy the images and script to the node machines, then run the script on each node to load the image files
scp -r /opt/k8s root@node01:/opt
scp -r /opt/k8s root@node02:/opt
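On each node, the copied images can then be loaded with the same loop used on the master:

```shell
# run on node01 and node02 after the scp above
cd /opt/k8s
for i in $(ls *.tar); do docker load -i $i; done
```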

Initialize kubeadm

kubeadm config print init-defaults > /opt/k8s/kubeadm-config.yaml        #Get initialization template file

Modify the template file, then perform the initialization
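The edits to the template are roughly these (a sketch; the values mirror this environment's table, the pod subnet is flannel's default, and the appended block switches kube-proxy to ipvs mode to match the ip_vs modules loaded earlier):

```yaml
# kubeadm-config.yaml -- fields typically changed; everything else keeps its default
localAPIEndpoint:
  advertiseAddress: 192.168.116.70      # the master's IP
kubernetesVersion: v1.20.15
networking:
  podSubnet: "10.244.0.0/16"            # flannel's default pod network
  serviceSubnet: 10.96.0.0/16
---
# appended block: run kube-proxy in ipvs mode
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```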

kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
#--upload-certs automatically distributes the certificate files to nodes that join later
#tee kubeadm-init.log saves the output to a log file
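On success, kubeadm prints instructions for setting up kubectl access; run them on the master:

```shell
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```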

If initialization fails, diagnose the problem, then run the following before reinitializing

kubeadm reset -f
ipvsadm --clear        #install ipvsadm first if it is not present
rm -rf ~/.kube

Check the component status afterwards; you will find two components (kube-scheduler and kube-controller-manager) reporting unhealthy (due to a problem in the default configuration files)

To fix this, modify the yaml manifests of the scheduler and controller-manager
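The standard remedy on v1.20 (a sketch; the manifest paths are kubeadm's defaults) is to comment out the `- --port=0` line in both static-pod manifests on the master; the kubelet then recreates the pods automatically:

```shell
# comment out the "- --port=0" flag in both manifests
sed -i 's/^\(\s*\)- --port=0/\1#- --port=0/' /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i 's/^\(\s*\)- --port=0/\1#- --port=0/' /etc/kubernetes/manifests/kube-controller-manager.yaml
systemctl restart kubelet
kubectl get cs        #all components should now report Healthy
```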

Deploy the flannel network plug-in on all nodes: upload the flannel plug-in and image package to every node, and prepare the flannel deployment file on the master

Execute the flannel deployment file on the master

Check the status (it should show success)
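A sketch of the flannel steps (the file names come from the uploaded package and are assumptions):

```shell
# on every node: load the flannel image from the package
cd /opt/k8s
docker load -i flannel.tar

# on the master: apply the deployment file and watch the pods come up
kubectl apply -f kube-flannel.yml
kubectl get pods -n kube-system
```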

Add the nodes to the cluster (on each node, run the join command printed by the initialization; it can be recovered from the kubeadm-init.log file we saved)

Check the status (all nodes should be Ready)
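The join command is the one printed at the end of `kubeadm init` (and saved in kubeadm-init.log); the token and hash below are placeholders for the values from your own init log:

```shell
# on each node
kubeadm join 192.168.116.70:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

# back on the master
kubectl get nodes     #all three nodes should show Ready
```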

5. Deploy Dashboard

Upload the Dashboard image package to one of the nodes

Upload the configuration file on the master

#Operate on the master01 node
#Upload the recommended.yaml file to the /opt/k8s directory
cd /opt/k8s
vim recommended.yaml
#By default the Dashboard is only reachable from inside the cluster; change the Service to NodePort type to expose it externally:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     #add this line
  type: NodePort          #add this line
  selector:
    k8s-app: kubernetes-dashboard
	
kubectl apply -f recommended.yaml

Apply the configuration file and check the result
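To verify, list the Dashboard pods and service; the NodePort 30001 set above should appear:

```shell
kubectl get pods,svc -n kubernetes-dashboard -o wide
```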

Create a service account, bind it to the default cluster-admin administrator cluster role, and obtain its token

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

In a browser, access https://<any node IP>:30001 and log in with the token

6. Deploy a Harbor private registry

First, in the Docker configuration on every node, add the private registry address and the host mapping

Restart the docker service after making the changes
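A sketch of those two changes (the hostname and IP come from the table above; `insecure-registries` lets Docker talk to our self-signed registry):

```shell
# on every node: map the registry hostname
echo "192.168.116.50 hub.abc.com" >> /etc/hosts

# on every node: add the registry to /etc/docker/daemon.json, e.g.
#   "insecure-registries": ["https://hub.abc.com"],
vim /etc/docker/daemon.json

systemctl daemon-reload
systemctl restart docker.service
```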

Then install Harbor

#Upload the harbor-offline-installer-v1.2.2.tgz and docker-compose files to the /opt directory
cd /opt
cp docker-compose /usr/local/bin/
chmod +x /usr/local/bin/docker-compose

tar zxvf harbor-offline-installer-v1.2.2.tgz
cd harbor/
vim harbor.cfg
#Set the hostname and use https
5  hostname = hub.abc.com
9  ui_url_protocol = https
24 ssl_cert = /data/cert/server.crt
25 ssl_cert_key = /data/cert/server.key
59 harbor_admin_password = Harbor12345

Create a self-signed certificate

mkdir -p /data/cert
cd /data/cert
#Generate the private key
openssl genrsa -des3 -out server.key 2048
Enter the passphrase twice:

#Generate the certificate signing request
openssl req -new -key server.key -out server.csr
Enter the private key passphrase:
Enter the country name: CN
Enter the province name:
Enter the city name:
Enter the organization name:
Enter the organizational unit name:
Enter the domain name: hub.abc.com
Enter the admin email: [email protected]
Just press Enter for everything else

#Back up the private key
cp server.key server.key.bak

#Remove the passphrase from the private key
openssl rsa -in server.key.bak -out server.key
Enter the private key passphrase:

#Sign the certificate
openssl x509 -req -days 1000 -in server.csr -signkey server.key -out server.crt

Deploy Harbor and check its status when it finishes

cd /opt/harbor/
./prepare
./install.sh

Browser access to https://hub.abc.com
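Once Harbor is up, a quick push test from any node (the image name is an example; `library` is Harbor's default public project, and the admin password is the one set in harbor.cfg):

```shell
docker login -u admin -p Harbor12345 https://hub.abc.com
docker tag nginx:latest hub.abc.com/library/nginx:v1
docker push hub.abc.com/library/nginx:v1
```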


Origin blog.csdn.net/weixin_58544496/article/details/128283797