[Cloud Native] Deploying a Kubernetes cluster with kubeadm

Table of contents

1. Environment preparation

1.2 Adjust kernel parameters

2. Deploy docker on all nodes

3. Install kubeadm, kubelet and kubectl on all nodes

3.1 Define kubernetes source

3.2 Start kubelet automatically after booting

4. Deploy K8S cluster

4.1 View the images required for initialization

4.2 Upload the v1.20.11.zip compressed package to the /opt directory on the master node

4.3 Copy the image and script to the node, and execute the script on the node to load the image file

4.4 Initialize kubeadm

Method one:

Method two:

4.5 Setting kubectl

4.6 Deploy the flannel network plug-in on all nodes

Method one:

Method two:

4.7 Check the node status on the master node

4.8 Test pod resource creation

4.9 Expose ports to provide services

4.10 Test access

4.11 Scale up to 3 replicas

5. Deploy Dashboard

6. Install the Harbor private registry

6.1 Modify the host name

6.2 Add the host name mapping on all nodes

6.3 Install docker

6.4 Install Harbor

6.5 Generate certificate

6.6 Log in to Harbor on a node

6.7 Upload image

6.8 Delete the previously created nginx resources on the master node 

7. Kernel parameter optimization plan


1. Environment preparation

master (2C/4G, at least 2 CPU cores)	192.168.10.19		docker, kubeadm, kubelet, kubectl, flannel
node01 (2C/2G)				192.168.10.20		docker, kubeadm, kubelet, kubectl, flannel
node02 (2C/2G)				192.168.10.21		docker, kubeadm, kubelet, kubectl, flannel
Harbor node (hub.kgc.com)		192.168.10.13		docker, docker-compose, harbor-offline-v1.2.2

1. Install Docker and kubeadm on all nodes

2. Deploy Kubernetes Master

3. Deploy container network plug-in

4. Deploy Kubernetes Node and add the node to the Kubernetes cluster

5. Deploy the Dashboard Web page to visually view Kubernetes resources

6. Deploy the Harbor private registry to store image resources

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a						# swap must be turned off
sed -ri 's/.*swap.*/#&/' /etc/fstab		# permanently disable swap; in sed, & refers to the matched text
# load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
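
As a quick check (an addition to the original steps), confirm the ip_vs modules are actually loaded:

lsmod | grep ip_vs		# the ip_vs and ip_vs_* modules should be listed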

// Set the host names
hostnamectl set-hostname master01      
hostnamectl set-hostname node01
hostnamectl set-hostname node02

// Edit the hosts file on all nodes
vim /etc/hosts
192.168.10.19 master01
192.168.10.20 node01
192.168.10.21 node02

1.2 Adjust kernel parameters

cat > /etc/sysctl.d/kubernetes.conf << EOF
# enable bridge mode so bridged traffic is passed to the iptables chains
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
# disable the IPv6 protocol
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
EOF

// Apply the parameters
sysctl --system  

2. Deploy docker on all nodes

yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
# Use the systemd-managed cgroup driver for resource control, because compared with cgroupfs, systemd's limits on CPU, memory and other resources are simpler, more mature and more stable.
# Logs are stored in json-file format with a 100 MB size limit under /var/log/containers, which makes it easy for log systems such as ELK to collect and manage them.

systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service 

docker info | grep "Cgroup Driver"
Cgroup Driver: systemd

3. Install kubeadm, kubelet and kubectl on all nodes

3.1 Define kubernetes source

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.11 kubeadm-1.20.11 kubectl-1.20.11

3.2 Start kubelet automatically after booting

systemctl enable kubelet.service
# After installation with kubeadm, the K8S components all run as pods, i.e. as containers underneath, so kubelet must be set to start on boot
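
Until kubeadm init or kubeadm join supplies a configuration, it is normal for kubelet to keep restarting; its state can be checked with:

systemctl status kubelet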

4. Deploy K8S cluster

4.1 View the images required for initialization

kubeadm config images list

4.2 Upload the v1.20.11.zip compressed package to the /opt directory on the master node

unzip v1.20.11.zip -d /opt/k8s
cd /opt/k8s/v1.20.11
for i in $(ls *.tar); do docker load -i $i; done

4.3 Copy the image and script to the node, and execute the script on the node to load the image file

scp -r /opt/k8s root@node01:/opt
scp -r /opt/k8s root@node02:/opt
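
On node01 and node02, load the copied images the same way as on the master (a sketch reusing the loop from 4.2; adjust the path if the archive layout differs):

cd /opt/k8s/v1.20.11
for i in $(ls *.tar); do docker load -i $i; done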

4.4 Initialize kubeadm

Method one:

kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml
......
11 localAPIEndpoint:
12   advertiseAddress: 192.168.10.19		# IP address of the master node
13   bindPort: 6443
......
34 kubernetesVersion: v1.20.11				# kubernetes version
35 networking:
36   dnsDomain: cluster.local
37   podSubnet: "10.244.0.0/16"				# pod network segment; 10.244.0.0/16 matches flannel's default network
38   serviceSubnet: 10.96.0.0/16			# service network segment
39 scheduler: {}

# Append the following at the end of the file
--- 

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs									# change the default kube-proxy scheduling mode to ipvs

kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
#--experimental-upload-certs automatically distributes the certificate files to nodes that join later; since K8S v1.16 it has been replaced by --upload-certs
#tee kubeadm-init.log writes the output to a log file

# View the kubeadm-init log
less kubeadm-init.log

# kubernetes configuration directory
ls /etc/kubernetes/

# directory holding the CA and other certificates and keys
ls /etc/kubernetes/pki		
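
If the join command recorded in kubeadm-init.log is lost or the token has expired (24 hours by default), a new one can be printed on the master with the standard kubeadm command:

kubeadm token create --print-join-command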

Method two:

kubeadm init \
--apiserver-advertise-address=192.168.10.19 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.20.11 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--token-ttl=0

The cluster is initialized with the kubeadm init command, either by passing individual parameters or by pointing it at a configuration file. Common options:

--apiserver-advertise-address: the IP address the apiserver advertises to other components; it should generally be the master node's IP used for internal cluster communication. 0.0.0.0 means all available addresses on the node.

--apiserver-bind-port: the port the apiserver listens on; the default is 6443

--cert-dir: directory for the SSL certificates used for communication; the default is /etc/kubernetes/pki

--control-plane-endpoint: a shared endpoint for the control plane, which can be a load-balanced IP address or a DNS domain name; it must be set for a high-availability cluster.

--image-repository: the image repository to pull images from; the default is k8s.gcr.io

--kubernetes-version: the Kubernetes version to install

--pod-network-cidr: the pod network segment; it must match the setting of the pod network plug-in. The Flannel plug-in defaults to 10.244.0.0/16 and the Calico plug-in defaults to 192.168.0.0/16.

--service-cidr: the service network segment

--service-dns-domain: the suffix of the service full domain name; the default is cluster.local

--token-ttl: the validity period of the join token, 24 hours by default; add --token-ttl=0 if you do not want it to expire.

With method two, after initialization you need to edit the kube-proxy configmap and enable ipvs: kubectl edit cm kube-proxy -n kube-system, then change mode to ipvs.
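
A minimal sketch of that post-initialization step (deleting the kube-proxy pods so the DaemonSet recreates them is an assumption about how to pick up the change):

kubectl edit cm kube-proxy -n kube-system			# set mode: "ipvs"
kubectl delete pod -n kube-system -l k8s-app=kube-proxy		# recreate the kube-proxy pods so the new mode takes effect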

Expected output:

......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.19:6443 --token wfjo7j.baa0aheyw39w3m7h \
    --discovery-token-ca-cert-hash sha256:77100ff66b20100cbd9f1c289788e43aee69c5b4e24cc2c74c2e5d634a074fdc 

4.5 Setting kubectl

kubectl must be authenticated and authorized by the API server before it can perform management operations. A cluster deployed with kubeadm generates an authentication configuration file with administrator rights, /etc/kubernetes/admin.conf, which kubectl loads from the default path $HOME/.kube/config.

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
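
To check the control-plane component status (the step below refers to this command; on v1.20 the scheduler and controller-manager may report Unhealthy because of the --port=0 setting):

kubectl get cs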


// If kubectl get cs shows the cluster is unhealthy, edit the following two files
vim /etc/kubernetes/manifests/kube-scheduler.yaml 
vim /etc/kubernetes/manifests/kube-controller-manager.yaml

# Make the following changes

Change --bind-address=127.0.0.1 to --bind-address=192.168.10.19		# the IP of the k8s control-plane node master01
Under the httpGet: fields, change host from 127.0.0.1 to 192.168.10.19 (two occurrences)
#- --port=0					# search for port=0 and comment that line out
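
A hedged sed sketch of those edits (it assumes master01's IP is 192.168.10.19 and substitutes 127.0.0.1 globally, so review both manifests afterwards):

cd /etc/kubernetes/manifests/
sed -i 's/127.0.0.1/192.168.10.19/g; s/- --port=0/#&/' kube-scheduler.yaml kube-controller-manager.yaml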

systemctl restart kubelet

4.6 Deploy the flannel network plug-in on all nodes

Method one:

// All nodes upload the flannel image flannel.tar to the /opt directory, and the master node uploads the kube-flannel.yml file

cd /opt
docker load < flannel.tar

Create the flannel resource on the master node

kubectl apply -f kube-flannel.yml 

Method two:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml


# Run the kubeadm join command on the node nodes to join the cluster
kubeadm join 192.168.10.19:6443 --token rc0kfs.a1sfe3gl4dvopck5 \
    --discovery-token-ca-cert-hash sha256:864fe553c812df2af262b406b707db68b0fd450dc08b34efb73dd5a4771d37a2

4.7 Check the node status on the master node

kubectl get nodes

kubectl get pods -n kube-system

NAME                             READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-c9w6l          1/1     Running   0          71m
coredns-bccdc95cf-nql5j          1/1     Running   0          71m
etcd-master                      1/1     Running   0          71m
kube-apiserver-master            1/1     Running   0          70m
kube-controller-manager-master   1/1     Running   0          70m
kube-flannel-ds-amd64-kfhwf      1/1     Running   0          2m53s
kube-flannel-ds-amd64-qkdfh      1/1     Running   0          46m
kube-flannel-ds-amd64-vffxv      1/1     Running   0          2m56s
kube-proxy-558p8                 1/1     Running   0          2m53s
kube-proxy-nwd7g                 1/1     Running   0          2m56s
kube-proxy-qpz8t                 1/1     Running   0          71m
kube-scheduler-master            1/1     Running   0          70m

4.8 Test pod resource creation

kubectl create deployment nginx --image=nginx

kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-zr2xs   1/1     Running   0          14m   10.244.1.2   node01   <none>           <none>

4.9 Expose ports to provide services

kubectl expose deployment nginx --port=80 --type=NodePort

kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        3h57m
myapp-ky20   NodePort    10.96.56.120   <none>        80:32404/TCP   3s

4.10 Test access

curl http://node01:32404

4.11 Scale up to 3 replicas

kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-9kh4s   1/1     Running   0          66s   10.244.1.3   node01   <none>           <none>
nginx-554b9c67f9-rv77q   1/1     Running   0          66s   10.244.2.2   node02   <none>           <none>
nginx-554b9c67f9-zr2xs   1/1     Running   0          17m   10.244.1.2   node01   <none>           <none>

5. Deploy Dashboard

Operate on the master01 node

# Upload the recommended.yaml file to the /opt/k8s directory
cd /opt/k8s
vim recommended.yaml
# By default the Dashboard can only be accessed from inside the cluster; change the Service to type NodePort to expose it externally:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     # add
  type: NodePort          # add
  selector:
    k8s-app: kubernetes-dashboard

kubectl apply -f recommended.yaml
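
To confirm the Dashboard pods are running and the Service is exposed on port 30001 (a verification step added here):

kubectl get pods,svc -n kubernetes-dashboard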

# Create a service account and bind it to the default cluster-admin administrator cluster role
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')


# Log in to the Dashboard with the token from the output above
https://NodeIP:30001

6. Install the Harbor private registry

6.1 Modify the host name

hostnamectl set-hostname hub.kgc.com

6.2 Add the host name mapping on all nodes

echo '192.168.10.23 hub.kgc.com' >> /etc/hosts

6.3 Install docker

yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io

# Modify the docker configuration file on all node nodes and add the private registry configuration

mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.kgc.com"]
}
EOF

systemctl start docker
systemctl enable docker

6.4 Install Harbor

Upload harbor-offline-installer-v1.2.2.tgz and docker-compose files to the /opt directory

cd /opt
cp docker-compose /usr/local/bin/
chmod +x /usr/local/bin/docker-compose

tar zxvf harbor-offline-installer-v1.2.2.tgz
cd harbor/
vim harbor.cfg
5  hostname = hub.kgc.com
9  ui_url_protocol = https
24 ssl_cert = /data/cert/server.crt
25 ssl_cert_key = /data/cert/server.key
59 harbor_admin_password = Harbor12345

6.5 Generate certificate

mkdir -p /data/cert
cd /data/cert
# Generate the private key
openssl genrsa -des3 -out server.key 2048
Enter the passphrase twice: 123456

# Generate the certificate signing request
openssl req -new -key server.key -out server.csr
Enter the private key passphrase: 123456
Country Name: CN
State or Province Name: BJ
Locality Name: BJ
Organization Name: KGC
Organizational Unit Name: KGC
Common Name (domain name): hub.kgc.com
Email Address: [email protected]
Press Enter for all remaining prompts

# Back up the private key
cp server.key server.key.org

# Remove the passphrase from the private key
openssl rsa -in server.key.org -out server.key
Enter the private key passphrase: 123456

# Sign the certificate
openssl x509 -req -days 1000 -in server.csr -signkey server.key -out server.crt

chmod +x /data/cert/*
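
Optionally, inspect the generated certificate to confirm the subject and validity period (an added check):

openssl x509 -in /data/cert/server.crt -noout -subject -dates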

cd /opt/harbor/
./install.sh
Access https://hub.kgc.com locally with the Firefox browser

Add Exception -> Confirm Security Exception

Username: admin

Password: Harbor12345

6.6 Log in to Harbor on a node

docker login -u admin -p Harbor12345 https://hub.kgc.com

6.7 Upload image

docker tag nginx:latest hub.kgc.com/library/nginx:v1
docker push hub.kgc.com/library/nginx:v1

6.8 Delete the previously created nginx resources on the master node 

kubectl delete deployment nginx

kubectl create deployment nginx-deployment --image=hub.kgc.com/library/nginx:v1 --port=80 --replicas=3

kubectl expose deployment nginx-deployment --port=30000 --target-port=80

kubectl get svc,pods
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
service/kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP     10m
service/nginx-deployment   ClusterIP   10.96.222.161   <none>        30000/TCP   3m15s

NAME                                    READY   STATUS    RESTARTS   AGE
pod/nginx-deployment-77bcbfbfdc-bv5bz   1/1     Running   0          16s
pod/nginx-deployment-77bcbfbfdc-fq8wr   1/1     Running   0          16s
pod/nginx-deployment-77bcbfbfdc-xrg45   1/1     Running   0          3m39s


yum install ipvsadm -y
ipvsadm -Ln

curl 10.96.222.161:30000


kubectl edit svc nginx-deployment
25   type: NodePort						# change the service type to NodePort

kubectl get svc
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
service/kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP           29m
service/nginx-deployment   NodePort    10.96.222.161   <none>        30000:32340/TCP   22m

Browser access:

192.168.10.19:32340

192.168.10.20:32340

192.168.10.21:32340

#Grant cluster-admin role permissions to user system:anonymous 

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

7. Kernel parameter optimization plan

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0									# do not use swap; it is only used when the system runs out of memory (OOM)
vm.overcommit_memory=1							# do not check whether physical memory is sufficient
vm.panic_on_oom=0								# do not panic on OOM
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963							# maximum number of file handles
fs.nr_open=52706963								# only supported on kernel 4.4 and above
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
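
As in section 1.2, reload the settings to make them take effect:

sysctl --system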
