Part 3: Deploying Kubernetes components on the worker nodes

In the previous part the three master components were deployed; this part deploys the worker-node components: kubelet and kube-proxy.

1. Prepare the environment (the following operations are performed on the master)

1) Create the directories and copy the two binaries to the worker nodes

mkdir /home/yx/kubernetes/{bin,cfg,ssl} -p
# copy to both worker nodes
scp -r /home/yx/src/kubernetes/server/bin/kubelet [email protected]:/home/yx/kubernetes/bin
scp -r /home/yx/src/kubernetes/server/bin/kube-proxy [email protected]:/home/yx/kubernetes/bin

2) Bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

3) Generate the bootstrap.kubeconfig and kube-proxy.kubeconfig files with the kubeconfig.sh script shown below.

Run it as bash kubeconfig.sh 192.168.18.104 <ssl-cert-dir>, where the first argument is the master node's IP and the second is the path to the SSL certificates. It produces the two files above, which are then copied to both worker nodes.

# Create the TLS bootstrapping token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=71b6d986c47254bb0e63b2a20cfaf560

cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

#----------------------

APISERVER=$1
SSL_DIR=$2

# Create the kubelet bootstrapping kubeconfig
export KUBE_APISERVER="https://$APISERVER:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------

# Create the kube-proxy kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
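Note that the script hard-codes BOOTSTRAP_TOKEN. The commented-out line at the top shows how to mint a fresh random one; as a standalone sketch:

```shell
# 16 random bytes rendered as 32 hex characters, spaces stripped
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
echo "$BOOTSTRAP_TOKEN"
```

Whichever token you use must match the one in the token.csv the kube-apiserver was started with (its --token-auth-file); otherwise the kubelet's bootstrap requests are rejected.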

4) Copy the generated bootstrap.kubeconfig and kube-proxy.kubeconfig to both worker nodes

scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/home/yx/kubernetes/cfg
scp bootstrap.kubeconfig kube-proxy.kubeconfig [email protected]:/home/yx/kubernetes/cfg

2. Installation on the worker nodes

1) Deploy the kubelet component

Create the kubelet configuration file:

 cat /home/yx/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.18.105 \
--kubeconfig=/home/yx/kubernetes/cfg/kubelet.kubeconfig \
--experimental-bootstrap-kubeconfig=/home/yx/kubernetes/cfg/bootstrap.kubeconfig \
--config=/home/yx/kubernetes/cfg/kubelet.config \
--cert-dir=/home/yx/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
Parameter notes:
--hostname-override: the hostname this node shows in the cluster
--kubeconfig: location of the kubeconfig file; it is generated automatically
--bootstrap-kubeconfig: the bootstrap.kubeconfig file generated earlier
--cert-dir: where the issued certificates are stored
--pod-infra-container-image: the image that manages the Pod network (the pause container)

Create kubelet.config:

 cat /home/yx/kubernetes/cfg/kubelet.config 

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.18.105
port: 10250
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2 
clusterDomain: cluster.local.
failSwapOn: false

Startup script

 cat /usr/lib/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/home/yx/kubernetes/cfg/kubelet
ExecStart=/home/yx/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
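The unit file works by reading /home/yx/kubernetes/cfg/kubelet as an EnvironmentFile and expanding $KUBELET_OPTS on the ExecStart line. The mechanism can be sketched in plain shell (the /tmp path and the shortened option string are throwaway examples):

```shell
# Write a config in the same KEY="value" format systemd's EnvironmentFile expects
cat > /tmp/kubelet.demo <<'EOF'
KUBELET_OPTS="--logtostderr=true --v=4"
EOF

# Source it and expand the variable, like ExecStart does with $KUBELET_OPTS
. /tmp/kubelet.demo
echo $KUBELET_OPTS
```

systemd does not literally source the file (it parses the KEY=VALUE pairs itself, including the backslash line continuations used in the real config), but the effect on $KUBELET_OPTS is the same.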

Start the service:

 systemctl daemon-reload
 systemctl enable kubelet
 systemctl restart kubelet

Check that the service is active (for example with systemctl status kubelet).

2) Deploy the kube-proxy component

Create the kube-proxy configuration file:

 cat /home/yx/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.18.105 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/home/yx/kubernetes/cfg/kube-proxy.kubeconfig"
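--proxy-mode=ipvs only takes effect when the ip_vs kernel modules are available; if they are missing, kube-proxy falls back to iptables mode. A quick check (the module names assume a typical ipvs setup; load any missing ones with modprobe):

```shell
# Report any ip_vs-related modules not currently loaded (per /proc/modules)
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do
  grep -qw "^$m" /proc/modules || echo "missing: $m"
done
```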

Startup script

[yx@tidb-tikv-02 cfg]$ cat /usr/lib/systemd/system/kube-proxy.service 
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/home/yx/kubernetes/cfg/kube-proxy
ExecStart=/home/yx/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service:

 systemctl daemon-reload
 systemctl enable kube-proxy
 systemctl restart kube-proxy

Verify that the service is active (for example with systemctl status kube-proxy).

Repeat the same steps on the other worker node, changing the IP addresses in the configuration files to that node's IP.

3. On the master, approve the worker nodes' requests to join the cluster

View the certificate signing requests:

[yx@tidb-tidb-03 cfg]$ kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-jn-F4xSn1LAwJhom9l7hlW0XuhDQzo-RQrnkz1j4q6Y   16m     kubelet-bootstrap   Pending
node-csr-kB2CFmTqkCA2Ix5qYGSXoAP3-ctes-cHcjs7D84Wb38   5h55m   kubelet-bootstrap   Approved,Issued
node-csr-wWa0cKQ6Ap9Bcqap3m9d9ZBqBclwkLB84W8bpB3g_m0   22s     kubelet-bootstrap   Pending

Approve a request:

kubectl certificate approve node-csr-wWa0cKQ6Ap9Bcqap3m9d9ZBqBclwkLB84W8bpB3g_m0
certificatesigningrequest.certificates.k8s.io/node-csr-wWa0cKQ6Ap9Bcqap3m9d9ZBqBclwkLB84W8bpB3g_m0 approved
# After approval, the CONDITION changes from Pending to Approved,Issued
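With several nodes there may be many Pending requests. Their names can be filtered out of the kubectl get csr output with awk; here the filter is demonstrated on a captured copy of the output above:

```shell
# Sample "kubectl get csr" output, captured as a string for demonstration
csr_output='NAME AGE REQUESTOR CONDITION
node-csr-jn-F4xSn1LAwJhom9l7hlW0XuhDQzo-RQrnkz1j4q6Y 16m kubelet-bootstrap Pending
node-csr-kB2CFmTqkCA2Ix5qYGSXoAP3-ctes-cHcjs7D84Wb38 5h55m kubelet-bootstrap Approved,Issued
node-csr-wWa0cKQ6Ap9Bcqap3m9d9ZBqBclwkLB84W8bpB3g_m0 22s kubelet-bootstrap Pending'

# Keep the first column of every data row whose last field is "Pending"
printf '%s\n' "$csr_output" | awk 'NR>1 && $NF=="Pending" {print $1}'
```

Against a live cluster, the same filter can drive bulk approval: kubectl get csr | awk 'NR>1 && $NF=="Pending" {print $1}' | xargs -r kubectl certificate approve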

4. View the cluster status (on the master)

[yx@tidb-tidb-03 cfg]$ kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
192.168.18.104   Ready    <none>   41s   v1.12.1
192.168.18.105   Ready    <none>   52s   v1.12.1

[yx@tidb-tidb-03 cfg]$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}

At this point the entire Kubernetes binary installation is complete; next comes actually operating the cluster.


Origin blog.51cto.com/825536458/2422563