Kubernetes (K8s) Installation and Deployment, Part 6: Deploying the Node Components

Preparation before deployment

1) Disable swap, otherwise kubelet will fail to start.

Edit /etc/fstab and comment out the following line:

/dev/mapper/cl-swap     swap                    swap    defaults        0 0

Then run:

swapoff -a
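The two steps above can be wrapped into a small idempotent helper. This is only a sketch; the sed pattern assumes a conventional fstab layout, and the function takes a path argument so it can be tried on a copy first:

```shell
# disable_swap: comment out swap entries in an fstab file, then turn swap off.
# Pass a file path for testing; defaults to /etc/fstab.
disable_swap() {
  local fstab="${1:-/etc/fstab}"
  # Comment every non-comment line that has a "swap" field.
  sed -ri 's/^([^#[:space:]].*[[:space:]]swap[[:space:]])/#\1/' "$fstab"
  # Turn off all active swap now; ignore errors if none is active.
  swapoff -a 2>/dev/null || true
}
```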

2) Disable SELinux

  There are two ways to disable SELinux:

  A. Temporarily, without rebooting the server:

  [root@localhost ~]# setenforce 0

  B. Permanently, which requires rebooting Linux:

  vi /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled
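Both steps can likewise be scripted; a sketch that takes the config path as a parameter so the edit can be tried on a copy first:

```shell
# disable_selinux: set SELinux to permissive now and to disabled on next boot.
disable_selinux() {
  local cfg="${1:-/etc/selinux/config}"
  # Immediate effect, no reboot; ignore the error on hosts without SELinux.
  setenforce 0 2>/dev/null || true
  # Persistent: rewrite the SELINUX= line (SELINUXTYPE= is left untouched).
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$cfg"
}
```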

3) Install the Docker service. For details, see CentOS 8: https://blog.csdn.net/baidu_38432732/article/details/105315880 or CentOS 7: https://blog.csdn.net/baidu_38432732/article/details/106432786

4) Edit the docker.service unit file and add the following line:

EnvironmentFile=/etc/flannel/subnet.env
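For reference, /etc/flannel/subnet.env is written by flanneld and typically looks like the lines below (the addresses here are illustrative, not taken from this cluster). Note that docker.service must also reference these variables in its ExecStart line (for example --bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}) for the file to have any effect:

```shell
# /etc/flannel/subnet.env -- generated by flanneld; example values only
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.46.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
```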

Then restart the Docker service:

[root@k8s_Node1 ~]# systemctl daemon-reload
[root@k8s_Node1 ~]# systemctl restart docker
[root@k8s_Node1 ~]# systemctl status docker

When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user defined in the bootstrap token file must first be bound to the system:node-bootstrapper cluster role, so that kubelet has permission to create certificate signing requests:

cd /etc/kubernetes
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

In practice, if the binding fails, the error message mentioning system:kube-controller-manager makes it easy to see that the real cause is an error in the certificate contents or the settings. Even so, it is best to confirm step by step:

# Check the current context
[root@k8s_Master ~]# kubectl config current-context
kubernetes

# Check the kubectl config settings
[root@k8s_Master ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.221:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

# Check that the relevant clusterrole exists
[root@k8s_Master ~]# kubectl get clusterrole |grep system:node-bootstrapper
system:node-bootstrapper                                               2020-08-24T17:51:06Z

# Look at the clusterrole details
[root@k8s_Master ~]# kubectl describe clusterrole system:node-bootstrapper
Name:         system:node-bootstrapper
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                       Non-Resource URLs  Resource Names  Verbs
  ---------                                       -----------------  --------------  -----
  certificatesigningrequests.certificates.k8s.io  []                 []              [create get list watch]

  • --user=kubelet-bootstrap is the user name specified in the /etc/kubernetes/token.csv file; it is also written into the /etc/kubernetes/bootstrap.kubeconfig file.
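For reference, /etc/kubernetes/token.csv is a single CSV line of the form token,user,uid,"group". The token string below is a placeholder, not a real value:

```shell
# /etc/kubernetes/token.csv -- one line: token,user,uid,"group"
# Generate a token with: head -c 16 /dev/urandom | od -An -t x | tr -d ' '
0123456789abcdef0123456789abcdef,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
```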

After kubelet passes authentication, it sends a register-node request to kube-apiserver. A kubelet-nodes cluster role binding must first grant the system:node cluster role to the system:nodes group, so that kubelet has permission to register the node:

kubectl create clusterrolebinding kubelet-nodes \
  --clusterrole=system:node \
  --group=system:nodes

Query the related configuration. As above, an error mentioning system:kube-controller-manager would point to a problem in the certificate contents or settings, but it is still best to confirm step by step:

# Check the current context
[root@k8s_Node2 ~]# kubectl config current-context
kubernetes

# Check the kubectl config settings
[root@k8s_Node2 ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.221:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

# Check that the relevant clusterroles exist:
[root@k8s_Node2 ~]#  kubectl get clusterrole |grep system:node
system:node                                                            2020-08-24T17:51:06Z
system:node-bootstrapper                                               2020-08-24T17:51:06Z
system:node-problem-detector                                           2020-08-24T09:33:10Z
system:node-proxier                                                    2020-08-24T09:34:09Z

# Look at the clusterrole details
[root@k8s_Node2 ~]# kubectl describe clusterrole system:node
Name:         system:node
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources                                       Non-Resource URLs  Resource Names  Verbs
  ---------                                       -----------------  --------------  -----
  leases.coordination.k8s.io                      []                 []              [create delete get patch update]
  csinodes.storage.k8s.io                         []                 []              [create delete get patch update]
  nodes                                           []                 []              [create get list watch patch update]
  certificatesigningrequests.certificates.k8s.io  []                 []              [create get list watch]
  events                                          []                 []              [create patch update]
  pods/eviction                                   []                 []              [create]
  serviceaccounts/token                           []                 []              [create]
  tokenreviews.authentication.k8s.io              []                 []              [create]
  localsubjectaccessreviews.authorization.k8s.io  []                 []              [create]
  subjectaccessreviews.authorization.k8s.io       []                 []              [create]
  pods                                            []                 []              [get list watch create delete]
  configmaps                                      []                 []              [get list watch]
  secrets                                         []                 []              [get list watch]
  services                                        []                 []              [get list watch]
  runtimeclasses.node.k8s.io                      []                 []              [get list watch]
  csidrivers.storage.k8s.io                       []                 []              [get list watch]
  persistentvolumeclaims/status                   []                 []              [get patch update]
  endpoints                                       []                 []              [get]
  persistentvolumeclaims                          []                 []              [get]
  persistentvolumes                               []                 []              [get]
  volumeattachments.storage.k8s.io                []                 []              [get]
  nodes/status                                    []                 []              [patch update]
  pods/status                                     []                 []              [patch update]

2. With the binaries in hand, start configuring the corresponding service files.

Add the kubelet configuration file:

  • For kubelet configuration in Kubernetes 1.8+ clusters, the KUBELET_API_SERVER setting was removed in favor of defining the master address in a kubeconfig file, so comment out the KUBELET_API_SERVER line.

vim /etc/kubernetes/kubelet

###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.0.222"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.0.222"
#
## location of the api-server
## COMMENT THIS ON KUBERNETES 1.8+
#KUBELET_API_SERVER="--api-servers=http://192.168.0.221:8080"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=pause-amd64:3.0"
#
## Add your own!
KUBELET_ARGS="--cgroup-driver=systemd --cluster-dns=10.254.0.2 --experimental-bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig  --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local --hairpin-mode promiscuous-bridge --serialize-image-pulls=false"

1) KUBELET_POD_INFRA_CONTAINER specifies the pod infrastructure (pause) image, which must exist. Here a local image is used; it can be pulled from:

[root@k8s_Node1 kubernetes]# docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
3.0: Pulling from google-containers/pause-amd64
a3ed95caeb02: Pull complete 
f11233434377: Pull complete 
Digest: sha256:3b3a29e3c90ae7762bdf587d19302e62485b6bef46e114b741f7d75dba023bd3
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

 
After pulling the image, tag it locally for convenience. Any other public pause image or online address also works, as long as it is not blocked.

docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 pause-amd64:3.0

3. After downloading the corresponding package, do the following:

[root@k8s_Node1 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz 
[root@k8s_Node1 ~]# cd kubernetes
[root@k8s_Node1 kubernetes]# tar -xf kubernetes-src.tar.gz 
[root@k8s_Node1 kubernetes]# cp -r ./server/bin/{kube-proxy,kubelet} /usr/local/bin/

4. Create the systemd unit file

[root@k8s_Node2 kubernetes]# vim /usr/lib/systemd/system/kubelet.service
[root@k8s_Node2 kubernetes]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/local/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_API_SERVER \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBE_ALLOW_PRIV \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

The kubelet configuration file is /etc/kubernetes/kubelet; change the IP addresses in it to each node's own IP address.

Note: before starting kubelet, the /var/lib/kubelet directory must be created manually.

[root@k8s_Node1 kubernetes]# mkdir /var/lib/kubelet

5. Start the service

[root@k8s_Node1 ~]# systemctl daemon-reload
[root@k8s_Node1 ~]# systemctl restart kubelet
[root@k8s_Node1 ~]# systemctl status kubelet
[root@k8s_Node1 ~]# netstat -atnpu|grep 6443
tcp        0      0 192.168.0.222:41730     192.168.0.221:6443      ESTABLISHED 11173/kubelet
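One small addition: the steps above start kubelet but do not enable it at boot (kube-proxy is enabled further below); to survive a reboot, kubelet can be enabled the same way:

```shell
# Make kubelet start automatically after a reboot, matching how
# kube-proxy is enabled in the kube-proxy section below.
systemctl enable kubelet
```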

6. Configure kube-proxy

1) Install conntrack (kube-proxy depends on it for connection tracking), then create the kube-proxy systemd unit file:

[root@k8s_Node2 kubernetes]# vim /usr/lib/systemd/system/kube-proxy.service
[root@k8s_Node2 kubernetes]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/local/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

2) The kube-proxy configuration file /etc/kubernetes/proxy:

[root@k8s_Node1 ~]# cat /etc/kubernetes/proxy
###
# kubernetes proxy config

# default config should be adequate

# Add your own!
KUBE_PROXY_ARGS="--bind-address=192.168.0.222 --hostname-override=192.168.0.222 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16"

3) Start the service:

[root@k8s_Node1 ~]# systemctl daemon-reload
[root@k8s_Node1 ~]# systemctl enable kube-proxy
Created symlink /etc/systemd/system/multi-user.target.wants/kube-proxy.service → /usr/lib/systemd/system/kube-proxy.service.
[root@k8s_Node1 ~]# systemctl start kube-proxy
[root@k8s_Node1 ~]# systemctl status kube-proxy

7. Verify the services
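Verification is done from the master. After kubelet starts, it submits a certificate signing request that must be approved before the node registers; a sketch of the typical check (the kubectl subcommands are standard, but CSR names and node IPs depend on your cluster):

```shell
# On the master: list pending CSRs submitted by the bootstrapping kubelets.
kubectl get csr
# Approve all pending requests
# (or approve one by name: kubectl certificate approve <csr-name>).
kubectl get csr -o name | xargs -r kubectl certificate approve
# The nodes should then appear and eventually report Ready.
kubectl get nodes
```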


Reposted from blog.csdn.net/baidu_38432732/article/details/108155121