[k8s Cluster Series - 06] Kubernetes Node Deployment

A Kubernetes Node runs the following components:

  • kubelet
  • kube-proxy
  • docker-ce
  • flanneld

Install and configure docker-ce

Uninstall old versions

yum remove docker docker-common  docker-selinux  docker-engine -y
ansible k8s-node -a 'yum remove docker docker-common  docker-selinux  docker-engine -y'

Install Docker CE

# install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2
ansible k8s-node -a 'yum install -y yum-utils device-mapper-persistent-data lvm2'

# Use the following command to set up the stable repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
ansible k8s-node -a 'yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo'

# switch the mirror to Aliyun
sed -i 's@https://download.docker.com/@https://mirrors.aliyun.com/docker-ce/@g' /etc/yum.repos.d/docker-ce.repo
ansible k8s-node -a "sed -i 's@https://download.docker.com/@https://mirrors.aliyun.com/docker-ce/@g' /etc/yum.repos.d/docker-ce.repo"

# install docker-ce
yum install docker-ce -y
ansible k8s-node -a 'yum install docker-ce -y'

Custom Docker configuration

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["http://1bdb58cb.m.daocloud.io"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

# batch configuration for all nodes
ansible k8s-node -a 'mkdir -p /etc/docker'
tee /root/daemon.json <<-'EOF'
{
  "registry-mirrors": ["http://1bdb58cb.m.daocloud.io"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
ansible k8s-node -m copy -a 'src=/root/daemon.json dest=/etc/docker/daemon.json'
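
A quick optional check that the JSON landed intact on every node (json.tool ships with the stock CentOS 7 python2):

# optional: verify daemon.json parses as valid JSON on every node
ansible k8s-node -m shell -a 'python -m json.tool /etc/docker/daemon.json'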

Start the Docker service

ansible k8s-node -m systemd -a 'daemon_reload=yes enabled=yes name=docker state=started'
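
To confirm the daemon came up with the expected storage driver, an optional sanity check (not part of the original procedure):

ansible k8s-node -a 'systemctl is-active docker'
ansible k8s-node -m shell -a 'docker info 2>/dev/null | grep -i "storage driver"'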

Deploy kubelet

kubelet official documentation

kubelet bootstrapping kubeconfig

RBAC authorization

When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user defined in the bootstrap token file must first be bound to the system:node-bootstrapper cluster role; only then does kubelet have permission to create certificate signing requests (certificatesigningrequests).
The next two commands can be run on any host that has kubectl configured.

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

--user=kubelet-bootstrap is the username specified in /etc/kubernetes/token.csv; it was also written into /etc/kubernetes/bootstrap.kubeconfig

In addition, create an RBAC binding for Node requests:

kubectl create clusterrolebinding kubelet-nodes --clusterrole=system:node --group=system:nodes
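
Optionally confirm that both bindings now exist (kubectl get accepts multiple names):

kubectl get clusterrolebinding kubelet-bootstrap kubelet-nodes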

Distribute the kubelet binary

ansible k8s-node -m copy -a 'src=/usr/local/src/kubernetes/server/bin/kubelet dest=/usr/local/kubernetes/bin/kubelet mode=0755'
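
To verify the binary landed and is executable, every node should report the same version:

ansible k8s-node -a '/usr/local/kubernetes/bin/kubelet --version'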

Create the kubelet systemd unit file

Create the working and log directories

mkdir -p /var/lib/kubelet
mkdir -p /var/log/kubernetes/kubelet

ansible k8s-node -m file -a 'path=/var/lib/kubelet state=directory'
ansible k8s-node -m file -a 'path=/var/log/kubernetes/kubelet state=directory'

Install conntrack

ansible k8s-node  -a 'yum install conntrack -y'

Distribute /etc/hosts

ansible k8s -m copy -a 'src=/etc/hosts dest=/etc/hosts'

systemd unit file

If you run kubelet on a master node, change node-role.kubernetes.io/k8s-node=true to node-role.kubernetes.io/k8s-master=true.

cat > /root/k8s-node/systemd/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/kubernetes/bin/kubelet \\
  --address=192.168.16.238 \\
  --hostname-override=k8s-n1-16-238 \\
  --node-labels=node-role.kubernetes.io/k8s-node=true \\
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --cert-dir=/etc/kubernetes/ssl \\
  --cluster-dns=10.254.0.2 \\
  --cluster-domain=dns.kubernetes \\
  --hairpin-mode promiscuous-bridge \\
  --feature-gates=RotateKubeletClientCertificate=true,RotateKubeletServerCertificate=true \\
  --fail-swap-on=false \\
  --cgroup-driver=cgroupfs \\
  --allow-privileged=true \\
  --pod-infra-container-image=clouding/pause-amd64:3.0 \\
  --serialize-image-pulls=false \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes/kubelet/ \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
  • --address must not be set to 127.0.0.1; otherwise later Pod access to the kubelet API will fail, because 127.0.0.1 inside a Pod points to the Pod itself rather than to the kubelet
  • if --hostname-override is set, kube-proxy must be given the same value, or the Node will not be found
  • --bootstrap-kubeconfig points to the bootstrap kubeconfig file; kubelet uses the username and token in it to send the TLS Bootstrapping request to kube-apiserver
  • after an administrator approves the CSR, kubelet automatically creates the certificate and private key (kubelet-client.crt and kubelet-client.key) under --cert-dir, then writes the file named by --kubeconfig (creating it automatically)
  • it is recommended to put the kube-apiserver address in the --kubeconfig file; if --api-servers is not given, --require-kubeconfig must be set (deprecated in 1.10+) before the kube-apiserver address is read from the config file; otherwise kubelet starts but cannot find kube-apiserver (the log reports no API Server found) and kubectl get nodes returns no entry for the Node
  • --cluster-dns sets the Service IP of kube-dns (the IP can be reserved now and assigned when the kube-dns service is created later); --cluster-domain sets the domain suffix; both flags must be set together to take effect

Distribute the file, then adjust the IP and hostname for each node (see the loop sketch below)

ansible k8s-node -m copy -a 'src=/root/k8s-node/systemd/kubelet.service dest=/usr/lib/systemd/system/kubelet.service'
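
Editing the unit file on seven nodes by hand is error-prone. A hypothetical alternative (not part of the original procedure) uses Ansible's replace module to patch the two per-node flags, assuming the inventory hostnames resolve to node IPs via the /etc/hosts distributed earlier:

# patch --address and --hostname-override on each node after copying the unit
for h in $(ansible k8s-node --list-hosts | tail -n +2); do
  ip=$(getent hosts "$h" | awk '{print $1}')
  ansible "$h" -m replace -a "path=/usr/lib/systemd/system/kubelet.service regexp=--address=\S+ replace=--address=$ip"
  ansible "$h" -m replace -a "path=/usr/lib/systemd/system/kubelet.service regexp=--hostname-override=\S+ replace=--hostname-override=$h"
done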

Start the kubelet service

ansible k8s-node -m systemd -a 'daemon_reload=yes enabled=yes name=kubelet state=started'

Approve the kubelet TLS certificate requests

When kubelet starts for the first time, it sends a certificate signing request to kube-apiserver; Kubernetes adds the Node to the cluster only after the request has been approved.

List the pending CSR requests

> kubectl get csr
NAME                                                   AGE       REQUESTOR           CONDITION
node-csr--vERPmYzSaAZqezwWDKoeyyXjK6KvVHAf5e1SQdHPZo   42s       kubelet-bootstrap   Pending
node-csr-1nFaIXpMrQ8TS_jAZFrCz86-lRsiYYWVbvynKsq6ebg   10m       kubelet-bootstrap   Pending
node-csr-6clvNX325wgtNd5UPjq8yMAImKp4Qa8XeSypVRK2bqU   40s       kubelet-bootstrap   Pending
node-csr-Ff-BmDSdgIF0Riyk0krAT0Bll_u5P4TLNbRU7HZ3T3M   41s       kubelet-bootstrap   Pending
node-csr-WWh63mfVRUOflQAnGPIfnTFro2hkswOL3RGy9P9vaVU   1m        kubelet-bootstrap   Pending
node-csr-ZAKQ_kY84ORptLMMIJPHu12BraxOLBMFJ33wj_mLM9Q   10m       kubelet-bootstrap   Pending
node-csr-vKRJanqdwG9TPXtY1x5e6KP0DJ5XvCWbr7e1tQb0-10   41s       kubelet-bootstrap   Pending

Approve a CSR request:

kubectl certificate approve node-csr--vERPmYzSaAZqezwWDKoeyyXjK6KvVHAf5e1SQdHPZo
certificatesigningrequest.certificates.k8s.io "node-csr--vERPmYzSaAZqezwWDKoeyyXjK6KvVHAf5e1SQdHPZo" approved

Approve them all in one shot: kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
Delete all approved CSR objects: kubectl get csr | grep Approved | awk '{print $1}' | xargs kubectl delete csr

The kubelet kubeconfig file and the key pair are generated automatically:

ls /etc/kubernetes/
kubelet.kubeconfig

ls /etc/kubernetes/ssl
ca-key.pem  ca.pem  kubelet-client.crt  kubelet-client.key  kubelet.crt  kubelet.key

Check the Node list:

> kubectl get node
NAME            STATUS    ROLES      AGE       VERSION
k8s-n1-16-238   Ready     k8s-node   12m       v1.10.3
k8s-n2-16-239   Ready     k8s-node   7m        v1.10.3
k8s-n3-16-240   Ready     k8s-node   12m       v1.10.3
k8s-n4-16-241   Ready     k8s-node   12m       v1.10.3
k8s-n5-16-242   Ready     k8s-node   12m       v1.10.3
k8s-n6-16-243   Ready     k8s-node   7m        v1.10.3
k8s-n7-16-244   Ready     k8s-node   7m        v1.10.3

Deploy kube-proxy with IPVS

Install the required packages

yum install conntrack-tools ipvsadm -y
ansible k8s-node -a 'yum install conntrack-tools ipvsadm -y'
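
The ipvs proxy mode needs the IPVS kernel modules, which a stock CentOS 7 kernel does not load at boot. A minimal sketch, added here for completeness (nf_conntrack_ipv4 is the CentOS 7 module name):

# load the IPVS kernel modules on every node, then show what loaded
ansible k8s-node -m shell -a 'for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done && lsmod | grep ip_vs'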

Distribute the kube-proxy binary

ansible k8s-node -m copy -a 'src=/usr/local/src/kubernetes/server/bin/kube-proxy dest=/usr/local/kubernetes/bin/kube-proxy mode=0755'

Create the kube-proxy systemd unit file

Create the working directory

mkdir -p /var/lib/kube-proxy
ansible k8s-node -m file -a 'path=/var/lib/kube-proxy state=directory'

Create the log directory

mkdir /var/log/kubernetes/kube-proxy
ansible k8s-node -m file -a 'path=/var/log/kubernetes/kube-proxy state=directory'

systemd unit

cat > kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/kubernetes/bin/kube-proxy \\
  --bind-address=192.168.16.238 \\
  --hostname-override=k8s-n1-16-238 \\
  --cluster-cidr=10.254.0.0/16 \\
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig \\
  --masquerade-all \\
  --feature-gates=SupportIPVSProxyMode=true \\
  --proxy-mode=ipvs \\
  --ipvs-min-sync-period=5s \\
  --ipvs-sync-period=5s \\
  --ipvs-scheduler=rr \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes/kube-proxy/ \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
  • --hostname-override must match the value given to kubelet, or kube-proxy will not find the Node after starting and will not create any iptables rules
  • --cluster-cidr must match the --service-cluster-ip-range option of kube-apiserver
  • kube-proxy uses --cluster-cidr to tell cluster-internal traffic from external traffic; only with --cluster-cidr or --masquerade-all set will kube-proxy SNAT requests to Service IPs
  • the file given by --kubeconfig embeds the kube-apiserver address plus the username, certificate, and key used for requests and authentication
  • the predefined ClusterRoleBinding system:node-proxier binds the user system:kube-proxy to the ClusterRole system:node-proxier, which grants access to the proxy-related kube-apiserver APIs

Distribute the unit file

ansible k8s-node -m copy -a 'src=/root/kube-proxy.service dest=/usr/lib/systemd/system/kube-proxy.service'   # adjust the IP and hostname per node, e.g. with the loop below
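
The same hypothetical replace-module loop shown for kubelet.service works here, targeting --bind-address and --hostname-override:

for h in $(ansible k8s-node --list-hosts | tail -n +2); do
  ip=$(getent hosts "$h" | awk '{print $1}')
  ansible "$h" -m replace -a "path=/usr/lib/systemd/system/kube-proxy.service regexp=--bind-address=\S+ replace=--bind-address=$ip"
  ansible "$h" -m replace -a "path=/usr/lib/systemd/system/kube-proxy.service regexp=--hostname-override=\S+ replace=--hostname-override=$h"
done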

Start kube-proxy

ansible k8s-node -m systemd -a 'daemon_reload=yes enabled=yes name=kube-proxy state=started'

Check IPVS

# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr persistent 10800
  -> 192.168.16.235:6443          Masq    1      0          0
  -> 192.168.16.236:6443          Masq    1      0          0
  -> 192.168.16.237:6443          Masq    1      0          0
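
As a final smoke test (also an addition to the original write-up), curl the kubernetes Service VIP from any node; any HTTP response from kube-apiserver, even a 401/403 JSON error, proves IPVS is forwarding:

curl -k https://10.254.0.1/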

Reposted from www.cnblogs.com/knmax/p/9213489.html