Part Four (binary deployment of a k8s cluster --- master cluster deployment)

This article deploys the k8s master cluster on the following hosts:
k8s-master1: 192.168.206.31
k8s-master2: 192.168.206.32
k8s-master3: 192.168.206.33

The main components of the kubernetes master node are:
kube-apiserver
kube-scheduler
kube-controller-manager
Currently, these three components need to be deployed on the same machine.
The functions of kube-scheduler, kube-controller-manager and kube-apiserver are closely related;
at any given time only one kube-scheduler and one kube-controller-manager process may be active; when several instances are running, a leader must be elected.

One. Deploy the kubectl command-line tool

kubectl is the command-line management tool for the kubernetes cluster. By default, kubectl reads the kube-apiserver address, certificate, user name and other information from the ~/.kube/config file. If this file is not configured, kubectl commands may fail.
1. Download kubectl

wget https://dl.k8s.io/v1.12.3/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kubeadm kube-controller-manager kubectl kube-scheduler /opt/kubernetes/bin/

2. Create the admin certificate signing request

cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Zhejiang",
      "L": "hangzhou",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF 

O is system:masters; when kube-apiserver receives this certificate it sets the request's Group to system:masters;
the predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants access to all APIs;
this certificate is only used by kubectl as a client certificate, so the hosts field is left empty;

Generate the certificate and private key:

cfssl gencert -ca=/data/ssl/ca.pem \
  -ca-key=/data/ssl/ca-key.pem \
  -config=/data/ssl/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin 

3. Create the ~/.kube/config file

mkdir -p  ~/.kube

kubectl config set-cluster kubernetes \
  --certificate-authority=/data/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.206.30:8443 \
  --kubeconfig=kubectl.kubeconfig

# Set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig

# Set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig

# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig 

Note: the file is copied into the ~/.kube/ directory under the name config; do not get this wrong.
cp kubectl.kubeconfig ~/.kube/config
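
Once kube-apiserver is up (deployed in section Two below), a quick sanity check of this configuration; kubectl config view works immediately, the second command needs a running apiserver:

kubectl config view                                 # shows the kubernetes cluster, the admin user and the current context
kubectl describe clusterrolebinding cluster-admin   # Group system:masters is bound to the cluster-admin ClusterRole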

Two. Deploy kube-apiserver

1. Create a certificate signing request for kube-apiserver:

cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.206.31",
    "192.168.206.32",
    "192.168.206.33",
    "192.168.206.36",
    "192.168.206.37",
    "192.168.206.30",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Zhejiang",
      "L": "hangzhou",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

The hosts field specifies the list of IPs and domain names authorized to use this certificate; here it lists the VIP, the apiserver node IPs, and the kubernetes service IP and domain names;
the last character of a domain name must not be . (for example kubernetes.default.svc.cluster.local. is invalid), otherwise resolution fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local.";
if a domain other than cluster.local is used, for example bqding.com, the last two names in the list must be changed to kubernetes.default.svc.bqding and kubernetes.default.svc.bqding.com;
the hosts are, in order, the master node IPs followed by the internal and public IPs of the load balancer.

Generate the certificate and private key:
cfssl gencert -ca=/data/ssl/ca.pem \
  -ca-key=/data/ssl/ca-key.pem \
  -config=/data/ssl/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

mkdir /opt/kubernetes/ssl/kubernetes
cp kubernetes*.pem /opt/kubernetes/ssl/kubernetes
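
To confirm that the hosts listed above really ended up in the certificate, inspect its Subject Alternative Names (an optional check):

openssl x509 -in /opt/kubernetes/ssl/kubernetes/kubernetes.pem -noout -text | grep -A1 "Subject Alternative Name"
# should list kubernetes.default.* plus the VIP, master and load-balancer IPs

Note that the kube-apiserver unit file in step 4 below also references ca.pem and ca-key.pem under /opt/kubernetes/ssl/kubernetes/, so copy those over from /data/ssl/ as well if they are not already in place.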

3. Create the encryption configuration file

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF

Distribute the encryption configuration file to the master nodes:
cp encryption-config.yaml /opt/kubernetes/ssl/kubernetes
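
After kube-apiserver has been started with this configuration (step 7 below), you can verify that Secrets really are stored encrypted by reading one straight from etcd. A sketch, assuming etcdctl v3 is available and reusing the kubernetes certificate as the etcd client certificate (the same files the apiserver uses in its --etcd-* flags):

kubectl create secret generic enc-test --from-literal=foo=bar
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.206.31:2379 \
  --cacert=/opt/kubernetes/ssl/kubernetes/ca.pem \
  --cert=/opt/kubernetes/ssl/kubernetes/kubernetes.pem \
  --key=/opt/kubernetes/ssl/kubernetes/kubernetes-key.pem \
  get /registry/secrets/default/enc-test | hexdump -C | head
# the stored value should begin with k8s:enc:aescbc:v1:key1 rather than plain text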

4. Create the kube-apiserver systemd unit file

cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-apiserver \
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/opt/kubernetes/ssl/kubernetes/encryption-config.yaml \
  --advertise-address=192.168.206.31 \
  --bind-address=192.168.206.31 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32700 \
  --tls-cert-file=/opt/kubernetes/ssl/kubernetes/kubernetes.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kubernetes/kubernetes-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/kubernetes/ca.pem \
  --kubelet-client-certificate=/opt/kubernetes/ssl/kubernetes/kubernetes.pem \
  --kubelet-client-key=/opt/kubernetes/ssl/kubernetes/kubernetes-key.pem \
  --service-account-key-file=/opt/kubernetes/ssl/kubernetes/ca-key.pem \
  --etcd-cafile=/opt/kubernetes/ssl/kubernetes/ca.pem \
  --etcd-certfile=/opt/kubernetes/ssl/kubernetes/kubernetes.pem \
  --etcd-keyfile=/opt/kubernetes/ssl/kubernetes/kubernetes-key.pem \
  --etcd-servers=https://192.168.206.31:2379,https://192.168.206.32:2379,https://192.168.206.33:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

--experimental-encryption-provider-config: enables encryption of Secrets at rest;
--authorization-mode=Node,RBAC: enables the Node and RBAC authorization modes and rejects unauthorized requests;
--enable-admission-plugins: enables admission plugins such as ServiceAccount and NodeRestriction;
--service-account-key-file: public key file used to verify ServiceAccount Tokens; it is paired with the private key file that kube-controller-manager receives via --service-account-private-key-file;
--tls-*-file: the certificate, private key and CA files used by the apiserver; --client-ca-file is used to verify the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
--kubelet-client-certificate, --kubelet-client-key: if specified, the apiserver accesses the kubelet APIs over https; RBAC rules must be defined for the user of that certificate (the user of the kubernetes*.pem certificate above is kubernetes), otherwise calls to the kubelet API are rejected as unauthorized;
--bind-address: must not be 127.0.0.1, otherwise the secure port 6443 cannot be reached from outside;
--insecure-port=0: disables the insecure port (8080);
--service-cluster-ip-range: the Service Cluster IP address range;
--service-node-port-range: the NodePort port range;
--runtime-config=api/all=true: enables all API versions, such as autoscaling/v2alpha1;
--enable-bootstrap-token-auth: enables token authentication for kubelet bootstrap;
--apiserver-count=3: the number of kube-apiserver instances in the cluster; unlike kube-controller-manager and kube-scheduler, all apiserver instances serve requests concurrently (there is no leader election), the count is only used when reconciling the kubernetes service endpoints;

The kube-apiserver.service file must also be distributed to the other masters; remember to change --advertise-address and --bind-address to each node's own IP.
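
A minimal way to push the unit file and the certificate/config directory to the other masters, assuming root SSH access and the same /opt/kubernetes directory layout on every node:

for host in 192.168.206.32 192.168.206.33; do
  scp /etc/systemd/system/kube-apiserver.service root@${host}:/etc/systemd/system/
  scp -r /opt/kubernetes/ssl/kubernetes root@${host}:/opt/kubernetes/ssl/
done
# edit --advertise-address and --bind-address on each target node before starting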

7. Start the kube-apiserver service

 systemctl stop kube-apiserver
 systemctl daemon-reload
 systemctl enable kube-apiserver
 systemctl start kube-apiserver

8. Check the kube-apiserver and cluster status

[root@k8s-master1 ~]# netstat -ptln | grep kube-apiserve
tcp        0      0 192.168.206.31:6443     0.0.0.0:*               LISTEN      985/kube-apiserver  
[root@k8s-master1 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.206.30:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
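
Besides kubectl cluster-info, the secure port can be probed directly with the admin certificate generated earlier (a sketch that assumes admin.pem and admin-key.pem are still in the working directory):

curl --cacert /data/ssl/ca.pem \
  --cert admin.pem --key admin-key.pem \
  https://192.168.206.31:6443/healthz
# expected output: ok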

9. Grant the kubernetes certificate access to the kubelet API

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
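
This binding is what allows the apiserver (which presents the kubernetes certificate to kubelets, see --kubelet-client-certificate above) to call the kubelet API, e.g. for kubectl exec and kubectl logs. To confirm it exists:

kubectl describe clusterrolebinding kube-apiserver:kubelet-apis
# Role: system:kubelet-api-admin, Subject: User kubernetes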

Three. Deploy kube-controller-manager
The cluster contains 3 nodes. After start-up a leader is chosen through an election mechanism and the other nodes block. When the leader becomes unavailable, the remaining nodes hold a new election to produce a new leader, guaranteeing service availability.
To secure communications, this document first generates an x509 certificate and private key, which kube-controller-manager uses in the following two situations: when talking to the secure port of kube-apiserver (via the kubeconfig below), and when serving metrics over https (via --tls-cert-file/--tls-private-key-file).

1. Create the kube-controller-manager certificate request:

cat > kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.206.31",
      "192.168.206.32",
      "192.168.206.33"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "Zhejiang",
        "L": "hangzhou",
        "O": "system:kube-controller-manager",
        "OU": "System"
      }
    ]
}
EOF

The hosts list contains the IPs of all kube-controller-manager nodes;
CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs (a quick check of the certificate subject is shown after the generation step below).
Generate the certificate and private key:
cfssl gencert -ca=/data/ssl/ca.pem \
  -ca-key=/data/ssl/ca-key.pem \
  -config=/data/ssl/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
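
Before distributing, the subject of the new certificate can be checked against the CN/O values described above (an optional check):

openssl x509 -in kube-controller-manager.pem -noout -subject
# should contain CN=system:kube-controller-manager and O=system:kube-controller-manager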

Distribute the generated certificate and private key to all master nodes:
 mkdir /opt/kubernetes/ssl/kube-controller-manager
 cp kube-controller-manager*.pem /opt/kubernetes/ssl/kube-controller-manager/

2. Create and distribute the kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=/data/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.206.30:8443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Distribute kube-controller-manager.kubeconfig to all master nodes:
cp kube-controller-manager.kubeconfig /opt/kubernetes/ssl/kube-controller-manager/
cp /opt/kubernetes/ssl/kubernetes/ca* /opt/kubernetes/ssl/kube-controller-manager/

3. Create and distribute the kube-controller-manager systemd unit file

cat > /etc/systemd/system/kube-controller-manager.service  << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --kubeconfig=/opt/kubernetes/ssl/kube-controller-manager/kube-controller-manager.kubeconfig \
  --authentication-kubeconfig=/opt/kubernetes/ssl/kube-controller-manager/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/kube-controller-manager/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/kube-controller-manager/ca-key.pem \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/opt/kubernetes/ssl/kube-controller-manager/ca.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/kube-controller-manager/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/opt/kubernetes/ssl/kube-controller-manager/kube-controller-manager.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kube-controller-manager/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Also distribute the kube-controller-manager systemd unit file to the other master servers.

--address: listen only on 127.0.0.1;
--kubeconfig: path of the kubeconfig file that kube-controller-manager uses to connect to and authenticate against kube-apiserver;
--cluster-signing-*-file: used to sign the certificates created during TLS bootstrap;
--experimental-cluster-signing-duration: validity period of the certificates signed during TLS bootstrap;
--root-ca-file: the CA certificate placed into Pod ServiceAccounts, used to verify the kube-apiserver certificate;
--service-account-private-key-file: private key used to sign ServiceAccount Tokens; must be paired with the public key file passed to kube-apiserver via --service-account-key-file;
--service-cluster-ip-range: the Service Cluster IP range; must match the parameter of the same name in kube-apiserver;
--leader-elect=true: cluster mode with leader election enabled; the node elected as leader does the work while the other nodes block;
--feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
--controllers=*,bootstrapsigner,tokencleaner: list of controllers to enable; tokencleaner automatically cleans up expired bootstrap tokens;
--horizontal-pod-autoscaler-*: custom-metrics related parameters, supporting autoscaling/v2alpha1;
--tls-cert-file, --tls-private-key-file: server certificate and key used when serving metrics over https;
--use-service-account-credentials=true: each controller in kube-controller-manager uses a separate ServiceAccount to access kube-apiserver, so RBAC permissions can be granted per controller;

4. Start the kube-controller-manager service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

5. Check the kube-controller-manager service

[root@k8s-master1 ssl]# netstat -lnpt|grep kube-controll
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      17906/kube-controll 
tcp6       0      0 :::10257                :::*                    LISTEN      17906/kube-controll
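
The health endpoint on the insecure port (10252, visible in the netstat output above) can also be queried locally:

curl -s http://127.0.0.1:10252/healthz
# expected output: ok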

6. View the current kube-controller-manager leader

[root@master1 ssl]# kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master1_0f2ea8d8-2955-11eb-84f5-000c296e7f49","leaseDurationSeconds":15,"acquireTime":"2020-11-18T04:18:06Z","renewTime":"2020-11-18T04:20:33Z","leaderTransitions":0}'
  creationTimestamp: 2020-11-18T04:18:06Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "3578"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 0f2fc6db-2955-11eb-b0b5-000c29979eeb

Four. Deploy kube-scheduler
The cluster contains 3 nodes. After start-up a leader is chosen through an election mechanism and the other nodes block. When the leader becomes unavailable, the remaining nodes hold a new election to produce a new leader, guaranteeing service availability.
1. Create the kube-scheduler certificate request

cat > kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.206.31",
      "192.168.206.32",
      "192.168.206.33"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "Zhejiang",
        "L": "hangzhou",
        "O": "system:kube-scheduler",
        "OU": "System"
      }
    ]
}
EOF

The hosts list contains the IPs of all kube-scheduler nodes;
CN and O are both system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.

Generate the certificate and private key:
cfssl gencert -ca=/data/ssl/ca.pem \
  -ca-key=/data/ssl/ca-key.pem \
  -config=/data/ssl/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

2. Create and distribute the kube-scheduler.kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=/data/ssl/ca.pem \
  --embed-certs=true \
  --server=https://192.168.206.30:8443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

Distribute the kubeconfig to all master nodes:
mkdir /opt/kubernetes/ssl/kube-scheduler 
cp kube-scheduler.kubeconfig /opt/kubernetes/ssl/kube-scheduler 

3. Create and distribute the kube-scheduler systemd unit file

cat > /etc/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/opt/kubernetes/bin/kube-scheduler \
  --address=127.0.0.1 \
  --kubeconfig=/opt/kubernetes/ssl/kube-scheduler/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

--address: serves http /metrics requests on 127.0.0.1:10251; kube-scheduler does not yet support serving https;
--kubeconfig: path of the kubeconfig file that kube-scheduler uses to connect to and authenticate against kube-apiserver;
--leader-elect=true: cluster mode with leader election enabled; the node elected as leader does the work while the other nodes block;

4. Start the kube-scheduler service

 systemctl daemon-reload
 systemctl enable kube-scheduler
 systemctl start kube-scheduler 
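
A quick local health check against the insecure port 10251 (see the --address note above):

curl -s http://127.0.0.1:10251/healthz
# expected output: ok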

5. View the current kube-scheduler leader

[root@master1 kube-scheduler]#  kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"master1_c2b12771-2957-11eb-a36d-000c296e7f49","leaseDurationSeconds":15,"acquireTime":"2020-11-18T04:37:28Z","renewTime":"2020-11-18T04:38:58Z","leaderTransitions":0}'
  creationTimestamp: 2020-11-18T04:37:28Z
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "4509"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: c34cf106-2957-11eb-a5a4-000c2936c402

6. Verify on each master node that the components are healthy

[root@master1 kube-scheduler]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-2               Healthy   {"health":"true"}   
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}
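
Finally, the leader-election behaviour described at the start of sections Three and Four can be exercised by stopping the component on the current leader and watching the lease move to another master (an optional test):

# run on the master named in holderIdentity above
systemctl stop kube-controller-manager
# on any master: the holder should switch to another node within the lease duration
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
systemctl start kube-controller-manager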


Source: blog.51cto.com/14033037/2552486