Recovery plan for a K8s cluster whose certificates have expired, leaving etcd and the apiserver unable to work

In this extreme case, with careful planning and operation, the cluster will not be completely dead. The root CA certificates have a ten-year validity period and should not have expired yet, so we can use them to regenerate working certificates for every component.
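Before regenerating anything, it is worth confirming that the root CAs really are still valid. A minimal openssl check, assuming the standard kubeadm layout under /etc/kubernetes/pki (adjust the paths for your cluster):

```shell
# Print the expiry date of each root CA, assuming a kubeadm layout;
# files that do not exist on this node are silently skipped.
cert_end_date() {
  openssl x509 -in "$1" -noout -enddate | cut -d= -f2
}

for ca in /etc/kubernetes/pki/ca.crt \
          /etc/kubernetes/pki/front-proxy-ca.crt \
          /etc/kubernetes/pki/etcd/ca.crt; do
  if [ -f "$ca" ]; then
    echo "$ca expires: $(cert_end_date "$ca")"
  fi
done
```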

The following plan was drafted first; it should work, but the steps remain to be verified in practice.

First, prepare the base files used to produce the certificates.

ca-csr.json (the root certificate is still fine, so this file is not actually needed; it is listed here only for reference)

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "ca": {
    "expiry": "438000h"
  }
}

ca-config.json (used when signing new certificates with ca.crt and ca.key; one copy can be shared by all the signing steps below)

{
    "signing": {
        "default": {
            "expiry": "43800h"
        },
        "profiles": {
            "server": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}

Second, regenerate the etcd certificates (note: these are signed with the CA under the /etc/kubernetes/pki/etcd directory)

etcd-server.json

{
    "CN": "etcdServer",
    "hosts": [
        "127.0.0.1",
        "localhost",
        "<lowercase-hostname>",
        "<host-ip>"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "O": "etcd",
            "OU": "etcd Security",
            "C": "CN",
            "L": "ShangHai",
            "ST": "ShangHai"
        }
    ]
}
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=server etcd-server.json|cfssljson -bare server

etcd-peer.json

{
    "CN": "etcdPeer",
    "hosts": [
        "127.0.0.1",
        "localhost",
        "<lowercase-hostname>",
        "<host-ip>"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "O": "etcd",
            "OU": "etcd Security",
            "C": "CN",
            "L": "ShangHai",
            "ST": "ShangHai"
        }
    ]
}
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=peer etcd-peer.json|cfssljson -bare peer

etcd-client.json

{
    "CN": "etcdClient",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "O": "etcd",
            "OU": "etcd Security",
            "C": "CN",
            "L": "ShangHai",
            "ST": "ShangHai"
        }
    ]
}
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=client etcd-client.json|cfssljson -bare client

Third, regenerate the apiserver certificates (note: these are signed with the CA under the /etc/kubernetes/pki directory)

apiserver.json

{
    "CN": "kube-apiserver",
    "hosts": [
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster.local",
        "10.96.0.1",
        "<lowercase-hostname>",
        "<host-ip>"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    }
}
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=server apiserver.json|cfssljson -bare apiserver
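After generating the certificate, it is worth checking that the full SAN list actually made it into apiserver.pem (the cfssljson output from the command above); a missing entry here is a common cause of TLS failures later. A quick check:

```shell
# Print the subjectAltName extension of a certificate and check that the
# in-cluster service name is present. Uses -ext (openssl 1.1.1+) and
# falls back to grepping the full text output on older versions.
cert_sans() {
  openssl x509 -in "$1" -noout -ext subjectAltName 2>/dev/null ||
    openssl x509 -in "$1" -noout -text | grep -A1 'Subject Alternative Name'
}

if [ -f apiserver.pem ]; then
  cert_sans apiserver.pem | grep -q 'kubernetes.default.svc.cluster.local' &&
    echo "apiserver SANs look complete"
fi
```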

apiserver-kubelet-client.json

{
    "CN": "kube-apiserver-kubelet-client",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "O": "system:masters"
        }
    ]
}
cfssl gencert -ca=ca.crt -ca-key=ca.key -config=ca-config.json -profile=client apiserver-kubelet-client.json|cfssljson -bare apiserver-kubelet-client

Fourth, regenerate the front-proxy certificate (note: this is signed with the front-proxy-ca under the /etc/kubernetes/pki directory, which must NOT be the same CA as the apiserver's; it is part of the apiserver's aggregation authentication chain, so keep the two CAs separate)

front-proxy-client.json

{
    "CN": "front-proxy-client",
    "key": {
        "algo": "rsa",
        "size": 2048
    }
}
cfssl gencert -ca=front-proxy-ca.crt -ca-key=front-proxy-ca.key -config=ca-config.json -profile=client front-proxy-client.json|cfssljson -bare front-proxy-client
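Since mixing up the two CAs breaks aggregation authentication, a quick sanity check that front-proxy-ca.crt really is a different certificate from ca.crt can save debugging time later (paths assume a kubeadm layout):

```shell
# Compare the fingerprints of the cluster CA and the front-proxy CA;
# they must differ, or aggregated API authentication will misbehave.
if [ -f /etc/kubernetes/pki/ca.crt ] && [ -f /etc/kubernetes/pki/front-proxy-ca.crt ]; then
  cluster_fp=$(openssl x509 -in /etc/kubernetes/pki/ca.crt -noout -fingerprint)
  proxy_fp=$(openssl x509 -in /etc/kubernetes/pki/front-proxy-ca.crt -noout -fingerprint)
  if [ "$cluster_fp" != "$proxy_fp" ]; then
    echo "front-proxy CA is distinct from the cluster CA"
  else
    echo "WARNING: front-proxy CA and cluster CA are the same certificate"
  fi
fi
```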

Fifth, if sa.key and sa.pub exist under /etc/kubernetes/pki, they do not need to be renewed, because these keys have no concept of expiry.
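A quick way to confirm the pair is still consistent (a hypothetical helper; paths assume the kubeadm layout):

```shell
# Check that sa.pub is really the public half of sa.key by re-deriving
# the public key from the private key and comparing the two files.
sa_pair_ok() {
  openssl rsa -in "$1" -pubout 2>/dev/null > /tmp/sa_check.pub
  cmp -s /tmp/sa_check.pub "$2"
}

if [ -f /etc/kubernetes/pki/sa.key ]; then
  if sa_pair_ok /etc/kubernetes/pki/sa.key /etc/kubernetes/pki/sa.pub; then
    echo "sa key pair is consistent"
  else
    echo "WARNING: sa.key and sa.pub do not match"
  fi
fi
```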

Sixth, once the files above are ready, rename them to follow the k8s naming conventions and place each one in its expected directory.
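As a sketch, the mapping from the cfssljson output names above to the file names a kubeadm-built cluster usually expects might look like the following; the target names are assumptions, so verify them against your own static pod manifests and back up the old pki directory first:

```shell
# Copy each cfssljson output pair (<name>.pem / <name>-key.pem) to a
# target <path>.crt / <path>.key; pairs that were not generated are skipped.
# PKI defaults to the kubeadm location but can be overridden for a dry run.
PKI=${PKI:-/etc/kubernetes/pki}

install_pair() {
  if [ -f "$1.pem" ]; then
    cp "$1.pem"     "$2.crt"
    cp "$1-key.pem" "$2.key"
  fi
}

install_pair server                   "$PKI/etcd/server"
install_pair peer                     "$PKI/etcd/peer"
install_pair client                   "$PKI/etcd/healthcheck-client"
install_pair apiserver                "$PKI/apiserver"
install_pair apiserver-kubelet-client "$PKI/apiserver-kubelet-client"
install_pair front-proxy-client       "$PKI/front-proxy-client"
```

Depending on your setup, the etcd client certificate may also need to be installed as apiserver-etcd-client.crt/.key so the apiserver can reach etcd.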

Seventh, at this point the k8s master should start. Next, produce the kubeconfig files; reference URLs:

https://www.cnblogs.com/netsa/p/8134000.html (bootstrap configuration and kubelet certificates)

https://www.cnblogs.com/charlieroro/p/8489515.html (configuring the .kube/config file)

# Set cluster parameters
kubectl config set-cluster
# Set client authentication parameters
kubectl config set-credentials
# Set context parameters
kubectl config set-context
# Set the default context
kubectl config use-context

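Filled out with typical flags, the four commands might look like the following sketch; the admin certificate names, the server address, and the output file admin.conf are all assumptions to adapt to your cluster:

```shell
# Sketch: build an admin kubeconfig with the four kubectl config commands.
# Guarded so it is a no-op where kubectl is not installed; every path and
# name below is an assumption, not a fixed convention.
if command -v kubectl >/dev/null 2>&1; then
  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.crt \
    --server=https://127.0.0.1:6443 \
    --kubeconfig=admin.conf
  kubectl config set-credentials kubernetes-admin \
    --client-certificate=/etc/kubernetes/pki/admin.pem \
    --client-key=/etc/kubernetes/pki/admin-key.pem \
    --kubeconfig=admin.conf
  kubectl config set-context kubernetes-admin@kubernetes \
    --cluster=kubernetes \
    --user=kubernetes-admin \
    --kubeconfig=admin.conf
  kubectl config use-context kubernetes-admin@kubernetes \
    --kubeconfig=admin.conf
fi
```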
Eighth, after these files are produced, distribute them to the locations the k8s installation expects, restart the kubelet, and the cluster should come back up.


Origin www.cnblogs.com/aguncn/p/10936774.html