Problems encountered when installing metrics-server in a binary-deployed Kubernetes cluster

After deploying metrics-server in Kubernetes, running kubectl top pod or kubectl top node reports the following error:
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)

1. Troubleshooting steps:

1.1. View the metrics-server service log

Cluster doesn't provide requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.

Inspection of the logs shows that HTTP 403 was returned because the caller lacked permission to call metrics-server.
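
To view these logs yourself, query the metrics-server pod directly. A minimal check, assuming metrics-server runs as a Deployment named metrics-server in the kube-system namespace with the upstream label k8s-app=metrics-server:

kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl -n kube-system logs deployment/metrics-server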

1.2. Check if the following parameters are configured

        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls=true
          - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS
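
If these arguments are already in place, it also helps to confirm that the aggregated API has been registered and whether it reports as available. A quick check, assuming the standard APIService name v1beta1.metrics.k8s.io used by metrics-server:

kubectl get apiservice v1beta1.metrics.k8s.io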

1.3. Summary of the problem:

The metrics-server configuration itself is fine, yet requests still fail with "Error from server (ServiceUnavailable): the server is currently unable to handle the request (get pods.metrics.k8s.io)". There are two ways to solve this:

1. Bind the cluster-admin cluster role to the user system:anonymous (quick, but it grants full cluster access to unauthenticated requests, so only use it for testing):

kubectl create clusterrolebinding system:anonymous  --clusterrole=cluster-admin  --user=system:anonymous

2. Create the system:metrics-server role and authorize it

2. Solution (create the system:metrics-server role and authorize it)

Configure the metrics-server certificate. The CN in the CSR below, system:metrics-server, is the user name that the RBAC binding later grants access to.

# vim metrics-server-csr.json
{
  "CN": "system:metrics-server",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes metrics-server-csr.json | cfssljson -bare metrics-server
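
The kube-apiserver flags below reference the certificate under /opt/kubernetes/ssl; if your binary deployment keeps its certificates in a different directory, adjust the destination accordingly:

cp metrics-server.pem metrics-server-key.pem /opt/kubernetes/ssl/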

Configure metrics-server RBAC authorization

cat > auth-metrics-server.yaml << EOF
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:auth-metrics-server-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-metrics-server-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:metrics-server
EOF
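
Apply the manifest to create the ClusterRole and ClusterRoleBinding:

kubectl apply -f auth-metrics-server.yaml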

Add the parameters required by metrics-server (the API aggregation layer) to the kube-apiserver startup options. Note that --requestheader-allowed-names must include the CN of the proxy client certificate, system:metrics-server in this case:

--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
--requestheader-allowed-names=aggregator,metrics-server,system:metrics-server \
--requestheader-extra-headers-prefix=X-Remote-Extra- \
--requestheader-group-headers=X-Remote-Group \
--requestheader-username-headers=X-Remote-User \
--proxy-client-cert-file=/opt/kubernetes/ssl/metrics-server.pem \
--proxy-client-key-file=/opt/kubernetes/ssl/metrics-server-key.pem 
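
In a binary deployment kube-apiserver normally runs as a systemd service. After changing its startup parameters, reload and restart it on every master node (the unit name kube-apiserver is an assumption; use whatever your deployment calls it):

systemctl daemon-reload
systemctl restart kube-apiserver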

Finally, check that monitoring information can now be retrieved.
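
The commands that failed at the beginning should now return data, and the APIService should report Available (actual resource figures will vary per cluster):

kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top node
kubectl top pod -A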

Source: blog.csdn.net/heian_99/article/details/115137827