Deploying the Kubernetes Dashboard (web management UI)

Official documentation: https://kubernetes.io/zh/docs/tasks/access-application-cluster/web-ui-dashboard/

Let's first look at a screenshot of a successful deployment. As the web UI for Kubernetes, the Dashboard makes it easy to manage the cluster and the applications running in it.

1 The kubernetes-dashboard.yaml file

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
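# Bind the dashboard ServiceAccount to the built-in cluster-admin ClusterRole,
# giving its token full cluster access (much broader than the minimal Role above).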
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
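# Dashboard Deployment: one replica serving HTTPS on 8443 with auto-generated
# certificates; the toleration also allows scheduling onto master nodes.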
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
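# Expose the dashboard outside the cluster: NodePort 31080 forwards to container port 8443 (HTTPS).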
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31080
  selector:
    k8s-app: kubernetes-dashboard
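The manifest creates a certs Secret, a ServiceAccount, a minimal Role, a ClusterRoleBinding to cluster-admin, the Deployment, and a NodePort Service, all in the kube-system namespace. After step 3 you can confirm each object exists with commands along these lines (the names are the ones defined above):

kubectl -n kube-system get secret kubernetes-dashboard-certs
kubectl -n kube-system get serviceaccount kubernetes-dashboard
kubectl -n kube-system get deployment kubernetes-dashboard
kubectl -n kube-system get service kubernetes-dashboard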

2 Download the Docker image and load it locally on each node, so the image already exists there and Kubernetes does not have to pull it from the overseas registry, which often times out.

Link: https://pan.baidu.com/s/1J0oebrRxKN3o1q-nnEUFcw Extraction code: fio8 
Since we don't know in advance which node the dashboard will be scheduled on, load the image on every worker node:

docker load < k8s.gcr.io#kubernetes-dashboard-amd64.tar
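If the tarball only sits on the master, one way to push it out is a small SSH loop. A rough sketch, assuming the worker nodes are reachable under the hypothetical hostnames node1 and node2 and root SSH access is set up:

# hypothetical node hostnames; adjust to your cluster
for n in node1 node2; do
  scp "k8s.gcr.io#kubernetes-dashboard-amd64.tar" root@$n:/tmp/
  ssh root@$n 'docker load < /tmp/k8s.gcr.io#kubernetes-dashboard-amd64.tar && docker images | grep kubernetes-dashboard-amd64'
done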

3 Create the dashboard

kubectl create -f  kubernetes-dashboard.yaml

4 Check the pod status

kubectl get pods -n kube-system -o wide
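To wait until the dashboard pod reaches Running, you can also watch just the pods carrying the k8s-app=kubernetes-dashboard label from the manifest:

kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard -o wide -w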

5 Access the dashboard

Open https://10.238.162.34:31080/ in the browser.
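Here 10.238.162.34 is one of the node IPs and 31080 is the nodePort fixed in the Service above; if you let Kubernetes choose the port instead, it can be looked up like this:

kubectl -n kube-system get service kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}'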

Click to accept the risk and continue 

Get the login token:

# kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token
Name:         namespace-controller-token-pb2zr
Type:  kubernetes.io/service-account-token
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlB3OWJyUkxqejBWenJ5eTI1ejdxU2gzcVBvTTJYUXBPdkZGN3ZLQ0s1ZU0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlci10b2tlbi1wYjJ6ciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJuYW1lc3BhY2UtY29udHJvbGxlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImU2NDJhYWY4LWU4ODYtNDQwOC04NTZlLTg5NjU2NDdlMDZkZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpuYW1lc3BhY2UtY29udHJvbGxlciJ9.eaCMquHlzVv-rziWC1F6_QcnyQzqYbbzzi0aZotlNx3QvSL9VgHXXYUqfZs9l2S0JdJA5nu4SSKDTU6846FFD0EXKXxq9PSFx1fXrodTn4n_ISkwFOlLXoCH0x6vV8mx9KmROQ7UJ9JMF0FkGs-PlIo-ZSqz9Z6dTYU4KbVk8pvHXOUfSAt9t2lwMup1QZxwokBfGlvu_jiA8GG-vcoOr9YI-OnaXFxgAkdVozJu0ouRMNR0MWdaIhmoELbyO6fGaqvh4PXyLl6g68JlP-vBljFaeOz8voA9sj7lsmFZVImJ4A0Icj_IxM7hfGJFo7ILyPddePP3kvpd9pj8h3Y4tg
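The command above pulls the token of the namespace-controller service account. Since the ClusterRoleBinding in the manifest grants cluster-admin to the kubernetes-dashboard ServiceAccount, its own token should work just as well; a sketch, assuming the cluster still auto-creates a kubernetes-dashboard-token-<hash> secret for the service account (the behaviour of Kubernetes releases of this era):

kubectl -n kube-system describe $(kubectl -n kube-system get secret -o name | grep kubernetes-dashboard-token) | grep token: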

Select the Token sign-in option, paste the token, and sign in.

 

 

Error 1: pod status stuck in ContainerCreating

kubectl get pods -n kube-system -o wide

kubectl describe pod kubernetes-dashboard-79d78c59fb-bz7n4 --namespace=kube-system

 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "8ce7a06dcbd203b43d356f3125c846258fc8bfa2d396a29216e041f171f233ff" network for pod "kubernetes-dashboard-79d78c59fb-bz7n4": networkPlugin cni failed to set up pod "kubernetes-dashboard-79d78c59fb-bz7n4_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.1.1/24

Solution:

The failing pod was scheduled on node1, so check the network interfaces on node1:

ifconfig

ifconfig cni0 down

ip link delete cni0
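After the stale bridge is removed, the CNI plugin should recreate cni0 with the correct address the next time a pod sandbox is set up on node1. Deleting the stuck pod forces a retry (the pod name is the one reported by kubectl get pods above):

kubectl -n kube-system delete pod kubernetes-dashboard-79d78c59fb-bz7n4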

https://blog.csdn.net/Wuli_SmBug/article/details/104712653

 

Error 2: pod status is ImagePullBackOff

Download the image manually (as in step 2).

Then, on every worker node:

docker load < k8s.gcr.io#kubernetes-dashboard-amd64.tar

 docker images

 kubectl delete -f  kubernetes-dashboard.yaml

 kubectl create -f  kubernetes-dashboard.yaml

kubectl get pods -n kube-system -o wide

The pod is now running on node2.
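If no tarball is at hand, another (untested here) option is to pull the image from a domestic mirror and retag it to the name the manifest expects; a sketch, assuming the registry.cn-hangzhou.aliyuncs.com mirror used later for v1.10.0 also hosts the v1.8.3 tag:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.8.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3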

Error 3: entering plain ip:port (10.238.162.34:31080) in Firefox shows a page of garbled characters

Solution: the Service forwards to the dashboard's HTTPS port 8443, so the page must be opened with the https:// scheme. Open https://10.238.162.34:31080/, click to accept the risk and continue, then get the token and sign in exactly as in step 5.


Error 4: some pages return 404

Clicking some tabs returns an error, probably because the dashboard version (v1.8.3) is too old for this cluster.

error_outline Not Found (404)

the server could not find the requested resource
Redirecting to the previous page in 3 seconds...

Command to view the dashboard logs:

kubectl logs -f  kubernetes-dashboard-bbfcb94b8-p9m2p -n kube-system

For the error resolution, refer to https://blog.csdn.net/ppppppushcar/article/details/102608450

https://www.cnblogs.com/wucaiyun1/p/11692204.html

Delete the old deployment:

kubectl delete -f kubernetes-dashboard.yaml

Check the pods:

kubectl get pods -n kube-system -o wide

Download version v1.10.0:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0

In kubernetes-dashboard.yaml, change the image to registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0.
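For example, the substitution can be done with sed, using # as the delimiter so the slashes in the image names do not clash:

sed -i 's#k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3#registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0#' kubernetes-dashboard.yaml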

Redeploy 

 kubectl create -f kubernetes-dashboard.yaml

Get the token again (the command and output are the same as in step 5):

[root@k8s-master ~]#  kubectl -n kube-system describe $(kubectl -n kube-system get secret -n kube-system -o name | grep namespace) | grep token

Access it from Chrome.

References

https://blog.csdn.net/java_zyq/article/details/82178152

https://www.cnblogs.com/liugp/p/12115945.html 

Git repository: https://github.com/kubernetes/dashboard

Official documentation: https://kubernetes.io/zh/docs/tasks/access-application-cluster/web-ui-dashboard/


Origin: blog.csdn.net/weixin_48154829/article/details/109168885