Deploying the Kubernetes Dashboard with Helm

Kubernetes Dashboard is a web-based UI for managing Kubernetes clusters. The code is hosted on GitHub: https://github.com/kubernetes/dashboard

Installation

kubernetes-dashboard.yaml:

image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.hongda.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: hongda-com-tls-secret
      hosts:
        - k8s.hongda.com
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
rbac:
  clusterAdminRole: true

Compared with the chart's default values, the following items are modified:

  • ingress.enabled - set to true to create an Ingress that exposes the Kubernetes Dashboard service, so it can be reached from a browser
  • ingress.annotations - an ingress.class of nginx directs this Ingress to the Nginx Ingress Controller installed earlier, which reverse-proxies the Dashboard. Because the Dashboard backend listens on HTTPS while the Nginx Ingress Controller forwards requests to backends over plain HTTP by default, the nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" annotation (the successor to the older secure-backends annotation) tells the controller to forward over HTTPS instead
  • ingress.hosts - replace with the domain name the certificate was issued for
  • ingress.tls - secretName is the name of the Secret holding the free certificate generated by cert-manager; hosts must again match the certificate's domain (a sketch of such a Certificate resource follows this list)
  • rbac.clusterAdminRole - set to true so the dashboard has permissions broad enough to conveniently operate across multiple namespaces
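
For reference, here is a minimal sketch of a cert-manager Certificate that would produce the hongda-com-tls-secret referenced above. This is hypothetical: it assumes cert-manager is already installed and that a ClusterIssuer named letsencrypt-prod exists, and the apiVersion shown is the API group used by cert-manager releases of that era.

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: hongda-com-tls              # hypothetical resource name
  namespace: kube-system            # same namespace as the dashboard release
spec:
  secretName: hongda-com-tls-secret # must match ingress.tls.secretName
  issuerRef:
    name: letsencrypt-prod          # hypothetical ClusterIssuer name
    kind: ClusterIssuer
  commonName: k8s.hongda.com
  dnsNames:
    - k8s.hongda.com                # must match ingress.hosts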

Command to install:

helm install stable/kubernetes-dashboard \
-n kubernetes-dashboard \
--namespace kube-system  \
-f kubernetes-dashboard.yaml
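
Note that -n here is Helm 2 syntax for naming the release. On Helm 3, where -n instead selects the namespace, the equivalent command would be:

helm install kubernetes-dashboard stable/kubernetes-dashboard \
--namespace kube-system \
-f kubernetes-dashboard.yaml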

Output:

[root@master /]# helm install stable/kubernetes-dashboard \
> -n kubernetes-dashboard \
> --namespace kube-system  \
> -f kubernetes-dashboard.yaml
NAME:   kubernetes-dashboard
LAST DEPLOYED: Mon Jul 29 16:14:20 2019
NAMESPACE: kube-system
STATUS: DEPLOYED

RESOURCES:
==> v1/Pod(related)
NAME                                   READY  STATUS             RESTARTS  AGE
kubernetes-dashboard-64f97ccb4f-nbpkx  0/1    ContainerCreating  0         <invalid>

==> v1/Secret
NAME                  TYPE    DATA  AGE
kubernetes-dashboard  Opaque  0     <invalid>

==> v1/Service
NAME                  TYPE       CLUSTER-IP      EXTERNAL-IP  PORT(S)  AGE
kubernetes-dashboard  ClusterIP  10.101.156.153  <none>       443/TCP  <invalid>

==> v1/ServiceAccount
NAME                  SECRETS  AGE
kubernetes-dashboard  1        <invalid>

==> v1beta1/ClusterRoleBinding
NAME                  AGE
kubernetes-dashboard  <invalid>

==> v1beta1/Deployment
NAME                  READY  UP-TO-DATE  AVAILABLE  AGE
kubernetes-dashboard  0/1    1           0          <invalid>

==> v1beta1/Ingress
NAME                  HOSTS            ADDRESS  PORTS  AGE
kubernetes-dashboard  k8s.frognew.com  80, 443  <invalid>


NOTES:
*********************************************************************************
*** PLEASE BE PATIENT: kubernetes-dashboard may take a few minutes to install ***
*********************************************************************************
From outside the cluster, the server URL(s) are:
     https://k8s.frognew.com

View the login token:

[root@master /]# kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-mmr4w                 kubernetes.io/service-account-token   3      18s
[root@master /]# kubectl describe -n kube-system secret/kubernetes-dashboard-token-mmr4w
Name:         kubernetes-dashboard-token-mmr4w
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 03b7dd9a-6f40-4f20-9a0d-7808158c7225

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1tbXI0dyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjAzYjdkZDlhLTZmNDAtNGYyMC05YTBkLTc4MDgxNThjNzIyNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.baCCzlyMQiJ-cXsFrR8wR7iIN4eYKfNhoMTOJFK4Qc-jcE89zc2LC8Jg5TtuSzU89VwsOGd2bPzwhNm3w0rOJCdDuMUdhrYQwk4n25K4uMs0BTnRKVM6JZCplJxYd4E7MBftKFLuvOl0efLm3xFeBB_DUS-iHJJNAnFGVAg0Lr5Ea55fstzKumRL9Xl0eckVS6L9QI7mSniiMid1lMElq2xKgjdlk4UwV6ODI9hDS1eo3lZ80pRRcCogAuhCiqjSzj1FXjXaRl9fzm0udK0hPdBVNBAoyVKaM-IULlGudeQYe6Brk1lMf-f3d1J0fTjYwgUsv-1RhehIdUwRKp20MA
[root@master /]# 
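
The two steps above can be combined into one command that looks up the token secret and describes it in one go (a sketch, assuming the release lives in kube-system as above):

kubectl -n kube-system describe secret \
$(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/{print $1}')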

View pods:

[root@master /]# kubectl get pods -n kube-system -o wide
NAME                                    READY   STATUS             RESTARTS   AGE    IP              NODE      NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-gts57                1/1     Running            1          3d6h   10.244.2.2      slaver2   <none>           <none>
coredns-5c98db65d4-qhwrw                1/1     Running            1          3d6h   10.244.1.2      slaver1   <none>           <none>
etcd-master                             1/1     Running            2          3d6h   18.16.202.163   master    <none>           <none>
kube-apiserver-master                   1/1     Running            2          3d6h   18.16.202.163   master    <none>           <none>
kube-controller-manager-master          1/1     Running            6          3d6h   18.16.202.163   master    <none>           <none>
kube-flannel-ds-amd64-2lwl8             1/1     Running            0          3d1h   18.16.202.227   slaver1   <none>           <none>
kube-flannel-ds-amd64-9bjck             1/1     Running            0          3d1h   18.16.202.95    slaver2   <none>           <none>
kube-flannel-ds-amd64-gxxqg             1/1     Running            0          3d1h   18.16.202.163   master    <none>           <none>
kube-proxy-8cwj4                        1/1     Running            0          107m   18.16.202.163   master    <none>           <none>
kube-proxy-j9zpz                        1/1     Running            0          107m   18.16.202.227   slaver1   <none>           <none>
kube-proxy-vfgjv                        1/1     Running            0          107m   18.16.202.95    slaver2   <none>           <none>
kube-scheduler-master                   1/1     Running            6          3d6h   18.16.202.163   master    <none>           <none>
kubernetes-dashboard-64f97ccb4f-nbpkx   0/1     ImagePullBackOff   0          33m    10.244.0.4      master    <none>           <none>
tiller-deploy-6787c946f8-6b5tv          1/1     Running            0          44m    10.244.1.4      slaver1   <none>           <none>

The problem

Check the chart version available in the repository:

[root@master /]# helm search kubernetes-dashboard
NAME                        CHART VERSION   APP VERSION DESCRIPTION                                   
stable/kubernetes-dashboard 0.6.0           1.8.3       General-purpose web UI for Kubernetes clusters

This looks like a version mismatch: the latest app version in the Aliyun chart repository is 1.8.3, while the Helm values file configures v1.10.1, so the expected image cannot be pulled.
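
One way to confirm what image a chart deploys by default is to inspect its values before installing (Helm 2 syntax; the grep is just to narrow the output):

helm inspect values stable/kubernetes-dashboard | grep -A 2 'image:'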

Add a new chart repository:

[root@master /]# helm repo add stable http://mirror.azure.cn/kubernetes/charts/
"stable" has been added to your repositories
[root@master /]# helm search kubernetes-dashboard
NAME                        CHART VERSION   APP VERSION DESCRIPTION                                   
stable/kubernetes-dashboard 1.8.0           1.10.1      General-purpose web UI for Kubernetes clusters
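
After adding or switching repositories, refresh the local chart index so the newer chart version is actually picked up:

helm repo update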

After switching repositories and reinstalling, the same problem occurs. Inspect the cluster:

[root@master /]# kubectl get namespace
NAME              STATUS   AGE
default           Active   3d8h
ingress-nginx     Active   152m
kube-node-lease   Active   3d8h
kube-public       Active   3d8h
kube-system       Active   3d8h

[root@master /]# kubectl describe pod kubernetes-dashboard-7ffdf885d6-t4htt -n kube-system
Name:           kubernetes-dashboard-7ffdf885d6-t4htt
Namespace:      kube-system
Priority:       0
Node:           master/18.16.202.163
Start Time:     Wed, 31 Jul 2019 16:46:40 +0800
Labels:         app=kubernetes-dashboard
                kubernetes.io/cluster-service=true
                pod-template-hash=7ffdf885d6
                release=kubernetes-dashboard
Annotations:    <none>
Status:         Pending
IP:             10.244.0.20
Controlled By:  ReplicaSet/kubernetes-dashboard-7ffdf885d6
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
    Image ID:      
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:        100m
      memory:     50Mi
    Liveness:     http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-pph4g (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kubernetes-dashboard-token-pph4g:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-pph4g
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  node-role.kubernetes.io/edge=
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node-role.kubernetes.io/master:PreferNoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m47s                default-scheduler  Successfully assigned kube-system/kubernetes-dashboard-7ffdf885d6-t4htt to master
  Normal   Pulling    89s (x4 over 3m45s)  kubelet, master    Pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3"
  Warning  Failed     74s (x4 over 3m30s)  kubelet, master    Failed to pull image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     74s (x4 over 3m30s)  kubelet, master    Error: ErrImagePull
  Normal   BackOff    61s (x6 over 3m30s)  kubelet, master    Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3"
  Warning  Failed     46s (x7 over 3m30s)  kubelet, master    Error: ImagePullBackOff

The pattern is obvious: every failing pull is for an image under the k8s.gcr.io registry, which is unreachable from this network, and no amount of retrying helps.
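
A quick way to confirm that the registry itself is unreachable from this host, independent of Docker (the short timeout mirrors the client timeout seen in the events above):

curl -m 10 -v https://k8s.gcr.io/v2/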

The solution

Pull the same image version from Docker Hub and re-tag it under the name the chart expects. Note that the image must be present on every node where the pod may be scheduled.

Pull:

docker pull sacred02/kubernetes-dashboard-amd64:v1.10.1

Re-tag:

docker tag sacred02/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

Remove the Docker Hub tag:

docker rmi sacred02/kubernetes-dashboard-amd64:v1.10.1

Install with Helm again:
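
Before reinstalling: if the previous failed release is still present, Helm 2 will refuse to reuse the release name, so it must be purged first (a sketch, assuming the release name used above):

helm delete --purge kubernetes-dashboard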

helm install stable/kubernetes-dashboard -n kubernetes-dashboard --namespace kube-system  -f kubernetes-dashboard.yaml

Verify:

[root@master /]# helm ls
NAME                    REVISION    UPDATED                     STATUS      CHART                       APP VERSION NAMESPACE    
kubernetes-dashboard    1           Wed Jul 31 17:11:35 2019    DEPLOYED    kubernetes-dashboard-1.8.0  1.10.1      kube-system  
nginx-ingress           1           Wed Jul 31 13:59:14 2019    DEPLOYED    nginx-ingress-1.11.5        0.25.0      ingress-nginx
 
[root@master /]# kubectl get pods -n kube-system |grep dashboard
kubernetes-dashboard-848b8dd798-p44qt   1/1     Running   0          5m2s

View the token:

[root@master /]# kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-4v624                 kubernetes.io/service-account-token   3      5m42s
[root@master /]# kubectl describe -n kube-system secret/kubernetes-dashboard-token-4v624
Name:         kubernetes-dashboard-token-4v624
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 6688cc3b-5f28-4e38-a37a-67c0927752ab

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi00djYyNCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY2ODhjYzNiLTVmMjgtNGUzOC1hMzdhLTY3YzA5Mjc3NTJhYiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.Wq6xvzLSJNnt9Zg9u5J-85RB0-Slf6HMFfHzNwDGJDn3Yc2lfxL88YXi0ForX4Q9F0v96nt_GNKOm6DB8FGoKR3cALeWpeuoXSSY_ryY8tj6KFN1mrOlvVnRRgsk_lReOxLZexvR58OQ7N04pDrZ6Okr3PDB22i-31xPaVPBt6BhZU5ee6VZyXr7y3pj8VAJSki7tnr7ZRlG6WJizrMf25sZ9xdznwcGJ7yGz2gD3moYhNKQa5KPwcLOGTfg3GuLUNoQjdz5wUmvx4X2YMhfj6Fx7I3mZzr9whrfhO2PWuNtFheaKscSg2UyIPH5Zav9WTSzXxDedORh8BjX3cUJcQ

Ping k8s.hongda.com:

[root@master /]# ping k8s.hongda.com
PING k8s.hongda.com (13.209.58.121) 56(84) bytes of data.
From 18.16.202.169 (18.16.202.169): icmp_seq=2 Redirect Network(New nexthop: 18.16.202.1 (18.16.202.1))
From 18.16.202.169 (18.16.202.169): icmp_seq=3 Redirect Network(New nexthop: 18.16.202.1 (18.16.202.1))
^C
--- k8s.hongda.com ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2002ms
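
The ping shows k8s.hongda.com resolving to an unrelated public address (13.209.58.121), so the name is not actually pointed at this cluster. For the dashboard URL to work, the domain must resolve to the node running the Nginx Ingress Controller; a sketch using a local hosts entry, assuming that is the master at 18.16.202.163:

echo "18.16.202.163 k8s.hongda.com" >> /etc/hosts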

References:

Installing Kubernetes 1.15 with kubeadm

Deploying the Kubernetes Dashboard with one Helm command and enabling free HTTPS
