Kubernetes Dashboard deployment

Kubernetes Dashboard is a fully functional web UI for Kubernetes cluster management, intended to replace the command-line tool (kubectl) for most day-to-day tasks.

Table of Contents

  1. Deploy
  2. Create a user
  3. Integrate Heapster
  4. Access

Deploy

The Dashboard uses the k8s.gcr.io/kubernetes-dashboard image. Because that registry can be unreachable for network reasons, you can either pre-pull the image or change the image address in the YAML file. The latter approach is used here:

kubectl apply -f http://mirror.faasx.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

The YAML used above is identical to https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml, except that k8s.gcr.io has been replaced with reg.qiniu.com/k8s.
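
If you prefer to do the replacement yourself, a pipeline along these lines should work; this is only a sketch and assumes the mirror registry reg.qiniu.com/k8s actually hosts the kubernetes-dashboard image:

# Download the official manifest, swap the registry, and apply in one step
curl -sSL https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml \
  | sed 's#k8s.gcr.io#reg.qiniu.com/k8s#g' \
  | kubectl apply -f -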

You can then use the kubectl get pods command to check the deployment status:

kubectl get pods --all-namespaces

# Output
NAMESPACE     NAME                                      READY     STATUS    RESTARTS   AGE
kube-system   kubernetes-dashboard-7d5dcdb6d9-mf6l2     1/1       Running   0          9m
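
If the pod is not Running yet, you can watch just the Dashboard pod until it comes up; this is a sketch that relies on the k8s-app=kubernetes-dashboard label set by the recommended manifest:

# Watch the Dashboard pod until its status becomes Running (Ctrl+C to stop)
kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard -w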

If you want to access the Dashboard locally, you need to create a secure channel by running the following command:

kubectl proxy

Now you can access the Dashboard UI at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/.

[Image: k8s-dashboard-login]

Create a user

As shown above, we land on the login page, so we first need to create a user:

1. Create a service account

First, create a service account named admin-user in the kube-system namespace:

# admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

Execute the kubectl create command:

kubectl create -f admin-user.yaml

2. Bind a role

By default, a cluster created with kubeadm already has the cluster-admin ClusterRole, so we can bind our new service account directly to it:

# admin-user-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Execute the kubectl create command:

kubectl create -f  admin-user-role-binding.yaml
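
For reference, both objects above can also be created imperatively, without writing YAML files; a sketch:

# Create the service account and the cluster-admin binding in one go
kubectl -n kube-system create serviceaccount admin-user
kubectl create clusterrolebinding admin-user \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:admin-user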

3. Access Token

Now we need to find the token of the newly created user in order to log in to the Dashboard:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

The output is similar to:

Name:         admin-user-token-qrj82
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin-user
              kubernetes.io/service-account.uid=6cd60673-4d13-11e8-a548-00155d000529

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXFyajgyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2Y2Q2MDY3My00ZDEzLTExZTgtYTU0OC0wMDE1NWQwMDA1MjkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.C5mjsa2uqJwjscWQ9x4mEsWALUTJu3OSfLYecqpS1niYXxp328mgx0t-QY8A7GQvAr5fWoIhhC_NOHkSkn2ubn0U22VGh2msU6zAbz9sZZ7BMXG4DLMq3AaXTXY8LzS3PQyEOCaLieyEDe-tuTZz4pbqoZQJ6V6zaKJtE9u6-zMBC2_iFujBwhBViaAP9KBbE5WfREEc0SQR9siN8W8gLSc8ZL4snndv527Pe9SxojpDGw6qP_8R-i51bP2nZGlpPadEPXj-lQqz4g5pgGziQqnsInSMpctJmHbfAh7s9lIMoBFW7GVE8AQNSoLHuuevbLArJ7sHriQtDB76_j4fmA
ca.crt:     1025 bytes
namespace:  11 bytes
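
If you only want the token string itself, a jsonpath-based one-liner like the following should also work (a sketch; the secret name suffix differs on every cluster, so it is looked up the same way as above):

# Print just the decoded token of the admin-user secret
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d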

Then copy the token into the input box on the login screen and sign in, as shown below:

[Image: k8s-overview]

Integrate Heapster

Heapster is a container cluster monitoring and performance analysis tool with native support for Kubernetes and CoreOS.

Heapster supports a variety of storage backends; this example uses InfluxDB. Execute the following commands directly:

kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/influxdb/influxdb.yaml
kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/influxdb/grafana.yaml
kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/influxdb/heapster.yaml
kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/rbac/heapster-rbac.yaml

The YAML files used above are copied from https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb, with k8s.gcr.io changed to a domestic mirror.

Then check the Pod status:

raining@raining-ubuntu:~/k8s/heapster$ kubectl get pods --namespace=kube-system
NAME                                      READY     STATUS    RESTARTS   AGE
...
heapster-5869b599bd-kxltn                 1/1       Running   0          5m
monitoring-grafana-679f6b46cb-xxsr4       1/1       Running   0          5m
monitoring-influxdb-6f875dc468-7s4xz      1/1       Running   0          6m
...

Wait for the status to become Running, then refresh your browser; the updated result looks like this:

[Image: k8s-heapsterng]
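
Besides the graphs in the Dashboard, you can check from the command line that metrics are actually being collected; in the Kubernetes versions used in this article, kubectl top is backed by Heapster (a sketch):

# Resource usage per node and per Pod, served by Heapster
kubectl top nodes
kubectl top pods -n kube-system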

For more detailed Heapster usage, refer to the official documentation: https://github.com/kubernetes/heapster.

Access

Kubernetes provides the following four ways to access services:

kubectl proxy

In the example above we used kubectl proxy. It creates a proxy between your machine and the Kubernetes API server and, by default, only accepts access from the machine it was started on.

We can use the kubectl cluster-info command to check that the configuration is correct and the cluster is reachable:

raining@raining-ubuntu:~$ kubectl cluster-info
Kubernetes master is running at https://192.168.0.8:6443
Heapster is running at https://192.168.0.8:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://192.168.0.8:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at https://192.168.0.8:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://192.168.0.8:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

To start the proxy, simply execute the following command:

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

We can also use the --address and --accept-hosts parameters to allow external access:

kubectl proxy --address='0.0.0.0'  --accept-hosts='^*$'

Then, from another machine, visiting http://<master-ip>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ does bring up the login screen, but we cannot sign in, because the Dashboard only allows HTTP access from localhost and 127.0.0.1; every other address must use HTTPS. So if you need to access the Dashboard from a machine other than the one running the proxy, you have to choose another access method.
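
If you still want to rely on kubectl proxy from another machine, one common workaround (not covered in the steps above, just a sketch) is to forward the proxy port over SSH so the connection still looks local to the Dashboard; the user and IP below are the ones from this article's environment:

# Forward local port 8001 to the proxy running on the master
ssh -N -L 8001:127.0.0.1:8001 raining@192.168.0.8
# Then open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ locally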

NodePort

NodePort exposes a node's port directly to the external network. It is only recommended for single-node development environments.

Enabling NodePort is very simple; just run the kubectl edit command:

kubectl -n kube-system edit service kubernetes-dashboard

The output is as follows:

apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: 2018-05-01T07:23:41Z
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "1750"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 9329577a-4d10-11e8-a548-00155d000529
spec:
  clusterIP: 10.103.5.139
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

Then change type: ClusterIP above to type: NodePort and save. Use the kubectl get service command to view the automatically assigned port:

kubectl -n kube-system get service kubernetes-dashboard

The output is as follows:

NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.103.5.139   <none>        443:31795/TCP   4h

As shown above, the Dashboard has been exposed on port 31795 and can now be accessed externally at https://<node-ip>:31795. Note that in a multi-node cluster you must use the IP of the node that actually runs the Dashboard, not the IP of the master node. In the example in this article I deployed two servers: the master IP is 192.168.0.8 and the node IP is 192.168.0.10.
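
As an aside, the same type change can also be made non-interactively, which is handy in scripts; this kubectl patch one-liner is a sketch and is not needed if you already used kubectl edit:

# Switch the Dashboard service to NodePort without opening an editor
kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'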

However, the access attempt may end up like this:

[Image: k8s-dashboard-nodeport-notsecure]

Unfortunately, due to certificate issues we cannot get in: a valid certificate must be configured when deploying the Dashboard before it can be accessed this way. Since NodePort is not recommended for accessing the Dashboard in a production environment anyway, I will not go into how to configure a certificate for it here; you can refer to: Certificate Management.

API Server

If the Kubernetes API server is exposed and reachable from outside, we can access the Dashboard directly through the API server, which is a more recommended approach.

The Dashboard access address is https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/, but the following result may be returned:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:anonymous\" cannot get services/proxy in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "https:kubernetes-dashboard:",
    "kind": "services"
  },
  "code": 403
}

This is because recent versions of Kubernetes enable RBAC by default and give unauthenticated users a default identity: anonymous.

The API server uses client certificates for authentication, so we need to create a certificate:

1. First, find the kubectl configuration file. By default this is /etc/kubernetes/admin.conf; in a previous step we copied it to $HOME/.kube/config.

2. Then use client-certificate-data and client-key-data to generate a p12 file, with the following commands:

# Extract client-certificate-data
grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt

# Extract client-key-data
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key

# Generate the p12 file
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

3. Finally, import the p12 file generated above into your browser and reopen the browser; a prompt appears as shown below:

[Image: k8s-api-server-select-certificate]

Click OK, and you will see the familiar login screen:

[Image: k8s-api-server-login]

We can log in with the token of the admin-user account created at the beginning, and everything works.
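
If you also want to verify the certificate pair from the command line, curl can present the same key and certificate to the API server; this is a sketch that uses the files generated above and skips server certificate verification with -k for brevity:

# Call the Dashboard proxy endpoint through the API server using the client certificate
curl --cert kubecfg.crt --key kubecfg.key -k \
  https://192.168.0.8:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/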

For production systems, we should generate a separate certificate for each user, because different users have access to different namespaces.
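
As an illustration of what that can look like, the following sketch signs a certificate for a hypothetical user jane in group dev with the cluster CA; the CA paths are the kubeadm defaults on the master, and you would still create a Role/RoleBinding scoped to her namespace afterwards:

# Generate a key and certificate signing request for the user (names are hypothetical)
openssl genrsa -out jane.key 2048
openssl req -new -key jane.key -out jane.csr -subj "/CN=jane/O=dev"

# Sign the CSR with the cluster CA (kubeadm default paths)
openssl x509 -req -in jane.csr \
  -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key \
  -CAcreateserial -out jane.crt -days 365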

Ingress

Ingress integrates open-source reverse proxies / load balancers (such as Nginx, Apache, HAProxy, etc.) with Kubernetes and can dynamically update their configuration (for example Nginx's). It is a more flexible and more commonly recommended way to expose services, but it is also relatively complex, and will be introduced later.
