Kubernetes Dashboard is a full-featured web interface for managing Kubernetes clusters, providing a UI alternative to command-line tools such as kubectl for most day-to-day tasks.
Deploy
The image that Dashboard requires, k8s.gcr.io/kubernetes-dashboard, may be unreachable for network reasons; you can either pull and tag the image in advance, or modify the image address in the yaml file. This article uses the latter approach:
kubectl apply -f http://mirror.faasx.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
The yaml used above simply replaces k8s.gcr.io in https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml with reg.qiniu.com/k8s.
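If you prefer not to depend on the mirrored yaml file, the same registry substitution can be done on the fly. The pipeline below is a sketch using the upstream manifest URL and the reg.qiniu.com/k8s mirror mentioned above:

```shell
# Fetch the upstream manifest, swap the registry, and apply in one pipeline.
curl -sL https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml \
  | sed 's|k8s.gcr.io|reg.qiniu.com/k8s|g' \
  | kubectl apply -f -
```

This keeps you on the upstream manifest while still pulling images from a reachable registry.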
You can then use the kubectl get pods command to view the deployment status:
kubectl get pods --all-namespaces
# output
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   kubernetes-dashboard-7d5dcdb6d9-mf6l2   1/1       Running   0          9m
If we want to access the Dashboard locally, we need to create a secure channel to the cluster, which we can do by running the following command:
kubectl proxy
The Dashboard UI can now be accessed via http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ .
Create user
The URL above takes us to the login page, so we first need to create a user:
1. Create a service account
First create a service account called admin-user and place it under the kube-system namespace:
# admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
Execute the kubectl create command:
kubectl create -f admin-user.yaml
2. Bind the role
By default, kubeadm creates the cluster-admin role when the cluster is created, so we can bind our new user to it directly:
# admin-user-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
Execute the kubectl create command:
kubectl create -f admin-user-role-binding.yaml
3. Get Token
Now we need to find the token of the newly created user, which we will use to log into the Dashboard:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
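If you only want the token itself, a jsonpath query avoids sifting through the describe output. A sketch, assuming the secret name starts with admin-user-token as in the output below:

```shell
# Find the admin-user token secret and print just the decoded token.
SECRET_NAME=$(kubectl -n kube-system get secret -o name | grep admin-user-token)
kubectl -n kube-system get "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 -d
```

Note that secret data fields returned by `kubectl get` are base64-encoded, hence the `base64 -d`; `kubectl describe` prints the token already decoded.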
The output is similar:
Name:         admin-user-token-qrj82
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name=admin-user
              kubernetes.io/service-account.uid=6cd60673-4d13-11e8-a548-00155d000529
Type:         kubernetes.io/service-account-token
Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXFyajgyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2Y2Q2MDY3My00ZDEzLTExZTgtYTU0OC0wMDE1NWQwMDA1MjkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.C5mjsa2uqJwjscWQ9x4mEsWALUTJu3OSfLYecqpS1niYXxp328mgx0t-QY8A7GQvAr5fWoIhhC_NOHkSkn2ubn0U22VGh2msU6zAbz9sZZ7BMXG4DLMq3AaXTXY8LzS3PQyEOCaLieyEDe-tuTZz4pbqoZQJ6V6zaKJtE9u6-zMBC2_iFujBwhBViaAP9KBbE5WfREEc0SQR9siN8W8gLSc8ZL4snndv527Pe9SxojpDGw6qP_8R-i51bP2nZGlpPadEPXj-lQqz4g5pgGziQqnsInSMpctJmHbfAh7s9lIMoBFW7GVE8AQNSoLHuuevbLArJ7sHriQtDB76_j4fmA
ca.crt: 1025 bytes
namespace: 11 bytes
Then copy the token into the Token input box on the login page. After logging in, the Dashboard looks like this:
Integrated Heapster
Heapster is a container cluster monitoring and performance analysis tool that naturally supports Kubernetes and CoreOS.
Heapster supports a variety of storage backends; this example uses influxdb. Just execute the following commands directly:
kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/influxdb/influxdb.yaml
kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/influxdb/grafana.yaml
kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/influxdb/heapster.yaml
kubectl create -f http://mirror.faasx.com/kubernetes/heapster/deploy/kube-config/rbac/heapster-rbac.yaml
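Heapster also backs the kubectl top subcommand, so once the pods above are Running, resource metrics should be queryable from the command line as well (it may take a minute or two for the first samples to arrive):

```shell
# Node- and pod-level CPU/memory usage, served by Heapster.
kubectl top nodes
kubectl top pods --namespace=kube-system
```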
The yaml files used in the above commands are copied from https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb, with k8s.gcr.io replaced by a domestic mirror.
Then, check the status of the Pod:
raining@raining-ubuntu:~/k8s/heapster$ kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
...
heapster-5869b599bd-kxltn 1/1 Running 0 5m
monitoring-grafana-679f6b46cb-xxsr4 1/1 Running 0 5m
monitoring-influxdb-6f875dc468-7s4xz 1/1 Running 0 6m
...
Wait for the status to change to Running, then refresh the browser; the latest effect is as follows:
For more detailed usage of Heapster, please refer to the official documentation: https://github.com/kubernetes/heapster .
Access
Kubernetes provides the following four ways to access services:
kubectl proxy
In the example above, we used kubectl proxy, which creates a proxy between your machine and the Kubernetes API server. By default, it is only accessible locally (from the machine it was launched on).
We can use the kubectl cluster-info command to check whether the configuration is correct and the cluster is reachable:
raining@raining-ubuntu:~$ kubectl cluster-info
Kubernetes master is running at https://192.168.0.8:6443
Heapster is running at https://192.168.0.8:6443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://192.168.0.8:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
monitoring-grafana is running at https://192.168.0.8:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
monitoring-influxdb is running at https://192.168.0.8:6443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
To start the proxy, just execute the following command:
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
We can also use the --address and --accept-hosts parameters to allow external access:
kubectl proxy --address='0.0.0.0' --accept-hosts='^*$'
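Accepting every host on every interface is convenient but wide open. A tighter sketch binds the proxy to one interface and accepts only hosts matching a regex; the 192.168.0.x addresses follow this article's example network and should be adjusted to yours:

```shell
# Bind the proxy to the master's LAN address and accept only hosts
# on the local 192.168.0.x subnet (adjust both to your environment).
kubectl proxy --address='192.168.0.8' --accept-hosts='^192\.168\.0\..*$'
```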
Then, visiting http://<master-ip>:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ from an external machine will show the login page, but we cannot log in. This is because Dashboard only allows HTTP connections from localhost and 127.0.0.1; all other addresses must use HTTPS. Therefore, if you need to access the Dashboard from a non-local machine, you must choose one of the other access methods.
NodePort
NodePort is a way to directly expose nodes to the external network. It is only recommended for development environments and single-node installations.
Enabling NodePort is easy; just execute the kubectl edit command to edit the service:
kubectl -n kube-system edit service kubernetes-dashboard
The output is as follows:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
  creationTimestamp: 2018-05-01T07:23:41Z
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "1750"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 9329577a-4d10-11e8-a548-00155d000529
spec:
  clusterIP: 10.103.5.139
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
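Instead of editing the service interactively, the same one-field change can be applied non-interactively with kubectl patch (a sketch of the equivalent command):

```shell
# Switch the Dashboard service from ClusterIP to NodePort in one command.
kubectl -n kube-system patch service kubernetes-dashboard \
  -p '{"spec":{"type":"NodePort"}}'
```

This is handy in scripts, where opening an editor is not an option.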
Then modify type: ClusterIP above to type: NodePort. After saving, use the kubectl get service command to view the automatically assigned port:
kubectl -n kube-system get service kubernetes-dashboard
The output is as follows:
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.103.5.139   <none>        443:31795/TCP   4h
As shown above, Dashboard has been exposed on port 31795 and can now be accessed externally at https://<node-ip>:31795. Note that in a multi-node cluster, you must use the IP of the node actually running the Dashboard, not necessarily the IP of the master node. In this article's example, I have deployed two servers: the master IP is 192.168.0.8 and the node IP is 192.168.0.10.
But the result of the last visit may be as follows:
Unfortunately, due to certificate issues, we cannot access it; we would need to specify a valid certificate when deploying Dashboard. Since NodePort is not recommended for accessing Dashboard in a production environment, I won't go into more detail here. For how to configure certificates for Dashboard, please refer to: Certificate management.
API Server
If the Kubernetes API server is public and accessible from outside, we can access the Dashboard directly through the API server, which is also the recommended way.
The Dashboard access address is https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/ , but the returned result may be as follows:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:anonymous\" cannot get services/proxy in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "https:kubernetes-dashboard:",
    "kind": "services"
  },
  "code": 403
}
This is because recent versions of Kubernetes enable RBAC by default and give unauthenticated users the default identity anonymous.
The API server authenticates clients using a certificate, so we need to create one first:
1. First find the configuration file of the kubectl command, which by default is /etc/kubernetes/admin.conf; in the previous article, we copied it to $HOME/.kube/config.
2. Then use the client-certificate-data and client-key-data fields to generate a p12 file with the following commands:
# extract client-certificate-data
grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.crt
# extract client-key-data
grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -d >> kubecfg.key
# generate the p12 file
openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"
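Before importing the p12 into the browser, it can be worth sanity-checking the extracted credential. A sketch; the curl line assumes this article's API server address 192.168.0.8:6443:

```shell
# Inspect the client certificate's subject and validity window.
openssl x509 -in kubecfg.crt -noout -subject -dates

# Optionally confirm the API server accepts the credential
# (-k skips CA verification; prefer --cacert with the cluster CA in production).
curl --cert kubecfg.crt --key kubecfg.key -k https://192.168.0.8:6443/api
```

If the curl call returns the API versions instead of a 403, the certificate is good and the browser import should work too.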
3. Finally, import the p12 file generated above into your browser, reopen the browser, and you will see the following:
Click OK to see the familiar login interface:
We can log in using the token of the admin-user we created at the beginning, and everything works.
For production systems, we should generate a separate certificate for each user, because different users have different namespace access rights.
Ingress
Ingress integrates open-source reverse proxy load balancers (such as Nginx, Apache, HAProxy, etc.) with Kubernetes and can dynamically update their configuration. It is a more flexible and recommended way to expose services, but it is also relatively complicated, and will be introduced in a later article.