[Kubernetes] Deploying the K8s Dashboard v1.10.1

First, an overview of the official kubernetes-dashboard.yaml

① To see what the official kubernetes-dashboard.yaml contains, first download it:

[root@K8s-Master test]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml

The file is divided into the following sections:
Dashboard Secret
Dashboard Service Account
Dashboard Role & Role Binding
Dashboard Deployment
Dashboard Service
Below is a brief description of what each part does:

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---

The above defines the Dashboard user: a ServiceAccount named kubernetes-dashboard.
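To confirm the account exists once the manifest is applied, a quick read-only check (using the kube-system namespace defined above):

kubectl -n kube-system get serviceaccount kubernetes-dashboard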

# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---

The above defines the Dashboard Role, named kubernetes-dashboard-minimal; its rules list exactly which permissions it has. As the name suggests, this is a fairly low level of privilege.
It also defines the Dashboard RoleBinding, likewise named kubernetes-dashboard-minimal; its roleRef points to the Role of the same name, and its subjects bind it to the ServiceAccount kubernetes-dashboard.
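If you want to double-check what the binding grants after it is created, the objects can be inspected directly (names as defined in the manifest above):

kubectl -n kube-system describe role kubernetes-dashboard-minimal
kubectl -n kube-system describe rolebinding kubernetes-dashboard-minimal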

# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---

As you can see, the Dashboard Deployment specifies the ServiceAccount kubernetes-dashboard, and it mounts the Secret kubernetes-dashboard-certs into the pod at /certs through a volume. Why mount a Secret? Because a token is generated automatically when the Secret is created. Also note the --auto-generate-certificates argument, which means the Dashboard will generate its certificate automatically.
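If you want to verify the secret and the mount after deployment, two read-only checks (object names as defined in the manifest above):

kubectl -n kube-system get secret kubernetes-dashboard-certs
kubectl -n kube-system describe deployment kubernetes-dashboard | grep -A 3 Mounts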

# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

Second, deploying the Dashboard

kubectl create -f kubernetes-dashboard.yaml

Reinstall the dashboard

kubectl delete -f kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml

Check the status of the running Pod to confirm the dashboard has been deployed successfully:

kubectl get pod --namespace=kube-system -o wide | grep dashboard
kubectl get pods --all-namespaces

The Dashboard creates its own Deployment and Service in the kube-system namespace:

kubectl get deployment kubernetes-dashboard --namespace=kube-system
kubectl get service kubernetes-dashboard --namespace=kube-system

Error encountered:

After the operation completed, the pod ended up in an Error or CrashLoopBackOff state.

Use the following command to find the cause of the error:

kubectl --namespace=kube-system describe pod <pod_name>

It turned out the pod had been scheduled onto the slave node k8s-node1; the dashboard needs to run on the master node.
Add a label to the master node with the following command:

kubectl label node k8s-master type=master

Then add a nodeSelector to the Deployment in kubernetes-dashboard.yaml, as sketched below:
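A minimal sketch of the addition, assuming the type=master label applied above; it goes in the Deployment's pod template spec, at the same level as containers:

      # in the pod template spec of the kubernetes-dashboard Deployment
      nodeSelector:
        type: master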

After the configuration was complete, I reinstalled the dashboard and the problem was resolved.

Third, accessing the Dashboard

According to the official documentation, there are currently four ways to access the Dashboard:
①NodePort
②kubectl Proxy
③API Server
④Ingress
Of the four ways above, I tested the first two; both NodePort and kubectl proxy currently work.
① Using NodePort
After changing the Service in kubernetes-dashboard.yaml to type NodePort (a sketch of the change follows), you can access the Dashboard through the NodePort. On the physical machine, open https://192.168.56.101:30001/ in Firefox and the Dashboard login page is displayed.
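A sketch of the modified Service, assuming nodePort 30001 to match the URL above (if nodePort is omitted, Kubernetes assigns one from the 30000-32767 range):

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard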

Choose the Token option on the login page and obtain a token to log in:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard | awk '{print $1}')


② Using kubectl proxy
Here I mainly introduce the most convenient way: kubectl proxy. Run kubectl proxy on the Master, then use the following address to access the Dashboard:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

However, by default access is restricted to the Master itself, which is obviously a pitfall: our goal is to access the Master's Dashboard from the real physical machine.
So, on the master node, we run kubectl proxy --address=192.168.56.101 --disable-filter=true to open the proxy.
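The same command as a standalone line, with the default port 8001 made explicit (--port is optional):

kubectl proxy --address=192.168.56.101 --port=8001 --disable-filter=true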

Where:
--address=192.168.56.101 is the address the outside world can use to reach the Dashboard; 0.0.0.0 also works.
--disable-filter=true disables request filtering; otherwise our request would be rejected with Forbidden (403) Unauthorized.
We can also specify the port; see kubectl proxy --help for details.
By default the proxy listens on port 8001 of the Master,
so we can use the following address to reach the login screen:

http://192.168.56.101:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login

Problem encountered: the token obtained above is rejected as invalid at login

Recall the third part of kubernetes-dashboard.yaml described earlier, the Dashboard Role & Role Binding, and it becomes clear why the Role is named kubernetes-dashboard-minimal. In short, the Role simply does not have enough permissions!
Therefore, we can change the RoleBinding into a ClusterRoleBinding and modify the kind and name in roleRef to use the well-known cluster-admin ClusterRole (superuser privileges with full access to kube-apiserver). As shown below:
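A sketch of the modified binding, keeping the original name and pointing roleRef at the cluster-admin ClusterRole (a ClusterRoleBinding is cluster-scoped, so its metadata has no namespace):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-minimal
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system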

After making this change, re-apply kubernetes-dashboard.yaml; the Dashboard then has access to the entire Kubernetes cluster API.
