Installing K8s in China without a VPN, Part 3: Installing kubernetes-dashboard with Helm

This article follows the "frognew" (青蛙小白) blog below step by step; executed exactly as written, it completed without problems:
https://blog.frognew.com/2019/07/kubeadm-install-kubernetes-1.15.html

3 Installing kubernetes-dashboard with Helm

3.1 Install Helm

$ curl -O https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
$ tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
$ cd linux-amd64/
$ cp helm /usr/local/bin/
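
As a quick sanity check before Tiller exists in the cluster, you can ask the helm binary for its client version only (the -c flag skips contacting the server side):

$ helm version -c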

Create a helm-rbac.yaml file:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Create the tiller ServiceAccount and bind the cluster-admin role to it:

$ kubectl create -f helm-rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created

Deploy Tiller with helm init:

helm init --service-account tiller --skip-refresh
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

By default, Tiller is deployed into the kube-system namespace of the cluster:

kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv   1/1     Running   0          83s
helm version
Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"}

Note that this step needs network access to gcr.io and kubernetes-charts.storage.googleapis.com. If you cannot reach them, run helm init with --tiller-image pointing at a private mirror of the Tiller image, for example:

helm init --service-account tiller --tiller-image gcr.azk8s.cn/kubernetes-helm/tiller:v2.14.1 --skip-refresh

If you missed this step, you can still run kubectl edit deployment tiller-deploy -n kube-system and change the image from the default gcr.io address to a mirror; other gcr.io images can be handled the same way.
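
A non-interactive alternative is to patch the deployment with kubectl set image. This is a sketch assuming the container inside tiller-deploy is named tiller, which is the Helm v2 default:

$ kubectl set image deployment/tiller-deploy tiller=gcr.azk8s.cn/kubernetes-helm/tiller:v2.14.1 -n kube-system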

Finally, on node1, point the stable Helm chart repository at the mirror provided by Azure:

helm repo add stable http://mirror.azure.cn/kubernetes/charts
"stable" has been added to your repositories

helm repo list
NAME    URL                                     
stable  http://mirror.azure.cn/kubernetes/charts
local   http://127.0.0.1:8879/charts    
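
To verify the mirror is usable, you can search it for a chart used later in this article:

$ helm search nginx-ingress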

3.2 Deploy Nginx Ingress with Helm

We will use kub1 (192.168.15.174) as the edge node, so label it accordingly:

$ kubectl label node kub1 node-role.kubernetes.io/edge=
node/kub1 labeled
$ kubectl get node
NAME   STATUS   ROLES         AGE     VERSION
kub1   Ready    edge,master   6h43m   v1.15.2
kub2   Ready    <none>        6h36m   v1.15.2
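
Should you ever need to undo this, kubectl removes a label when its key is suffixed with a minus sign:

$ kubectl label node kub1 node-role.kubernetes.io/edge-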

Write the values for the stable/nginx-ingress chart to a file named ingress-nginx.yaml:

controller:
  replicaCount: 1
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - nginx-ingress
              - key: component
                operator: In
                values:
                  - controller
          topologyKey: kubernetes.io/hostname
  tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: PreferNoSchedule
defaultBackend:
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  tolerations:
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: PreferNoSchedule

Install nginx-ingress:

$ helm repo update
$ helm install stable/nginx-ingress \
-n nginx-ingress \
--namespace ingress-nginx  \
-f ingress-nginx.yaml
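
Once the release is deployed, you can confirm that the controller and default backend pods were scheduled onto the edge node:

$ kubectl get pod -n ingress-nginx -o wide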

If visiting http://192.168.15.174 returns the default backend response, the deployment is complete.
If the default backend pod's image cannot be pulled, fix it the same way as the Tiller image above, so the details are not repeated here.
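
A quick check from the command line works too; with no Ingress rules matching yet, the default backend should answer with its 404 page:

$ curl http://192.168.15.174
default backend - 404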

3.3 Deploy kubernetes-dashboard with Helm

Write the chart values to kubernetes-dashboard.yaml:

image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: frognew-com-tls-secret
      hosts:
        - k8s.frognew.com
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
rbac:
  clusterAdminRole: true

Note the hosts options above: since I was testing on a LAN, I simply deleted both hosts options before installing and accessed the dashboard by IP instead; it works the same.
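
If you do keep the hosts and tls options, the referenced secret has to exist before the Ingress can serve it. A minimal sketch, where tls.crt and tls.key stand in for your certificate and key files for k8s.frognew.com:

$ kubectl create secret tls frognew-com-tls-secret --cert=tls.crt --key=tls.key -n kube-system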

$ helm install stable/kubernetes-dashboard \
-n kubernetes-dashboard \
--namespace kube-system  \
-f kubernetes-dashboard.yaml
$ kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-5d5b2                 kubernetes.io/service-account-token   3      4h24m
$ kubectl describe -n kube-system secret/kubernetes-dashboard-token-5d5b2
Name:         kubernetes-dashboard-token-5d5b2
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 82c89647-1a1c-450f-b2bb-8753de12f104

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi01ZDViMiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjgyYzg5NjQ3LTFhMWMtNDUwZi1iMmJiLTg3NTNkZTEyZjEwNCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.UF2Fnq-SnqM3oAIwJFvXsW64SAFstfHiagbLoK98jWuyWDPoYyPQvdB1elRsJ8VWSzAyTyvNw2MD9EgzfDdd9_56yWGNmf4Jb6prbA43PE2QQHW69kLiA6seP5JT9t4V_zpjnhpGt0-hSfoPvkS4aUnJBllldCunRGYrxXq699UDt1ah4kAmq5MqhH9l_9jMtcPwgpsibBgJY-OD8vElITv63fP4M16DFtvig9u0EnIwhAGILzdLSkfwBJzLvC_ukii_2A9e-v2OZBlTXYgNQ1MnS7CvU8mu_Ycoxqs0r1kZ4MjlNOUOt6XFjaN8BlPwfEPf2VNx0b1ZgZv-euQQtA

Use the token above at the dashboard login window: open https://192.168.15.174, select Token as the sign-in method, and paste the token to log in.
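
If you prefer to pull the token out in one step, the following pipeline is a sketch of the two manual commands above (it assumes exactly one matching secret exists):

$ kubectl -n kube-system get secret $(kubectl -n kube-system get secret | awk '/kubernetes-dashboard-token/{print $1}') -o jsonpath='{.data.token}' | base64 -d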

3.4 Deploy metrics-server with Helm

Write the chart values to metrics-server.yaml:

args:
  - --logtostderr
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
nodeSelector:
  node-role.kubernetes.io/edge: ''
tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule

Install metrics-server:

$ helm install stable/metrics-server \
-n metrics-server \
--namespace kube-system \
-f metrics-server.yaml
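
metrics-server registers itself as an aggregated API, and it can take a minute before kubectl top starts working; you can watch for the API service to become available:

$ kubectl get apiservice v1beta1.metrics.k8s.io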

With metrics-server running, kubectl top reports basic resource metrics for the cluster's nodes and pods:

$ kubectl top node
NAME   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
kub1   433m         5%     2903Mi          37%       
kub2   101m         1%     1446Mi          18%  
$ kubectl top pod -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)   
coredns-5c98db65d4-7n4gm                7m           14Mi            
coredns-5c98db65d4-s5zfr                7m           14Mi            
etcd-kub1                               49m          72Mi            
kube-apiserver-kub1                     61m          219Mi           
kube-controller-manager-kub1            36m          47Mi            
kube-flannel-ds-amd64-mssbt             5m           17Mi            
kube-flannel-ds-amd64-pb4dz             5m           15Mi            
kube-proxy-hc4kh                        1m           17Mi            
kube-proxy-rp4cx                        1m           18Mi            
kube-scheduler-kub1                     3m           15Mi            
kubernetes-dashboard-77f9fd6985-ctwmc   1m           23Mi            
metrics-server-75bfbbbf76-6blkn         4m           17Mi            
tiller-deploy-7dd9d8cd47-ztl7w          1m           12Mi   

Source: https://www.cnblogs.com/bugutian/p/11366556.html