I. First of all, I was following along with the Silicon Valley tutorial video when a problem came up during the steps. Video link: https://www.bilibili.com/video/av66617940/?p=58
First, a quick note on the deployment process.
Helm deployment
Helm consists of two components: the Helm client and the Tiller server, as shown in the figure.
The Helm client is responsible for creating and managing charts and releases, and for interacting with Tiller. The Tiller server runs inside the Kubernetes cluster; it handles the Helm client's requests and interacts with the Kubernetes API Server.
Process
1. The client (Helm Client) and the server (Tiller) communicate via the gRPC protocol.
2. The client (Helm Client) sends the corresponding instructions; the server (Tiller) receives them, resolves them into the corresponding resource data, and then interacts with the KubeAPI.
3. The KubeAPI receives the instructions and generates the corresponding data or resources.
4. The generated data or resources are written to etcd, and Kubernetes creates them once the request is accepted.
So the Helm deployment is divided into two parts:
1. Client (Helm Client) deployment
2. Server (Tiller) deployment
As the figure shows, the server (Tiller) is deployed into the K8S cluster, so the server (Tiller) is installed as a Deployment.
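For reference, the Deployment that `helm init` later creates for Tiller looks roughly like this (a trimmed sketch, not the exact manifest Helm generates; the labels and apiVersion may differ slightly by cluster version):

```yaml
apiVersion: apps/v1            # may be extensions/v1beta1 on older clusters
kind: Deployment
metadata:
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helm
      name: tiller
  template:
    metadata:
      labels:
        app: helm
        name: tiller
    spec:
      serviceAccountName: tiller           # the SA created below
      containers:
      - name: tiller
        image: gcr.io/kubernetes-helm/tiller:v2.13.1
```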
II. Installation steps. These are simple enough that you'll understand them at a glance, so I won't go into much detail.
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
tar -zxvf helm-v2.13.1-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin/
To install the server side, Tiller, you also need the kubectl tool and a kubeconfig file configured on this machine, so that kubectl can access the apiserver and work normally. Here the node1 node already has kubectl configured.
Because the Kubernetes APIServer has RBAC access control enabled, you need to create a ServiceAccount (SA) named tiller for Tiller to use, and assign it the appropriate roles. For details, see the Helm documentation on Role-based Access Control (https://helm.sh/docs/using_helm/#role-based-access-control). For simplicity, here we bind the built-in cluster-admin ClusterRole to it directly. Create the rbac-config.yaml file:
# Create the SA
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding    # cluster role binding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin       # cluster administrator role
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
# Create the resources from rbac-config.yaml
kubectl create -f rbac-config.yaml
# Deploy tiller into the k8s cluster
helm init --service-account tiller --skip-refresh
This runs successfully in the video, but mine blew up. So, here the real story begins!!!
III. Problems I hit when installing it myself
kubectl get pods -n kube-system
Check the detailed information:
kubectl describe pods tiller-deploy-58565b5464-jcv8w -n kube-system
The following problems appeared:
Events:
  Type     Reason     Age                From                 Message
  ----     ------     ----               ----                 -------
  Normal   Scheduled  91s                default-scheduler    Successfully assigned kube-system/tiller-deploy-58565b5464-jcv8w to liu-node01
  Normal   Pulling    51s (x2 over 84s)  kubelet, liu-node01  Pulling image "gcr.io/kubernetes-helm/tiller:v2.13.1"
  Warning  Failed     21s (x2 over 64s)  kubelet, liu-node01  Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.13.1": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     21s (x2 over 64s)  kubelet, liu-node01  Error: ErrImagePull
  Normal   BackOff    6s (x2 over 63s)   kubelet, liu-node01  Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.13.1"
  Warning  Failed     6s (x2 over 63s)   kubelet, liu-node01  Error: ImagePullBackOff
Based on the problem description, I first modified the apiVersion, changing it to the version matching my own environment.
Use kubectl explain ClusterRoleBinding to check it.
After the modification there were still problems, but the problem had changed: it was no longer a pull error, but a pull failure.
Check again with kubectl describe pods tiller-deploy-58565b5464-jcv8w -n kube-system, and you can see there is a problem with the image.
Events:
  Type     Reason     Age                    From                 Message
  ----     ------     ----                   ----                 -------
  Normal   Scheduled  5m44s                  default-scheduler    Successfully assigned kube-system/tiller-deploy-58565b5464-jcv8w to liu-node01
  Normal   Pulling    2m57s (x4 over 5m37s)  kubelet, liu-node01  Pulling image "gcr.io/kubernetes-helm/tiller:v2.13.1"
  Warning  Failed     2m37s (x4 over 5m17s)  kubelet, liu-node01  Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.13.1": rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     2m37s (x4 over 5m17s)  kubelet, liu-node01  Error: ErrImagePull
  Warning  Failed     2m10s (x7 over 5m16s)  kubelet, liu-node01  Error: ImagePullBackOff
  Normal   BackOff    31s (x13 over 5m16s)   kubelet, liu-node01  Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.13.1"
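When a describe dump is this long, it can help to pull out just the image references that are failing. A minimal sketch, run here against a saved excerpt of the events above (the filename events.txt is arbitrary; in practice you would redirect the kubectl describe output into it):

```shell
# Simulate saving the describe output with a heredoc excerpt of the events above,
# then extract the unique image references mentioned in the failures.
cat <<'EOF' > events.txt
Warning  Failed   kubelet, liu-node01  Failed to pull image "gcr.io/kubernetes-helm/tiller:v2.13.1": rpc error: code = Unknown
Normal   BackOff  kubelet, liu-node01  Back-off pulling image "gcr.io/kubernetes-helm/tiller:v2.13.1"
EOF
grep -o 'gcr\.io/[^"]*' events.txt | sort -u
# gcr.io/kubernetes-helm/tiller:v2.13.1
```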
Since the problem was pulling the image, I went to Docker Hub to check, and found that the corresponding image could not be found there, so I had to substitute an image myself. First search directly to see which images are available, then go to the Docker Hub official website to look.
During the pull I couldn't get the latest version; I tried several others before one succeeded. After a successful pull, change the tag:
docker tag sapcc/tiller:v2.15.2 gcr.io/kubernetes-helm/tiller:v2.13.1
Checking again, it still wasn't Running, which means the image wasn't being used. Edit the Deployment's configuration to change the image pull policy:
kubectl edit deployment tiller-deploy -n kube-system
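Inside the editor, the change is to make kubelet use the locally tagged image instead of going back to gcr.io; the relevant fragment of the container spec ends up roughly like this (a sketch based on the tag used in this walkthrough):

```yaml
spec:
  template:
    spec:
      containers:
      - name: tiller
        image: gcr.io/kubernetes-helm/tiller:v2.13.1
        imagePullPolicy: IfNotPresent   # use the image already tagged locally instead of pulling
```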
Check one last time: it is finally Running.
The deployment is complete. Afterwards I also came across a similar solution; article link:
https://www.jianshu.com/p/d0cdbb49569b
If I had seen it earlier, I wouldn't have wasted half a day fumbling around on my own, haha!!