Deploying helm + tiller on a K8S cluster, and the pitfalls (2)

Environment

This list of problems continues from the previous post; the corresponding K8S and helm versions are:
K8S:v1.17.0
Helm:v2.14.1
Tiller:v2.14.1

1. Problem 1: Error: error installing: the server could not find the requested resource

This error appears when running the helm init command.

1.1. Cause

The init command of older helm versions generates a Deployment against an old K8S API group (extensions/v1beta1), which newer K8S versions (v1.16+) no longer serve, so the API server rejects the resource.
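
To confirm this against your own cluster, you can ask the API server which groups it serves; on v1.16 and later, Deployments are only served under apps/v1 (a quick check, nothing helm-specific):

# kubectl api-versions | grep -E 'apps|extensions'
# kubectl explain deployment | head -n 3

The second command prints the KIND and VERSION (apps/v1) that the server expects for Deployments.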

1.2. Fix

Generate the tiller manifest instead of applying it directly, then fix the API version by hand:

# helm init -i 10.43.166.184:50500/tiller:v2.14.1 --stable-repo-url http://10.43.177.220:8879/charts/ --service-account tiller --output yaml >tiller.yaml
# vi tiller.yaml

apiVersion: extensions/v1beta1

change it to

apiVersion: apps/v1

and add the following to the spec section:

  selector:
    matchLabels:
      app: helm
      name: tiller

Note: the complete tiller.yaml is given in the appendix.
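
After both edits, the head of the Deployment in tiller.yaml should look roughly like this (excerpted from the full file in the appendix):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helm
      name: tiller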

1.3. Apply tiller.yaml

# kubectl apply -f tiller.yaml

Normally, the tiller pod should then be in the Running state, as below:

# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
tiller-deploy-55cf96466f-vkkbd             1/1     Running   0          22h

However, because a few steps were skipped, kubectl does not show the tiller pod at all.
This post follows that chain of problems, step by step, until helm + tiller is fully deployed.
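
A quick way to see whether the deployment ever becomes ready is to watch its rollout status; if it is stuck, the command simply keeps waiting (a sketch):

# kubectl rollout status deployment/tiller-deploy -n kube-system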

1.4. Note

Although helm init ended in an error, it still registered the chart repos, so this "failing" step still has to be run:

# helm repo list
NAME    URL
stable  http://192.168.177.220:8879/charts/
local   http://127.0.0.1:8879/charts
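
If the repos are ever missing, they can be added back by hand with the same URLs (a sketch):

# helm repo add stable http://192.168.177.220:8879/charts/
# helm repo add local http://127.0.0.1:8879/charts
# helm repo list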

2. Problem 2: MinimumReplicasUnavailable leads to ReplicaFailure

Continuing from problem 1: after kubectl apply -f tiller.yaml, no pod is created.

2.1. Check the deployment status

Describe the tiller-deploy deployment. Its condition is:
ReplicaFailure True FailedCreate
and progress is stuck at the first replicaset event:
Scaled up replica set tiller-deploy-55cf96466f to 1

# kubectl describe svc tiller-deploy -n kube-system
Name:              tiller-deploy
Namespace:         kube-system
Labels:            app=helm
                   name=tiller
Annotations:       <none>
Selector:          app=helm,name=tiller
Type:              ClusterIP
IP:                10.96.233.95
Port:              tiller  44134/TCP
TargetPort:        tiller/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
[root@k8s-m1 linux-amd64]# kubectl describe deployment.apps/tiller-deploy -n kube-system
Name:                   tiller-deploy
Namespace:              kube-system
CreationTimestamp:      Mon, 20 Jan 2020 06:07:22 -0500
Labels:                 app=helm
                        name=tiller
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=helm,name=tiller
Replicas:               1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app=helm
                    name=tiller
  Service Account:  tiller
  Containers:
   tiller:
    Image:       10.43.166.184:50500/tiller:v2.14.1
    Ports:       44134/TCP, 44135/TCP
    Host Ports:  0/TCP, 0/TCP
    Liveness:    http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:   http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:                <none>
  Volumes:                 <none>
Conditions:
  Type             Status  Reason
  ----             ------  ------
  Progressing      True    NewReplicaSetCreated
  Available        False   MinimumReplicasUnavailable
  ReplicaFailure   True    FailedCreate
OldReplicaSets:    <none>
NewReplicaSet:     tiller-deploy-55cf96466f (0/1 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  3m17s  deployment-controller  Scaled up replica set tiller-deploy-55cf96466f to 1
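
The deployment's conditions can also be pulled out more compactly with jsonpath, instead of reading the whole describe output (a sketch):

# kubectl get deployment tiller-deploy -n kube-system -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'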

2.2. Check the replicaset status

List the replicasets:

# kubectl get rs -n kube-system
NAME                                 DESIRED   CURRENT   READY   AGE
calico-kube-controllers-564b6667d7   1         1         1       14m
coredns-6955765f44                   2         2         2       14m
tiller-deploy-55cf96466f             1         0         0       97s

Describe the tiller-deploy-55cf96466f replicaset; its events reveal the cause:
serviceaccount "tiller" not found

# kubectl describe rs tiller-deploy-55cf96466f -n kube-system
Name:           tiller-deploy-55cf96466f
Namespace:      kube-system
Selector:       app=helm,name=tiller,pod-template-hash=55cf96466f
Labels:         app=helm
                name=tiller
                pod-template-hash=55cf96466f
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/tiller-deploy
Replicas:       0 current / 1 desired
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app=helm
                    name=tiller
                    pod-template-hash=55cf96466f
  Service Account:  tiller
  Containers:
   tiller:
    Image:       10.43.166.184:50500/tiller:v2.14.1
    Ports:       44134/TCP, 44135/TCP
    Host Ports:  0/TCP, 0/TCP
    Liveness:    http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
    Readiness:   http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
    Environment:
      TILLER_NAMESPACE:    kube-system
      TILLER_HISTORY_MAX:  0
    Mounts:                <none>
  Volumes:                 <none>
Conditions:
  Type             Status  Reason
  ----             ------  ------
  ReplicaFailure   True    FailedCreate
Events:
  Type     Reason        Age                  From                   Message
  ----     ------        ----                 ----                   -------
  Warning  FailedCreate  29s (x15 over 111s)  replicaset-controller  Error creating: pods "tiller-deploy-55cf96466f-" is forbidden: error looking up service account kube-system/tiller: serviceaccount "tiller" not found
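
The same failure also surfaces in the namespace events, which is often a faster place to look (a sketch):

# kubectl get events -n kube-system --field-selector reason=FailedCreate --sort-by=.lastTimestamp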

2.3. Fix

Create the serviceaccount.
This step should have been done before applying tiller.yaml, but if you forgot it, the troubleshooting above shows how to track the cause down.

# cat serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
# kubectl apply -f serviceaccount.yaml
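
Equivalently, the serviceaccount can be created with a single command:

# kubectl create serviceaccount tiller -n kube-system

Once the serviceaccount exists, the replicaset controller retries on its own and the tiller pod should appear.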

3. Problem 3: unable to do port forwarding: socat not found

Continuing from problem 2: once tiller has been deployed successfully, the tiller pod shows Running:

# kubectl get pods -n kube-system |grep tiller
tiller-deploy-55cf96466f-vkkbd             1/1     Running   0          22h

3.1. Run the helm list command

# helm list

It prints the following errors:

E0120 10:20:29.350660   13410 portforward.go:400] an error occurred forwarding 34503 -> 44134: error forwarding port 44134 to pod b1383ec9d858a5e7dd0958b3b5ff6574527f1f511a035e0ae549f87b0458a7ce, uid : unable to do port forwarding: socat not found
E0120 10:20:30.357758   13410 portforward.go:400] an error occurred forwarding 34503 -> 44134: error forwarding port 44134 to pod b1383ec9d858a5e7dd0958b3b5ff6574527f1f511a035e0ae549f87b0458a7ce, uid : unable to do port forwarding: socat not found
E0120 10:20:31.873201   13410 portforward.go:400] an error occurred forwarding 34503 -> 44134: error forwarding port 44134 to pod b1383ec9d858a5e7dd0958b3b5ff6574527f1f511a035e0ae549f87b0458a7ce, uid : unable to do port forwarding: socat not found

3.2. Fix

The cause is that the socat package, which the node needs to do the port forwarding, is not installed:

# yum install -y socat
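
socat has to be present on every node, because the forwarding is done by the kubelet on the node where the tiller pod runs; a quick check on each node (a sketch):

# command -v socat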

After installing socat this error goes away, but a new one appears.

4. Problem 4: Error: configmaps is forbidden: User "system:serviceaccount:kube-system:tiller" cannot list resource "configmaps" in API group "" in the namespace "kube-system"

The keyword forbidden in this message points to an RBAC permissions problem: the tiller serviceaccount is not allowed to read the cluster's resources.

4.1. Fix

# kubectl apply -f clusterrolebinding.yaml
clusterrolebinding.rbac.authorization.k8s.io/tiller created
# cat clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
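
The same binding can also be created and verified from the command line (equivalent to the YAML above; the auth check should answer yes):

# kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
# kubectl auth can-i list configmaps -n kube-system --as=system:serviceaccount:kube-system:tiller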

Note: this step should also have been done before applying tiller.yaml.

Verify basic helm functionality

Re-apply tiller:

# kubectl delete -f tiller.yaml
# kubectl apply -f tiller.yaml

helm commands can now be run normally.
For example, install a chart named sc-mgt with the install command:
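
If there is no sc-mgt chart directory on disk yet, helm can scaffold a skeleton chart to test with (a sketch; the chart name is just the one used in this post):

# helm create sc-mgt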

# helm install sc-mgt

Check the chart's release:

# helm list
NAME                    REVISION        UPDATED                         STATUS          CHART                   APP VERSION     NAMESPACE
busted-nightingale      1               Tue Jan 21 05:03:36 2020        DEPLOYED        sc-mgt-1.1.1       v2              default

With that, all the pits have been filled.

Appendix

# cat tiller.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: helm
      name: tiller
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: 192.168.166.184:50500/tiller:v2.14.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
      serviceAccountName: tiller
status: {}

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}

...

Reposted from blog.csdn.net/weixin_43905458/article/details/104060485