K8S + Cloud-Controller-Manager with OpenStack Cinder for PV Creation, Part 4: Deploying the openstack-cloud-controller-manager Component on an Existing K8S Cluster

1. Overview

The previous two posts covered how to use the local-up-cluster.sh script to create a local cluster that integrates with OpenStack, as well as some problems encountered when creating a local cluster with local-up-cluster.sh.
This post describes how to deploy the openstack-cloud-controller-manager component onto an existing, already-deployed K8S cluster.

2. Environment

The K8S cluster currently has four nodes: three masters forming a highly available control plane, plus one worker node.

# kubectl get nodes -o wide
NAME               STATUS   ROLES    AGE     VERSION                                INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
k8s-m1             Ready    master   47h     v1.17.0                                192.168.1.172   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.5
k8s-m2             Ready    master   47h     v1.17.0                                192.168.1.151   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.5
k8s-m3             Ready    master   47h     v1.17.0                                192.168.1.235   <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.5
k8s-n1             Ready    <none>   47h     v1.17.0                                192.168.1.59    <none>        CentOS Linux 7 (Core)   3.10.0-1062.el7.x86_64   docker://19.3.5

The master components run as pods on the K8S cluster itself:

# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-564b6667d7-w7mv4   1/1     Running   0          47h
calico-node-b6fzg                          1/1     Running   0          47h
calico-node-kc5wj                          1/1     Running   0          5h32m
calico-node-lxwnz                          1/1     Running   1          47h
calico-node-nk72h                          1/1     Running   0          47h
calico-node-qmb8v                          1/1     Running   0          47h
calico-node-twf7z                          1/1     Running   0          44h
coredns-6955765f44-5qlmk                   1/1     Running   0          47h
coredns-6955765f44-w7d47                   1/1     Running   0          47h
etcd-k8s-m1                                1/1     Running   0          47h
etcd-k8s-m2                                1/1     Running   0          47h
etcd-k8s-m3                                1/1     Running   0          47h
kube-apiserver-k8s-m1                      1/1     Running   0          47h
kube-apiserver-k8s-m2                      1/1     Running   0          47h
kube-apiserver-k8s-m3                      1/1     Running   1          47h
kube-controller-manager-k8s-m1             1/1     Running   0          45h
kube-controller-manager-k8s-m2             1/1     Running   0          44h
kube-controller-manager-k8s-m3             1/1     Running   0          44h
kube-proxy-8gn9f                           1/1     Running   0          47h
kube-proxy-djfnm                           1/1     Running   0          47h
kube-proxy-nrn2p                           1/1     Running   0          5h32m
kube-proxy-p8djp                           1/1     Running   0          47h
kube-proxy-s4zvl                           1/1     Running   0          44h
kube-proxy-sshnm                           1/1     Running   0          47h
kube-scheduler-k8s-m1                      1/1     Running   1          47h
kube-scheduler-k8s-m2                      1/1     Running   0          47h
kube-scheduler-k8s-m3                      1/1     Running   0          47h

3. Create the OpenStack cloud-config File

A copy of the cloud-config file must be placed on every master node.

# ls /etc/kubernetes/cloud-config
/etc/kubernetes/cloud-config

For the contents of this file, refer to the earlier post on creating a local cluster with the local-up-cluster.sh script.
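
For reference, the [Global] section of such a cloud-config typically looks like the sketch below. The values are placeholders and the exact set of accepted keys depends on the cloud provider version, so adjust it to your own OpenStack environment:

[Global]
auth-url=http://<keystone-host>:5000/v3
username=<openstack-user>
password=<openstack-password>
tenant-name=<project-name>
domain-name=Default
region=RegionOne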

4. Modify kube-controller-manager on Every Master Node

# vi /etc/kubernetes/manifests/kube-controller-manager.yaml

Add the following flags to the command section:

    - --cloud-provider=external
    - --external-cloud-volume-plugin=openstack
    - --cloud-config=/etc/kubernetes/cloud-config

Add the following to the volumeMounts section:

    - mountPath: /etc/kubernetes/cloud-config
      name: cloudconfig
      readOnly: true

Add the following to the volumes section (the resulting manifest is sketched after this snippet):

  - hostPath:
      path: /etc/kubernetes/cloud-config
      type: FileOrCreate
    name: cloudconfig
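
Putting the three fragments together, the relevant parts of the modified /etc/kubernetes/manifests/kube-controller-manager.yaml look roughly like this (abbreviated sketch; all pre-existing flags, mounts, and volumes stay unchanged):

spec:
  containers:
  - name: kube-controller-manager
    command:
    - kube-controller-manager
    - --cloud-provider=external
    - --external-cloud-volume-plugin=openstack
    - --cloud-config=/etc/kubernetes/cloud-config
    # ... existing flags ...
    volumeMounts:
    - mountPath: /etc/kubernetes/cloud-config
      name: cloudconfig
      readOnly: true
    # ... existing volumeMounts ...
  volumes:
  - hostPath:
      path: /etc/kubernetes/cloud-config
      type: FileOrCreate
    name: cloudconfig
  # ... existing volumes ...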

5. Modify the kubelet Configuration on Every Node

5.1. Locate the kubelet configuration file

# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2020-01-20 11:48:54 EST; 1 day 21h ago
     Docs: http://kubernetes.io/docs/
  Process: 7854 ExecStartPre=/usr/bin/kubelet-pre-start.sh (code=exited, status=0/SUCCESS)
 Main PID: 7868 (kubelet)
    Tasks: 20
   Memory: 42.4M
   CGroup: /system.slice/kubelet.service
           └─7868 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config...

Jan 20 11:48:55 k8s-n1 kubelet[7868]: I0120 11:48:55.273330    7868 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume st...abeaf22")
Jan 20 11:48:55 k8s-n1 kubelet[7868]: I0120 11:48:55.273349    7868 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume st...728b098")
Jan 20 11:48:55 k8s-n1 kubelet[7868]: I0120 11:48:55.273369    7868 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume st...728b098")
Jan 20 11:48:55 k8s-n1 kubelet[7868]: I0120 11:48:55.273453    7868 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume st...728b098")
Jan 20 11:48:55 k8s-n1 kubelet[7868]: I0120 11:48:55.273485    7868 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume st...abeaf22")
Jan 20 11:48:55 k8s-n1 kubelet[7868]: I0120 11:48:55.273524    7868 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume st...728b098")
Jan 20 11:48:55 k8s-n1 kubelet[7868]: I0120 11:48:55.273561    7868 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume st...728b098")
Jan 20 11:48:55 k8s-n1 kubelet[7868]: I0120 11:48:55.273597    7868 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume st...728b098")
Jan 20 11:48:55 k8s-n1 kubelet[7868]: I0120 11:48:55.273618    7868 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for...
Jan 20 11:48:55 k8s-n1 kubelet[7868]: I0120 11:48:55.273635    7868 reconciler.go:156] Reconciler: start to sync state
Hint: Some lines were ellipsized, use -l to show in full.

Here, /etc/systemd/system/kubelet.service.d/10-kubeadm.conf is the drop-in file that passes arguments to the kubelet service.

5.2. Modify the kubelet configuration file

Add --cloud-provider=external to KUBELET_CONFIG_ARGS:

# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml --cloud-provider=external"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

6. Restart the kubelet Service on Every Node

# systemctl restart kubelet
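
Note that with --cloud-provider=external, nodes that register for the first time carry the taint node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule until a cloud controller manager initializes them. If workloads stop scheduling on a node after this step, check for the taint:

# kubectl describe node k8s-n1 | grep -i taint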

7. Start the openstack-cloud-controller-manager Process

/home/k8s/ccm/openstack-cloud-controller-manager --v=3 --vmodule= \
  --feature-gates=AllAlpha=false \
  --cloud-provider=openstack \
  --cloud-config=/home/k8s/ccm/cloud-config \
  --kubeconfig /etc/kubernetes/controller-manager.conf \
  --use-service-account-credentials \
  --leader-elect=false \
  --master=https://apiserver.cluster.local:6443

Where:

--kubeconfig points to /etc/kubernetes/controller-manager.conf
--cloud-config points to /home/k8s/ccm/cloud-config, which has the same content as /etc/kubernetes/cloud-config
apiserver.cluster.local resolves to the node's address:
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.172 apiserver.cluster.local
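
Running the binary in the foreground is fine for testing; to keep it running across logouts and reboots, a simple systemd unit along the following lines can be used (the unit file name is illustrative, not part of the original setup):

# cat /etc/systemd/system/openstack-cloud-controller-manager.service
[Unit]
Description=openstack-cloud-controller-manager
After=network-online.target

[Service]
ExecStart=/home/k8s/ccm/openstack-cloud-controller-manager \
  --v=3 \
  --cloud-provider=openstack \
  --cloud-config=/home/k8s/ccm/cloud-config \
  --kubeconfig=/etc/kubernetes/controller-manager.conf \
  --use-service-account-credentials \
  --leader-elect=false \
  --master=https://apiserver.cluster.local:6443
Restart=on-failure

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload && systemctl enable --now openstack-cloud-controller-manager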

8. Verification

8.1. Create a StorageClass

# kubectl apply -f cinder/sc.yaml
# cat cinder/sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/cinder

8.2. Create a PVC

# kubectl apply -f cinder/pvc.yaml
persistentvolumeclaim/cinder-pvc created
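
The pvc.yaml is not shown in the original post; a claim that matches the result below would look roughly like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cinder-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 1Gi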

8.3. Check the PVC on the K8S cluster

# kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
cinder-pvc   Bound    pvc-b90d204e-f7cf-45d9-86d7-bb67c3647ba2   1Gi        RWO            standard       2s

8.4. Check the volume on the OpenStack side

# openstack volume list
+--------------------------------------+-------------------------------------------------------------+-----------+------+--------------------------------------+
| ID                                   | Name                                                        | Status    | Size | Attached to                          |
+--------------------------------------+-------------------------------------------------------------+-----------+------+--------------------------------------+
| 9b496dbd-949d-4c44-be8f-c88664df041b | kubernetes-dynamic-pvc-b90d204e-f7cf-45d9-86d7-bb67c3647ba2 | available |    1 |                                      |
+--------------------------------------+-------------------------------------------------------------+-----------+------+--------------------------------------+

9. Troubleshooting

The cinder-pvc may remain stuck in the Pending state:

# kubectl describe pvc cinder-pvc
Name:          cinder-pvc
Namespace:     default
StorageClass:  standard
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"cinder-pvc","namespace":"default"},"spec":{"accessM...
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/cinder
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type     Reason              Age              From                         Message
  ----     ------              ----             ----                         -------
  Warning  ProvisioningFailed  3s (x2 over 6s)  persistentvolume-controller  Failed to provision volume with StorageClass "standard": OpenStack cloud provider was not initialized properly : stat /etc/kubernetes/cloud-config: no such file or directory

The likely cause is that one of the master nodes was missed or misconfigured in the steps above, for example /etc/kubernetes/cloud-config is missing on that node, as the event message indicates.
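
A quick way to narrow this down is to check the file and the kube-controller-manager flags on every master (the node names are those of this cluster; adjust as needed):

# for h in k8s-m1 k8s-m2 k8s-m3; do
>   ssh $h 'ls -l /etc/kubernetes/cloud-config; grep cloud- /etc/kubernetes/manifests/kube-controller-manager.yaml'
> done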

Reposted from blog.csdn.net/weixin_43905458/article/details/104069605