K8S + Cloud-Controller-Manager with OpenStack Cinder for PV Creation, Part 2: Creating a Local K8S Cluster with local-up-cluster.sh

1. Overview

The Kubernetes source tree ships a local-up-cluster.sh script that can bring up a single-node local cluster and, together with openstack-cloud-controller-manager, integrate with OpenStack.
The procedure is documented upstream (see the official guide).
However, the procedure has quite a few pitfalls, especially for users in mainland China, mostly because several steps try to download artifacts from sites that are hard to reach.

2. Environment Preparation

The base environment is a CentOS 7.7 virtual machine with no access to the external network (and even with access, some of the downloads may not succeed).

2.1. Install Docker and etcd

Install Docker:

# yum install -y -q docker-ce --enablerepo=docker
# systemctl start docker
# systemctl enable docker

Install etcd:

# yum install -y etcd

2.2. Install the Go Build Environment

# yum install -y gcc rsync
# tar -C /usr/local/ -xvzf go1.13.5.linux-amd64.tar.gz
# export GOROOT=/usr/local/go
# mkdir /home/go
# export GOPATH=/home/go
# export PATH=$PATH:$GOROOT/bin
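
These exports only apply to the current shell. To make them persistent and to confirm the toolchain works, something like the following can be used (a minimal sketch; /etc/profile.d/go.sh is just an assumed location):

# cat > /etc/profile.d/go.sh << 'EOF'
export GOROOT=/usr/local/go
export GOPATH=/home/go
export PATH=$PATH:$GOROOT/bin
EOF
# source /etc/profile.d/go.sh
# go version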

3. Download the Source Code

3.1. Download and Build cloud-provider-openstack

See the companion article: building cloud-provider-openstack.
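
For reference only, a minimal build sketch, assuming the cloud-provider-openstack source has already been unpacked under /home/k8s/cloud-provider-openstack (the path and output location are assumptions; the referenced article has the exact steps):

# cd /home/k8s/cloud-provider-openstack
# go build -o /home/k8s/openstack-cloud-controller-manager ./cmd/openstack-cloud-controller-manager

The resulting binary is the one EXTERNAL_CLOUD_PROVIDER_BINARY points to in step 7.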

3.2. Download and Build Kubernetes

# wget https://codeload.github.com/kubernetes/kubernetes/tar.gz/v1.17.0
# tar zxvf v1.17.0
# export KUBE_FASTBUILD=true
# cd kubernetes-1.17.0/
# make cross

The following problems may come up during the build.

3.2.1. Problem 1: _output/bin/deepcopy-gen: Permission denied

# make cross
./hack/run-in-gopath.sh: line 33: _output/bin/deepcopy-gen: Permission denied
make[2]: *** [gen_deepcopy] Error 1
make[1]: *** [generated_files] Error 2
make: *** [cross] Error 1

Fix: clean the build environment, then run make cross again:

# make clean

3.2.2. Problem 2: find: ‘rsync’: No such file or directory

# make cross
+++ [0117 05:42:52] Building go targets for linux/amd64:
    ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
find: ‘rsync’: No such file or directory
find: ‘rsync’: No such file or directory

Fix: install the rsync package (if it was skipped during environment preparation):

# yum install -y rsync

The resulting binaries:

# ls _output/local/bin/linux/amd64/
apiextensions-apiserver   defaulter-gen  genkubedocs         ginkgo      kubeadm                  kubelet         linkcheck
cloud-controller-manager  e2e_node.test  genman              go2make     kube-apiserver           kubemark        mounter
conversion-gen            e2e.test       genswaggertypedocs  go-bindata  kube-controller-manager  kube-proxy      openapi-gen
deepcopy-gen              gendocs        genyaml             go-runner   kubectl                  kube-scheduler
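
A quick sanity check of the build output (optional; the version flags shown are standard for these binaries):

# _output/local/bin/linux/amd64/kubectl version --client
# _output/local/bin/linux/amd64/kube-apiserver --version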

4. Create the cloud-config File

This file provides the configuration that cloud-provider-openstack needs to connect to OpenStack.

# mkdir /etc/kubernetes
# vi /etc/kubernetes/cloud-config
# cat /etc/kubernetes/cloud-config
[Global]
username=admin
password='123456'
auth-url=http://192.168.166.104:5000/v3
tenant-id=f654f94875b64adeb23134017b7d1bd6
domain-id=default

There is a pitfall hiding here; see Problem 1 in section 9.
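
Before wiring these credentials into the cluster, they can be verified directly with the openstack CLI (the same client used in section 10). A hedged sketch; the mapping of cloud-config keys to CLI options is an assumption and may need adjusting for your deployment:

# openstack --os-auth-url http://192.168.166.104:5000/v3 \
            --os-username admin --os-password 123456 \
            --os-project-id f654f94875b64adeb23134017b7d1bd6 \
            --os-user-domain-id default \
            --os-identity-api-version 3 token issue

If this fails, cloud-controller-manager will fail with the same credentials.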

5. Install cfssl

cfssl is needed when the cluster is brought up (local-up-cluster.sh uses it to generate certificates, as the appendix log shows):

# wget http://pkg.cfssl.org/R1.2/cfssl_linux-amd64
# chmod +x cfssl_linux-amd64
# cp cfssl_linux-amd64 /usr/local/bin/cfssl
# wget http://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
# chmod +x cfssljson_linux-amd64
# cp cfssljson_linux-amd64 /usr/local/bin/cfssljson
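
A quick check that both tools are on the PATH (optional):

# cfssl version
# which cfssljson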

6. Run etcd

According to the official instructions, etcd can be started manually first.
In an environment without external network access this step hangs, because the script tries to download an etcd release from the internet.
The script therefore needs a small modification, as shown below.

6.1. Manually Download the etcd Release

# wget -P /home/k8s/ https://github.com/coreos/etcd/releases/download/v3.4.3/etcd-v3.4.3-linux-amd64.tar.gz

6.2. Modify the Script to Use the Local etcd Tarball

# vi hack/lib/etcd.sh
kube::etcd::install() {
...
      url="https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/etcd-v${ETCD_VERSION}-linux-${arch}.tar.gz"
      download_file="etcd-v${ETCD_VERSION}-linux-${arch}.tar.gz"
      kube::util::download_file "${url}" "${download_file}"
      tar xzf "${download_file}"
      ln -fns "etcd-v${ETCD_VERSION}-linux-${arch}" etcd
      rm "${download_file}"

Change it to:

kube::etcd::install() {
...
      #url="https://github.com/coreos/etcd/releases/download/v${ETCD_VERSION}/etcd-v${ETCD_VERSION}-linux-${arch}.tar.gz"
      download_file="etcd-v${ETCD_VERSION}-linux-${arch}.tar.gz"
      #kube::util::download_file "${url}" "${download_file}"
      cp /home/k8s/etcd-v3.4.3-linux-amd64.tar.gz "${download_file}"
      tar xzf "${download_file}"
      ln -fns "etcd-v${ETCD_VERSION}-linux-${arch}" etcd
      rm "${download_file}"

7. Create the Local Cluster

# export EXTERNAL_CLOUD_PROVIDER_BINARY=/home/k8s/openstack-cloud-controller-manager
# export EXTERNAL_CLOUD_PROVIDER=true
# export CLOUD_PROVIDER=openstack
# export CLOUD_CONFIG=/etc/kubernetes/cloud-config
# cd kubernetes-1.17.0/
# ./hack/install-etcd.sh
# export PATH="/home/k8s/kubernetes-1.17.0/third_party/etcd:${PATH}"
# ./hack/local-up-cluster.sh

Here, openstack-cloud-controller-manager is the binary built in step 3.1.
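
Since these environment variables have to be set every time the cluster is restarted, it can be convenient to wrap the whole sequence in a small start script. A sketch, using the same paths as above (the script name and location are arbitrary):

# cat > /home/k8s/start-local-cluster.sh << 'EOF'
#!/bin/bash
# Environment for local-up-cluster.sh with the external OpenStack cloud provider
export EXTERNAL_CLOUD_PROVIDER_BINARY=/home/k8s/openstack-cloud-controller-manager
export EXTERNAL_CLOUD_PROVIDER=true
export CLOUD_PROVIDER=openstack
export CLOUD_CONFIG=/etc/kubernetes/cloud-config
export PATH="/home/k8s/kubernetes-1.17.0/third_party/etcd:${PATH}"
cd /home/k8s/kubernetes-1.17.0 && ./hack/local-up-cluster.sh
EOF
# chmod +x /home/k8s/start-local-cluster.sh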

8. Prepare for Verification

# ./cluster/kubectl.sh create -f cluster/addons/rbac/kubelet-api-auth/
# export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
# ./cluster/kubectl.sh

Since kubectl is not installed on the system, all commands below go through the ./cluster/kubectl.sh wrapper script.
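
As an optional convenience (not part of the original steps), the kubectl binary built in section 3.2 can be copied onto the PATH so plain kubectl works as well:

# cp /home/k8s/kubernetes-1.17.0/_output/local/bin/linux/amd64/kubectl /usr/local/bin/
# export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
# kubectl get nodes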

9. Problems

Fix the following problems first, then continue with the verification.

9.1. Problem 1: error building controller context: cloud provider could not be initialized: could not init cloud provider "openstack": Authentication failed

This is the pitfall mentioned in section 4: the password must not be wrapped in single quotes.

password='123456'

Change it to:

password=123456

Then rerun ./hack/local-up-cluster.sh.
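
To confirm the fix took effect, the controller manager logs (paths as listed in the script output) can be checked for authentication errors; the exact wording of the messages may differ:

# grep -i "authentication" /tmp/cloud-controller-manager.log /tmp/kube-controller-manager.log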

9.2. Problem 2: W0116 14:57:38]: kube-controller-manager terminated unexpectedly, see /tmp/kube-controller-manager.log

At the end of the ./hack/local-up-cluster.sh output, a line like the following appears:

  cluster/kubectl.sh config set-cluster local --server=https://localhost:6443 --certificate-authority=/var/run/kubernetes/server-ca.crt
  cluster/kubectl.sh config set-credentials myself --client-key=/var/run/kubernetes/client-admin.key --client-certificate=/var/run/kubernetes/client-admin.crt
  cluster/kubectl.sh config set-context local --cluster=local --user=myself
  cluster/kubectl.sh config use-context local
  cluster/kubectl.sh
W0120 10:46:45]: kube-controller-manager terminated unexpectedly, see /tmp/kube-controller-manager.log

This message is actually harmless. It appears because the openstack-cloud-controller-manager process does not exit when ./hack/local-up-cluster.sh is stopped, so the warning shows up the next time ./hack/local-up-cluster.sh runs. To avoid it, simply kill the leftover openstack-cloud-controller-manager process by hand before rerunning ./hack/local-up-cluster.sh, as shown below.

./hack/local-up-cluster.sh: line 338: 20500 Terminated              ${CONTROLPLANE_SUDO} "${GO_OUT}/kube-controller-manager" --v="${LOG_LEVEL}" --vmodule="${LOG_SPEC}" --service-account-private-key-file="${SERVICE_ACCOUNT_KEY}" --root-ca-file="${ROOT_CA_FILE}" --cluster-signing-cert-file="${CLUSTER_SIGNING_CERT_FILE}" --cluster-signing-key-file="${CLUSTER_SIGNING_KEY_FILE}" --enable-hostpath-provisioner="${ENABLE_HOSTPATH_PROVISIONER}" ${node_cidr_args[@]+"${node_cidr_args[@]}"} --pvclaimbinder-sync-period="${CLAIM_BINDER_SYNC_PERIOD}" --feature-gates="${FEATURE_GATES}" "${cloud_config_arg[@]}" --kubeconfig "${CERT_DIR}"/controller.kubeconfig --use-service-account-credentials --controllers="${KUBE_CONTROLLERS}" --leader-elect=false --cert-dir="${CERT_DIR}" --master="https://${API_HOST}:${API_SECURE_PORT}" > "${CTLRMGR_LOG}" 2>&1
./hack/local-up-cluster.sh: line 338: 21030 Terminated              sudo "${GO_OUT}/kube-proxy" --v="${LOG_LEVEL}" --config=/tmp/kube-proxy.yaml --master="https://${API_HOST}:${API_SECURE_PORT}" > "${PROXY_LOG}" 2>&1
/home/k8s/kubernetes-1.17.0/hack/lib/etcd.sh: line 94: 20502 Terminated              ${CONTROLPLANE_SUDO} "${GO_OUT}/kube-scheduler" --v="${LOG_LEVEL}" --leader-elect=false --kubeconfig "${CERT_DIR}"/scheduler.kubeconfig --feature-gates="${FEATURE_GATES}" --master="https://${API_HOST}:${API_SECURE_PORT}" > "${SCHEDULER_LOG}" 2>&1
# ps -elf |grep cloud
0 S root      7419 15398  0  80   0 - 28178 pipe_w 06:41 pts/0    00:00:00 grep --color=auto cloud
4 S root      7506     1  0  80   0 - 36287 futex_ Jan20 ?        00:08:16 /home/k8s/linux-amd64/openstack-cloud-controller-manager --v=3 --vmodule=  --feature-gates=AllAlpha=false --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud-config --kubeconfig /var/run/kubernetes/controller.kubeconfig --use-service-account-credentials --leader-elect=false --master=https://localhost:6443
# kill -9 7506
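
Equivalently, the leftover process can be cleaned up with a single pkill before each run (a convenience, not part of the original steps):

# pkill -f openstack-cloud-controller-manager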

10. Verify PV Creation from Cinder

10.1. Create the StorageClass

By default the StorageClass already exists (local-up-cluster.sh creates a default one for OpenStack, as the appendix log shows):

# cat /home/k8s/cinder/sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/cinder
# ./cluster/kubectl.sh create -f /home/k8s/cinder/sc.yaml
Error from server (AlreadyExists): error when creating "/home/k8s/cinder/sc.yaml": storageclasses.storage.k8s.io "standard" already exists
# ./cluster/kubectl.sh get storageclass
NAME                 PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   kubernetes.io/cinder   Delete          Immediate           false                  14m
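
If a non-default StorageClass is needed, the in-tree kubernetes.io/cinder provisioner also accepts parameters such as type (Cinder volume type) and availability (availability zone). A sketch; the name and parameter value below are placeholders for your OpenStack environment:

# cat << 'EOF' | ./cluster/kubectl.sh create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-az-nova          # example name
provisioner: kubernetes.io/cinder
parameters:
  availability: nova            # Cinder availability zone (assumed value)
EOF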

10.2. Create a PVC

# cat /home/k8s/cinder/pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: standard

# ./cluster/kubectl.sh create -f /home/k8s/cinder/pvc.yaml
persistentvolumeclaim/claim created

10.3. Check the PVC

In Kubernetes the PVC is already in the Bound state:

# ./cluster/kubectl.sh get pvc
NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim   Bound    pvc-46a8e199-616e-49d0-b7a0-cd4282e6e464   1Gi        RWO            standard       11s
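
The bound PV records the ID of the Cinder volume it was provisioned from, which can be cross-checked against OpenStack (a quick inspection sketch):

# ./cluster/kubectl.sh get pv
# ./cluster/kubectl.sh get pv pvc-46a8e199-616e-49d0-b7a0-cd4282e6e464 -o yaml | grep -i volumeID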

In OpenStack, the status of the corresponding Cinder volume is available:

#  openstack volume list
+--------------------------------------+----------------------------------------------------------------------------------------------+-----------+------+------------------------------------------------------------+
| ID                                   | Name                                                                                         | Status    | Size | Attached to                                                |
+--------------------------------------+----------------------------------------------------------------------------------------------+-----------+------+------------------------------------------------------------+
| fa232c86-8c5f-441a-b4fd-05eb1500f06e | kubernetes-dynamic-pvc-46a8e199-616e-49d0-b7a0-cd4282e6e464                                  | available |    1 |

Appendix

Full console output of ./hack/local-up-cluster.sh:

# ./hack/local-up-cluster.sh
WARNING : The kubelet is configured to not fail even if swap is enabled; production deployments should disable swap.
make: Entering directory `/home/k8s/kubernetes-1.17.0'
make[1]: Entering directory `/home/k8s/kubernetes-1.17.0'
make[1]: Leaving directory `/home/k8s/kubernetes-1.17.0'
+++ [0122 06:00:26] Building go targets for linux/amd64:
    cmd/kubectl
    cmd/kube-apiserver
    cmd/kube-controller-manager
    cmd/cloud-controller-manager
    cmd/kubelet
    cmd/kube-proxy
    cmd/kube-scheduler
make: Leaving directory `/home/k8s/kubernetes-1.17.0'
Kubelet cgroup driver defaulted to use: cgroupfs
/home/k8s/kubernetes-1.17.0/third_party/etcd:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/go/bin
API SERVER insecure port is free, proceeding...
API SERVER secure port is free, proceeding...
Detected host and ready to start services.  Doing some housekeeping first...
Using GO_OUT /home/k8s/kubernetes-1.17.0/_output/local/bin/linux/amd64
Starting services now!
Starting etcd
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.GqoEidQf3a --listen-client-urls http://127.0.0.1:2379 --debug > "/tmp/etcd.log" 2>/dev/null
Waiting for etcd to come up.
+++ [0122 06:01:36] On try 2, etcd: : {"health":"true"}
{"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"2","raft_term":"2"}}Generating a 2048 bit RSA private key
............................................................+++
..........................................+++
writing new private key to '/var/run/kubernetes/server-ca.key'
-----
Generating a 2048 bit RSA private key
....................................................................................................+++
..........................+++
writing new private key to '/var/run/kubernetes/client-ca.key'
-----
Generating a 2048 bit RSA private key
....................................+++
...........................................................................+++
writing new private key to '/var/run/kubernetes/request-header-ca.key'
-----
2020/01/22 06:01:36 [INFO] generate received request
2020/01/22 06:01:36 [INFO] received CSR
2020/01/22 06:01:36 [INFO] generating key: rsa-2048
2020/01/22 06:01:36 [INFO] encoded CSR
2020/01/22 06:01:36 [INFO] signed certificate with serial number 453031919308810844465058357568071157473298714837
2020/01/22 06:01:36 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/01/22 06:01:36 [INFO] generate received request
2020/01/22 06:01:36 [INFO] received CSR
2020/01/22 06:01:36 [INFO] generating key: rsa-2048
2020/01/22 06:01:37 [INFO] encoded CSR
2020/01/22 06:01:37 [INFO] signed certificate with serial number 709261868239232130305798688752665925405005802871
2020/01/22 06:01:37 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/01/22 06:01:37 [INFO] generate received request
2020/01/22 06:01:37 [INFO] received CSR
2020/01/22 06:01:37 [INFO] generating key: rsa-2048
2020/01/22 06:01:37 [INFO] encoded CSR
2020/01/22 06:01:37 [INFO] signed certificate with serial number 229867648452227828218679371886950056579068439708
2020/01/22 06:01:37 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/01/22 06:01:37 [INFO] generate received request
2020/01/22 06:01:37 [INFO] received CSR
2020/01/22 06:01:37 [INFO] generating key: rsa-2048
2020/01/22 06:01:37 [INFO] encoded CSR
2020/01/22 06:01:37 [INFO] signed certificate with serial number 299352224117110765517174665934492853324845999138
2020/01/22 06:01:37 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/01/22 06:01:37 [INFO] generate received request
2020/01/22 06:01:37 [INFO] received CSR
2020/01/22 06:01:37 [INFO] generating key: rsa-2048
2020/01/22 06:01:38 [INFO] encoded CSR
2020/01/22 06:01:38 [INFO] signed certificate with serial number 316691270087856516824387548865800457547775481118
2020/01/22 06:01:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/01/22 06:01:38 [INFO] generate received request
2020/01/22 06:01:38 [INFO] received CSR
2020/01/22 06:01:38 [INFO] generating key: rsa-2048
2020/01/22 06:01:38 [INFO] encoded CSR
2020/01/22 06:01:38 [INFO] signed certificate with serial number 101943346928278900585818938882521300751370316405
2020/01/22 06:01:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/01/22 06:01:38 [INFO] generate received request
2020/01/22 06:01:38 [INFO] received CSR
2020/01/22 06:01:38 [INFO] generating key: rsa-2048
2020/01/22 06:01:38 [INFO] encoded CSR
2020/01/22 06:01:38 [INFO] signed certificate with serial number 33930824437173972153485588136591561043396564085
2020/01/22 06:01:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/01/22 06:01:38 [INFO] generate received request
2020/01/22 06:01:38 [INFO] received CSR
2020/01/22 06:01:38 [INFO] generating key: rsa-2048
2020/01/22 06:01:39 [INFO] encoded CSR
2020/01/22 06:01:39 [INFO] signed certificate with serial number 436487415659720123623451081520417441880070519797
2020/01/22 06:01:39 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Waiting for apiserver to come up
+++ [0122 06:01:44] On try 5, apiserver: : ok
clusterrolebinding.rbac.authorization.k8s.io/kube-apiserver-kubelet-admin created
Cluster "local-up-cluster" set.
use 'kubectl --kubeconfig=/var/run/kubernetes/admin-kube-aggregator.kubeconfig' to use the aggregated API server
service/kube-dns created
serviceaccount/kube-dns created
configmap/kube-dns created
deployment.apps/kube-dns created
Kube-dns addon successfully deployed.
2020/01/22 06:01:49 [INFO] generate received request
2020/01/22 06:01:49 [INFO] received CSR
2020/01/22 06:01:49 [INFO] generating key: rsa-2048
2020/01/22 06:01:49 [INFO] encoded CSR
2020/01/22 06:01:49 [INFO] signed certificate with serial number 682999840233530521406074173913896590099075056750
2020/01/22 06:01:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
kubelet ( 20641 ) is running.
wait kubelet ready
No resources found in default namespace.
127.0.0.1   NotReady   <none>   1s    v1.17.0
2020/01/22 06:01:51 [INFO] generate received request
2020/01/22 06:01:51 [INFO] received CSR
2020/01/22 06:01:51 [INFO] generating key: rsa-2048
2020/01/22 06:01:52 [INFO] encoded CSR
2020/01/22 06:01:52 [INFO] signed certificate with serial number 623602842026296022400118347217504027883187630076
2020/01/22 06:01:52 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
Create default storage class for openstack
storageclass.storage.k8s.io/standard created
Local Kubernetes cluster is running. Press Ctrl-C to shut it down.

Logs:
  /tmp/kube-apiserver.log
  /tmp/kube-controller-manager.log
  /tmp/cloud-controller-manager.log
  /tmp/kube-proxy.log
  /tmp/kube-scheduler.log
  /tmp/kubelet.log

To start using your cluster, you can open up another terminal/tab and run:

  export KUBECONFIG=/var/run/kubernetes/admin.kubeconfig
  cluster/kubectl.sh

Alternatively, you can write to the default kubeconfig:

  export KUBERNETES_PROVIDER=local

  cluster/kubectl.sh config set-cluster local --server=https://localhost:6443 --certificate-authority=/var/run/kubernetes/server-ca.crt
  cluster/kubectl.sh config set-credentials myself --client-key=/var/run/kubernetes/client-admin.key --client-certificate=/var/run/kubernetes/client-admin.crt
  cluster/kubectl.sh config set-context local --cluster=local --user=myself
  cluster/kubectl.sh config use-context local
  cluster/kubectl.sh