Deploying an etcd Cluster for Kubernetes - CentOS 7

Environment:

etcd01: 192.168.12.37, CentOS 7.1

etcd02: 192.168.12.178, CentOS 7.1

etcd03: 192.168.12.179, CentOS 7.1

Software version:

etcd: 2.2.5

Procedure:

The steps below use etcd01 as the example; the other two hosts are configured the same way:

Install etcd

[root@docker-registry ~]# yum install etcd -y
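As an optional check, confirm that the installed package provides the 2.2.5 release listed above:

[root@docker-registry ~]# rpm -q etcd

[root@docker-registry ~]# etcd --version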

Edit the configuration file

[root@docker-registry ~]# grep -v '^#' /etc/etcd/etcd.conf

ETCD_NAME=etcd01

ETCD_DATA_DIR="/var/lib/etcd/etcd01"

ETCD_LISTEN_PEER_URLS="http://192.168.12.37:2380"

ETCD_LISTEN_CLIENT_URLS="http://192.168.12.37:2379,http://127.0.0.1:2379"

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.12.37:2380"

ETCD_INITIAL_CLUSTER="etcd01=http://192.168.12.37:2380,etcd02=http://192.168.12.178:2380,etcd03=http://192.168.12.179:2380"

ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-00"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.12.37:2379"

On the other two hosts, modify only the host-specific values above (ETCD_NAME, ETCD_DATA_DIR, and the IP addresses in the LISTEN/ADVERTISE URL settings); everything else, including ETCD_INITIAL_CLUSTER, ETCD_INITIAL_CLUSTER_STATE, and ETCD_INITIAL_CLUSTER_TOKEN, must stay the same. The /var/lib/etcd/etcd01 data directory is created automatically when etcd starts; do not create it in advance.
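For reference, the host-specific values on etcd02 would look like this, derived from the cluster definition above (etcd03 follows the same pattern with 192.168.12.179; the data-directory name simply mirrors the member name):

ETCD_NAME=etcd02

ETCD_DATA_DIR="/var/lib/etcd/etcd02"

ETCD_LISTEN_PEER_URLS="http://192.168.12.178:2380"

ETCD_LISTEN_CLIENT_URLS="http://192.168.12.178:2379,http://127.0.0.1:2379"

ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.12.178:2380"

ETCD_ADVERTISE_CLIENT_URLS="http://192.168.12.178:2379"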

Edit the etcd systemd unit file

[root@docker-registry ~]# more /usr/lib/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

[Service]

Type=notify

WorkingDirectory=/var/lib/etcd/

EnvironmentFile=-/etc/etcd/etcd.conf

User=etcd

# set GOMAXPROCS to number of processors

ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /usr/bin/etcd --name=\"${ETCD_NAME}\" \
                                                          --data-dir=\"${ETCD_DATA_DIR}\" \
                                                          --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" \
                                                          --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" \
                                                          --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" \
                                                          --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" \
                                                          --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" \
                                                          --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\""

Restart=on-failure

LimitNOFILE=65536

[Install]

WantedBy=multi-user.target
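If you changed the unit file, reload systemd so it picks up the new definition before starting the service:

[root@docker-registry ~]# systemctl daemon-reload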

Start the etcd service

[root@docker-registry etcd]# systemctl start etcd

[root@docker-registry etcd]# systemctl status etcd
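To have etcd start automatically after a reboot, you can also enable the unit:

[root@docker-registry etcd]# systemctl enable etcd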

Repeat the steps above to configure etcd02 and etcd03.

After the etcd service is running on all three hosts, check the cluster status:

[root@docker-registry etcd]# etcdctl cluster-health

member xxx is healthy…

[root@docker-registry etcd]# etcdctl member list

49ce2446964e72e3: name=etcd01 peerURLs=http://192.168.12.37:2380 clientURLs=http://192.168.12.37:2379

742a07d658e2e113: name=etcd02 peerURLs=http://192.168.12.178:2380 clientURLs=http://192.168.12.178:2379

eb6e0867bfd315e5: name=etcd03 peerURLs=http://192.168.12.179:2380 clientURLs=http://192.168.12.179:2379

[root@docker-registry etcd]# etcdctl cluster-health

member 49ce2446964e72e3 is healthy: got healthy result from http://192.168.12.37:2379

member 742a07d658e2e113 is healthy: got healthy result from http://192.168.12.178:2379

member eb6e0867bfd315e5 is healthy: got healthy result from http://192.168.12.179:2379

cluster is healthy
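As an extra sanity check, you can write a test key on one member and read it back (etcdctl v2 commands; the key name /test here is just an example):

[root@docker-registry etcd]# etcdctl set /test "hello"

hello

[root@docker-registry etcd]# etcdctl get /test

hello

The same key should be readable from the other two members, and can be removed afterwards with etcdctl rm /test.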

At this point, the etcd cluster is up and running.

Next, switch the existing Kubernetes environment from its single etcd to this etcd cluster.

Stop etcd on the master

[root@k8s_master ~]# systemctl stop etcd

[root@k8s_master ~]# systemctl status etcd

Point the apiserver's etcd configuration on the master at the etcd cluster

[root@k8s_master kubernetes]# vi apiserver

# Comma separated list of nodes in the etcd cluster

KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.12.37:2379,http://192.168.12.178:2379,http://192.168.12.179:2379"
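Before restarting the components, you can optionally confirm that the master can reach each member's health endpoint on its client port (repeat for 192.168.12.178 and 192.168.12.179); it should return a small JSON document such as {"health": "true"}:

[root@k8s_master kubernetes]# curl http://192.168.12.37:2379/health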

Restart the master components

[root@k8s_master kubernetes]# systemctl restart kube-apiserver kube-controller-manager kube-scheduler

[root@k8s_master kubernetes]# systemctl status kube-apiserver kube-controller-manager kube-scheduler
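Once the apiserver is back up, Kubernetes data should start appearing in the cluster. Assuming the apiserver's default /registry key prefix, you can list it from any etcd member:

[root@docker-registry etcd]# etcdctl ls /registry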

Restart the node components

[root@k8s_node01 ~]# systemctl restart docker kubelet kube-proxy

[root@k8s_node01 ~]# systemctl status docker kubelet kube-proxy

[root@k8s_node02 ~]# systemctl restart docker kubelet kube-proxy

[root@k8s_node02 ~]# systemctl status docker kubelet kube-proxy

Check the node status

[root@k8s_master kubernetes]# kubectl get node

NAME            LABELS                                 STATUS   AGE

192.168.12.175  kubernetes.io/hostname=192.168.12.175  Ready    47s

192.168.12.176  kubernetes.io/hostname=192.168.12.176  Ready    9s

Create a pod to test

[root@k8s_master pods]# kubectl create -f frontend-controller.yaml

replicationcontroller "frontend" created

[root@k8s_master pods]# kubectl get pods

NAME            READY    STATUS   RESTARTS  AGE

frontend-40ec5  1/1      Running  0         5s

frontend-43khv  1/1      Running  0         5s

[root@k8s_master pods]# kubectl get rc

CONTROLLER  CONTAINER(S)  IMAGE(S)                          SELECTOR       REPLICAS  AGE

frontend    frontend      kubeguide/guestbook-php-frontend  name=frontend  2         9s

Testing the etcd cluster

1. Stop the etcd service on one host

Stop the etcd service on etcd01:

[root@docker-registry ~]# systemctl stop etcd

[root@docker-registry ~]# systemctl status etcd -l

Check the cluster status:

[root@kafka02 etcd]# etcdctl cluster-health

failed to check the health of member 49ce2446964e72e3 on http://192.168.12.37:2379: Get http://192.168.12.37:2379/health: dial tcp 192.168.12.37:2379: connection refused

member 49ce2446964e72e3 is unreachable: [http://192.168.12.37:2379] are all unreachable

member 742a07d658e2e113 is healthy: got healthy result from http://192.168.12.178:2379

member eb6e0867bfd315e5 is healthy: got healthy result from http://192.168.12.179:2379

cluster is healthy

Check the existing data from the master; it is still there:

[root@k8s_master pods]# kubectl get rc

CONTROLLER  CONTAINER(S)  IMAGE(S)                          SELECTOR       REPLICAS  AGE

frontend    frontend      kubeguide/guestbook-php-frontend  name=frontend  2         4m

[root@k8s_master pods]# kubectl get pods

NAME            READY    STATUS   RESTARTS  AGE

frontend-40ec5  1/1      Running  0         4m

frontend-43khv  1/1      Running  0         4m

Create another pod; it works without problems.

[root@k8s_master k8s]# kubectl create -f redis-master-controller.yaml

replicationcontroller "redis-master" created

[root@k8s_master k8s]# kubectl get rc

CONTROLLER    CONTAINER(S)  IMAGE(S)                          SELECTOR           REPLICAS  AGE

frontend      frontend      kubeguide/guestbook-php-frontend  name=frontend      2         5m

redis-master  master        kubeguide/redis-master            name=redis-master  2         9s

[root@k8s_master k8s]# kubectl get pods

NAME                READY    STATUS   RESTARTS  AGE

frontend-40ec5      1/1      Running  0         5m

frontend-43khv      1/1      Running  0         5m

redis-master-aj9q6  1/1      Running  0         13s

redis-master-dcrxe  1/1      Running  0         13s

2. Stop the etcd service on two hosts

Now stop the etcd service on etcd02 as well, leaving only one etcd member available.

[root@kafka02 etcd]# systemctl stop etcd

[root@kafka02 etcd]# systemctl status etcd.service -l

Check the cluster status; it now reports the cluster as unhealthy.

[root@kafka03 etcd]# etcdctl cluster-health

failed to check the health of member 49ce2446964e72e3 on http://192.168.12.37:2379: Get http://192.168.12.37:2379/health: dial tcp 192.168.12.37:2379: connection refused

member 49ce2446964e72e3 is unreachable: [http://192.168.12.37:2379] are all unreachable

failed to check the health of member 742a07d658e2e113 on http://192.168.12.178:2379: Get http://192.168.12.178:2379/health: dial tcp 192.168.12.178:2379: connection refused

member 742a07d658e2e113 is unreachable: [http://192.168.12.178:2379] are all unreachable

member eb6e0867bfd315e5 is unhealthy: got unhealthy result from http://192.168.12.179:2379

cluster is unhealthy

Check from the master: the existing pods are unaffected.

[root@k8s_master k8s]# kubectl get pods

NAME                READY    STATUS   RESTARTS  AGE

frontend-40ec5      1/1      Running  0         11m

frontend-43khv      1/1      Running  0         11m

redis-master-aj9q6  1/1      Running  0         6m

redis-master-dcrxe  1/1      Running  0         6m

[root@k8s_master k8s]# kubectl get rc

CONTROLLER    CONTAINER(S)  IMAGE(S)                          SELECTOR           REPLICAS  AGE

frontend      frontend      kubeguide/guestbook-php-frontend  name=frontend      2         16m

redis-master  master        kubeguide/redis-master            name=redis-master  2         11m

However, new pods can no longer be created:

[root@k8s_master k8s]# kubectl create -f redis-master-service.yaml

Error from server: error when creating "redis-master-service.yaml": Timeout: request did not complete within allowed duration
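If you want to see the underlying cause of the timeout, the apiserver log on the master typically shows the corresponding etcd connection errors:

[root@k8s_master k8s]# journalctl -u kube-apiserver --since "10 minutes ago"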

In other words, this three-member etcd cluster needs at least two members alive to maintain a quorum (floor(3/2) + 1 = 2) and work normally; with only one member left, writes fail, which is why new pods can no longer be created.

That concludes this article.

Reposted from blog.csdn.net/lic95/article/details/54985802