Installing Kubernetes with Kubespray

Network interface used in this walkthrough: ens33, with PEERDNS=no set in its ifcfg file so DHCP does not overwrite /etc/resolv.conf.

Kubespray is a Kubernetes incubator project whose goal is a production-ready Kubernetes deployment. It is built on Ansible playbooks that define the tasks for provisioning the hosts and deploying the cluster, and it has the following features:


Deploys on AWS, GCE, Azure, OpenStack, and bare metal.

Deploys highly available Kubernetes clusters.

Composable: choose which network plugin (flannel, calico, canal, weave) to deploy.

Supports multiple Linux distributions (CoreOS, Debian Jessie, Ubuntu 16.04, CentOS/RHEL 7).

Kubespray also makes it quick to scale an existing Kubernetes cluster. The initial install ran into quite a few problems; the process is documented below.


 

Temporarily disable swap (it comes back after a reboot):

[root@kolla ~]# swapoff -a


[root@kolla ~]# cat /etc/fstab 

UUID=517ebfdd-d929-44a3-8bc0-968ca0d186f7 /                       xfs     defaults        0 0

UUID=8104a15b-585f-4649-9bc6-20dbb71b3816 /boot                   xfs     defaults        0 0

#UUID=bbeff511-493a-480a-b4d9-5af1519b5299 swap                    swap    defaults        0 0
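To make the change permanent rather than temporary, the swap entry in /etc/fstab can be commented out, as in the listing above. A sketch, run against a throwaway copy here; on a real host, point the sed at /etc/fstab itself (and keep a backup):

```shell
# Demo fstab copy; on a real host operate on /etc/fstab instead.
cat > /tmp/fstab.demo <<'EOF'
UUID=517ebfdd-d929-44a3-8bc0-968ca0d186f7 /     xfs  defaults 0 0
UUID=bbeff511-493a-480a-b4d9-5af1519b5299 swap  swap defaults 0 0
EOF
# Prefix every uncommented swap entry with '#'.
sed -i 's/^\([^#].*[[:space:]]swap[[:space:]]\)/#\1/' /tmp/fstab.demo
grep swap /tmp/fstab.demo
```

After this, swapoff -a takes effect for the running system and the reboot no longer re-enables swap.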

Configure DNS by editing /etc/resolv.conf:

[root@kolla ~]# echo "nameserver 8.8.8.8" >/etc/resolv.conf

[root@kolla ~]# systemctl stop firewalld

[root@kolla ~]# systemctl disable firewalld

[root@kolla ~]# ssh-keygen

[root@kolla ~]# ssh-copy-id root@<node-ip>



[root@kolla ~]# yum install -y epel-release

[root@kolla ~]# yum install -y python-pip python-netaddr ansible git

[root@kolla ~]# pip install --upgrade Jinja2

[root@kolla ~]# pip install --upgrade pip

Install Docker

See my separate article for the Docker installation steps.


Finding the corresponding images and versions on Alibaba Cloud

Alibaba Cloud image search: https://dev.aliyun.com/search.html

Docker Hub: https://hub.docker.com/

Download Kubespray (official repository)

[root@kolla ~]#  cd /root/

[root@kolla ~]#  git clone https://github.com/kubernetes-incubator/kubespray.git

[root@kolla ~]#  cd kubespray

Switch branch (TBD)

-------------------------------------------- TBD

cd kubespray

git checkout v2.4.0 -b myv2.4.0

cp inventory/inventory.example inventory/inventory.cfg

-------------------------------------------- TBD

Set the supported operating system (for the dashboard, decide between true and false; access differs accordingly)

------------------------------------------------------

vi inventory/group_vars/all.yml

bootstrap_os: centos

vi inventory/group_vars/k8s-cluster.yml

dashboard_enabled: false

kube_api_pwd: "hello-world8888"

------------------------------------------------------


[root@kolla kubespray]# grep -r 'Versions' .

./roles/dnsmasq/defaults/main.yml:# Versions

./roles/download/defaults/main.yml:# Versions

./roles/kubernetes-apps/ansible/defaults/main.yml:# Versions

./roles/kubernetes-apps/istio/templates/istio-initializer.yml.j2:        apiVersions:

[root@kolla kubespray]# 

[root@kolla kubespray]# grep -r 'v1.9.' .

./roles/bootstrap-os/files/get-pip.py:E3?dmsGKgdl$sm$JB!fuat?v1|9*1OtNCuG%A{j(7h-47SAd*2OgGdIE3?i8oOKiV2yPUwEADJ~L#*d

./roles/kubernetes-apps/cluster_roles/tasks/main.yml:    - kube_version | version_compare('v1.9.0', '>=')

./roles/kubernetes-apps/cluster_roles/tasks/main.yml:    - kube_version | version_compare('v1.9.3', '<=')

./roles/kubernetes-apps/cluster_roles/tasks/main.yml:    - kube_version | version_compare('v1.9.0', '>=')

./roles/kubernetes-apps/cluster_roles/tasks/main.yml:    - kube_version | version_compare('v1.9.3', '<=')

./roles/kubernetes-apps/cluster_roles/tasks/main.yml:    - kube_version | version_compare('v1.9.0', '>=')

./roles/kubernetes-apps/cluster_roles/tasks/main.yml:    - kube_version | version_compare('v1.9.3', '<=')

./roles/kubernetes/master/defaults/main.yml:      {%- if kube_version | version_compare('v1.9', '<') -%}

./roles/kubernetes/master/templates/kubeadm-config.yaml.j2:{% if kube_version | version_compare('v1.9', '>=') %}

./roles/kubernetes/master/templates/manifests/kube-apiserver.manifest.j2:{%   if kube_version | version_compare('v1.9', '<')  %}

./roles/kubernetes/master/templates/manifests/kube-apiserver.manifest.j2:{% if kube_version | version_compare('v1.9', '>=') %}

./roles/kubernetes/master/templates/manifests/kube-apiserver.manifest.j2:{% if kube_version | version_compare('v1.9', '>=') %}

./roles/kubernetes/node/templates/vsphere-cloud-config.j2:{% if kube_version | version_compare('v1.9.2', '>=') %}

./roles/kubernetes/node/templates/vsphere-cloud-config.j2:{% if kube_version | version_compare('v1.9.2', '>=') %}

[root@kolla kubespray]# 


Very important: the versions in this article match the source at the time of writing. The images and versions have since changed, so download the current kubespray source, or browse the history on GitHub, and pin the versions accordingly.


The relevant YAML files and paths:

kubespray/roles/kubernetes-apps/ansible/defaults/main.yml

kubespray/roles/download/defaults/main.yml

kubespray/extra_playbooks/roles/download/defaults/main.yml

kubespray/inventory/group_vars/k8s-cluster.yml

kubespray/roles/dnsmasq/templates/dnsmasq-autoscaler.yml


Version-related lines (view only):

[root@kolla ~]# vi kubespray/roles/kubernetes-apps/ansible/defaults/main.yml

      2 # Versions

      3 kubedns_version: 1.14.10

      4 kubednsautoscaler_version: 1.1.2

     44 # Dashboard

     45 dashboard_enabled: true

     46 dashboard_image_repo: gcr.io/google_containers/kubernetes-dashboard-amd64

     47 dashboard_image_tag: v1.8.3

[root@kolla ~]# vi kubespray/roles/download/defaults/main.yml

     26 # Versions

     27 kube_version: v1.10.4

     28 kubeadm_version: "{{ kube_version }}"

     41 weave_version: 2.3.0

    117 kubedns_version: 1.14.10

    130 kubednsautoscaler_version: 1.1.2

[root@kolla ~]# vi kubespray/extra_playbooks/roles/download/defaults/main.yml


[root@kolla ~]# vi kubespray/inventory/sample/group_vars/k8s-cluster.yml

     22 kube_version: v1.10.4

Adjust the following as needed:

     68 kube_network_plugin: calico

     94 kube_service_addresses: 10.233.0.0/18

     99 kube_pods_subnet: 10.233.64.0/18

About 25 images need to be downloaded (through a VPN).

Image:tag list of the 25 images that must be fetched from behind the firewall:

gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.2
gcr.io/google_containers/pause-amd64:3.0
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7
gcr.io/google_containers/elasticsearch:v2.4.1
gcr.io/google_containers/fluentd-elasticsearch:1.22
gcr.io/google_containers/kibana:v4.6.1
gcr.io/kubernetes-helm/tiller:v2.7.2
gcr.io/google_containers/kubernetes-dashboard-init-amd64:v1.0.1
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.1
quay.io/l23network/k8s-netchecker-agent:v1.0
quay.io/l23network/k8s-netchecker-server:v1.0
quay.io/coreos/etcd:v3.2.4
quay.io/coreos/flannel:v0.9.1
quay.io/coreos/flannel-cni:v0.3.0
quay.io/calico/ctl:v1.6.1
quay.io/calico/node:v2.6.2
quay.io/calico/cni:v1.11.0
quay.io/calico/kube-controllers:v1.0.0
quay.io/calico/routereflector:v0.4.0
quay.io/coreos/hyperkube:v1.9.0_coreos.0
quay.io/ant31/kargo:master
quay.io/external_storage/local-volume-provisioner-bootstrap:v1.0.0
quay.io/external_storage/local-volume-provisioner:v1.0.0


Mirrors of these images can be found on Docker Hub:

docker pull googlecontainer/<image-name>:<tag>

The mirrored images are listed below.

docker pull jiang7865134/gcr.io_google_containers_cluster-proportional-autoscaler-amd64:1.1.2
docker pull jiang7865134/gcr.io_google_containers_pause-amd64:3.0
docker pull jiang7865134/gcr.io_google_containers_k8s-dns-kube-dns-amd64:1.14.7
docker pull jiang7865134/gcr.io_google_containers_k8s-dns-dnsmasq-nanny-amd64:1.14.7
docker pull jiang7865134/gcr.io_google_containers_k8s-dns-sidecar-amd64:1.14.7
docker pull jiang7865134/gcr.io_google_containers_elasticsearch:v2.4.1
docker pull jiang7865134/gcr.io_google_containers_fluentd-elasticsearch:1.22
docker pull jiang7865134/gcr.io_google_containers_kibana:v4.6.1
docker pull jiang7865134/gcr.io_kubernetes-helm_tiller:v2.7.2
docker pull jiang7865134/gcr.io_google_containers_kubernetes-dashboard-init-amd64:v1.0.1
docker pull jiang7865134/gcr.io_google_containers_kubernetes-dashboard-amd64:v1.8.1
docker pull jiang7865134/quay.io_l23network_k8s-netchecker-agent:v1.0
docker pull jiang7865134/quay.io_l23network_k8s-netchecker-server:v1.0
docker pull jiang7865134/quay.io_coreos_etcd:v3.2.4
docker pull jiang7865134/quay.io_coreos_flannel:v0.9.1
docker pull jiang7865134/quay.io_coreos_flannel-cni:v0.3.0
docker pull jiang7865134/quay.io_calico_ctl:v1.6.1
docker pull jiang7865134/quay.io_calico_node:v2.6.2
docker pull jiang7865134/quay.io_calico_cni:v1.11.0
docker pull jiang7865134/quay.io_calico_kube-controllers:v1.0.0
docker pull jiang7865134/quay.io_calico_routereflector:v0.4.0
docker pull jiang7865134/quay.io_coreos_hyperkube:v1.9.0_coreos.0
docker pull jiang7865134/quay.io_ant31_kargo:master
docker pull jiang7865134/quay.io_external_storage_local-volume-provisioner-bootstrap:v1.0.0
docker pull jiang7865134/quay.io_external_storage_local-volume-provisioner:v1.0.0
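Rather than pasting 25 commands, the pulls can be driven from a list file. A sketch that only prints the commands so they can be reviewed first; the two entries are just samples from the list above:

```shell
# Generate `docker pull` commands from a file of mirror image references.
cat > /tmp/images.txt <<'EOF'
jiang7865134/gcr.io_google_containers_pause-amd64:3.0
jiang7865134/quay.io_coreos_etcd:v3.2.4
EOF
while read -r img; do
    echo "docker pull $img"
done < /tmp/images.txt
```

Piping the output to sh runs the pulls for real.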

  

Change the image names in the YAML files to the corresponding Docker Hub names (recommended, since both master and node deployment pull the images automatically):

cd /root/kubespray
grep -r 'gcr.io' .
grep -r 'quay.io' .
sed -i 's#gcr\.io\/google_containers\/#jiang7865134/gcr\.io_google_containers_#g' roles/download/defaults/main.yml
#sed -i 's#gcr\.io\/google_containers\/#jiang7865134/gcr\.io_google_containers_#g' roles/dnsmasq/templates/dnsmasq-autoscaler.yml.j2
sed -i 's#gcr\.io\/google_containers\/#jiang7865134/gcr\.io_google_containers_#g' roles/kubernetes-apps/ansible/defaults/main.yml
sed -i 's#gcr\.io\/kubernetes-helm\/#jiang7865134/gcr\.io_kubernetes-helm_#g' roles/download/defaults/main.yml
sed -i 's#quay\.io\/l23network\/#jiang7865134/quay\.io_l23network_#g' docs/netcheck.md
sed -i 's#quay\.io\/l23network\/#jiang7865134/quay\.io_l23network_#g' roles/download/defaults/main.yml
sed -i 's#quay\.io\/coreos\/#jiang7865134/quay\.io_coreos_#g' roles/download/defaults/main.yml
sed -i 's#quay\.io\/calico\/#jiang7865134/quay\.io_calico_#g' roles/download/defaults/main.yml
sed -i 's#quay\.io\/external_storage\/#jiang7865134/quay\.io_external_storage_#g' roles/kubernetes-apps/local_volume_provisioner/defaults/main.yml
sed -i 's#quay\.io\/ant31\/kargo#jiang7865134/quay\.io_ant31_kargo#g' .gitlab-ci.yml
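The renaming scheme in these sed commands is mechanical: replace the `/` separators of the upstream reference with `_` and prefix the Docker Hub account. A small helper (hypothetical, just for double-checking a name before editing the YAML):

```shell
# Map registry/org/name:tag -> jiang7865134/registry_org_name:tag
mirror_name() {
    # The tag contains no '/', so replacing every '/' in the full
    # reference and prefixing the account gives the mirror name.
    printf 'jiang7865134/%s\n' "$(printf '%s' "$1" | sed 's#/#_#g')"
}

mirror_name gcr.io/google_containers/pause-amd64:3.0
# -> jiang7865134/gcr.io_google_containers_pause-amd64:3.0
mirror_name quay.io/coreos/etcd:v3.2.4
# -> jiang7865134/quay.io_coreos_etcd:v3.2.4
```

After running the sed commands, re-running the two greps is an easy way to confirm nothing still references gcr.io or quay.io directly.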


Adjust the Ansible inventory configuration.

[root@kolla inventory]# ls

local  sample

local holds the configuration for a single-node local deployment; sample is for a cluster deployment, including HA for the masters and for etcd.

[root@kolla inventory]# pwd

/home/hadoop/kubespray/inventory

[root@kolla inventory]# cp /root/kubespray/inventory/sample/hosts.ini ./inventory.cfg

[root@kolla inventory]# cat inventory.cfg



# ## Configure 'ip' variable to bind kubernetes services on a

# ## different ip than the default iface

# node1 ansible_host=95.54.0.12  # ip=10.3.0.1

# node2 ansible_host=95.54.0.13  # ip=10.3.0.2

# node3 ansible_host=95.54.0.14  # ip=10.3.0.3

# node4 ansible_host=95.54.0.15  # ip=10.3.0.4

# node5 ansible_host=95.54.0.16  # ip=10.3.0.5

# node6 ansible_host=95.54.0.17  # ip=10.3.0.6


# ## configure a bastion host if your nodes are not directly reachable

# bastion ansible_host=x.x.x.x ansible_user=some_user


# [kube-master]

# node1

# node2


# [etcd]

# node1

# node2

# node3


# [kube-node]

# node2

# node3

# node4

# node5

# node6


# [kube-ingress]

# node2

# node3


# [k8s-cluster:children]

# kube-master

# kube-node

# kube-ingress


Deploy (two ways to run the playbook):

[root@kolla ~]# ansible-playbook -b -i inventory/inventory.cfg cluster.yml --flush-cache

[root@kolla ~]# ansible-playbook -i inventory/inventory.cfg cluster.yml


etcd is installed on one host by default; for more hosts, the following may need editing (or may not):

[root@kolla ~]# vi /root/kubespray/roles/kubernetes/preinstall/tasks/verify-settings.yml



Problems hit during deployment and how they were fixed (mind what Ansible is automating across the hosts)

1. Caused by the Python version

fatal: [node1]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 172.28.2.211 closed.\r\n", "module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n", "msg": "MODULE FAILURE", "rc": 0}

Fix (on Ubuntu; the CentOS equivalent is TBD):

apt-add-repository ppa:ansible/ansible

apt-get install python

python -V

Python 2.7.12

2. Missing the python-netaddr package

localhost: The ipaddr filter requires python-netaddr be installed on the ansible controller

Fix: yum install python-netaddr

3. Default configuration file

TASK [kubernetes/preinstall : Stop if even number of etcd hosts]

fatal: [node1]: FAILED! => {

"assertion": "groups.etcd|length is not divisibleby 2",

"changed": false,

"evaluated_to": false

Fix 1 (temporary): in the [etcd] group of kubespray/inventory/inventory.cfg, keep only master (by default there are 2 entries, localhost and master):

[etcd]

master

The corresponding YAML is in the preflight checks under kubernetes/preinstall:

cd /root/kubespray/roles/kubernetes/preinstall/tasks

grep etcd *

etchosts.yml:      {% for item in (groups['k8s-cluster'] + groups['etcd'] + groups['calico-rr']|default([]))|unique -%}{{ hostvars[item]['access_ip'] | default(hostvars[item]['ip'] | default(hostvars[item]['ansible_default_ipv4']['address'])) }}{% if (item != hostvars[item]['ansible_hostname']) %} {{ hostvars[item]['ansible_hostname'] }} {{ hostvars[item]['ansible_hostname'] }}.{{ dns_domain }}{% endif %} {{ item }} {{ item }}.{{ dns_domain }}

verify-settings.yml:- name: Stop if even number of etcd hosts

verify-settings.yml:    that: groups.etcd|length is not divisibleby 2

Fix 2:

Keep the [etcd] group unchanged and edit verify-settings.yml instead:

49     that: groups.etcd|length is not divisibleby 5

Change the divisor to node count + 1 = 5 (there are 4 nodes in total).
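For background, the check exists because etcd needs a majority quorum: a cluster of n members tolerates floor((n-1)/2) failures, so an even member count buys no extra fault tolerance over n-1 members. A quick illustration:

```shell
# Failures an etcd cluster of n members can survive: (n-1)/2, integer division.
for n in 1 2 3 4 5; do
    echo "members=$n tolerated_failures=$(( (n - 1) / 2 ))"
done
```

This is why the playbook insists on an odd member count; note that fix 2 above simply disables the guard rather than satisfying it.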


Error 4: swap not disabled

TASK [kubernetes/preinstall : Stop if swap enabled] **************************************************************

Thursday 01 February 2018  16:10:27 +0800 (0:00:00.095)       0:00:15.064 *****

fatal: [node1]: FAILED! => {

"assertion": "ansible_swaptotal_mb == 0",

Fix:

The TASK name [kubernetes/preinstall : Stop if swap enabled] identifies the directory of the corresponding YAML file:

cd /root/kubespray/roles/kubernetes/preinstall/tasks

grep swap *

verify-settings.yml:- name: Stop if swap enabled

.....

vim verify-settings.yml

Edit the swap check: 75 - name: Stop if swap enabled

76   assert:

77     that: ansible_swaptotal_mb == 0

78   when: kubelet_fail_swap_on|default(false)

79   ignore_errors: "{{ ignore_assert_errors }}"

Or, on every machine, disable swap:

swapoff -a 

free -m


Error 5: caused by DNS configuration

TASK [docker : check number of nameservers] **********************************************************************

Thursday 01 February 2018  16:46:08 +0800 (0:00:00.091)       0:07:27.328 *****

fatal: [node1]: FAILED! => {"changed": false, "msg": "Too many nameservers. You can relax this check by set docker_dns_servers_strict=no and we will only use the first 3."}

Fix:

cd /root/kubespray/roles/docker/tasks

grep nameserver *

vim set_facts_dns.yml

......

/etc/resolv.conf has too many nameserver entries.

As the message suggests, there are two fixes:

set docker_dns_servers_strict=no

or edit /etc/resolv.conf and keep at most two entries:

echo "nameserver 172.16.0.1" >/etc/resolv.conf
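Instead of overwriting the file, it can also be trimmed, keeping the first two nameserver lines. A sketch against a throwaway copy; on a real host, write the result back to /etc/resolv.conf:

```shell
# Demo resolv.conf with too many entries; the real file is /etc/resolv.conf.
cat > /tmp/resolv.demo <<'EOF'
nameserver 172.16.0.1
nameserver 8.8.8.8
nameserver 114.114.114.114
nameserver 1.1.1.1
EOF
# Keep only the first two nameserver lines.
grep '^nameserver' /tmp/resolv.demo | head -n 2 > /tmp/resolv.trimmed
cat /tmp/resolv.trimmed
```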


Error 6: playbook default configuration problem

RUNNING HANDLER [docker : Docker | reload docker] ****************************************************************

Thursday 01 February 2018  17:27:02 +0800 (0:00:00.090)       0:01:42.994 *****

fatal: [master]: FAILED! => {"changed": false, "msg": "Unable to restart service docker: Job for docker.service failed because the control process exited with error code. See \"systemctl status docker.service\" and \"journalctl -xe\" for details.\n"}

fatal: [node2]: ...

Fix:

cd /etc/systemd/system/docker.service.d

Two new config files were added here:

docker-dns.conf  docker-options.conf

systemctl status docker shows the following error:

Error starting daemon: error initializing graphdriver: /var/lib/docker contains several valid graphdrivers: aufs, overlay2; Please cleanup or explicitly choose storage driver (-s <DRIVER

Edit the config file (vim docker-options.conf) and add a --storage-driver=... setting:

[Service]

Environment="DOCKER_OPTS=--insecure-registry=172.28.2.2:4000 --graph=/var/lib/docker  --log-opt max-size=50m --log-opt max-file=5 \

--iptables=false --storage-driver=aufs"

Fix:

1) Edit the template file

/root/kubespray/roles/docker/templates

vim docker-options.conf.j2

[Service]

Environment="DOCKER_OPTS={{ docker_options | default('') }} \

--iptables=false  --storage-driver=aufs"

2) Edit the playbook YAML

vim /root/kubespray/inventory/group_vars/k8s-cluster.yml

136 docker_options: "--insecure-registry={{ kube_service_addresses }} --graph={{ docker_daemon_graph }}  {{ docker_log_opts }}"

Change it to:

136 docker_options: "--insecure-registry=172.28.2.2:4000 --graph={{ docker_daemon_graph }}  {{ docker_log_opts }}"

Prerequisite: edit the Docker configuration by hand and restart the service, so that Docker is healthy before the deployment:

vim /etc/systemd/system/docker.service.d/docker-options.conf

systemctl daemon-reload

systemctl restart docker


3. Deployment complete

Check the nodes:

kubectl get node

kubectl get pod -n kube-system

NAME                                    READY     STATUS             RESTARTS   AGE

calico-node-4gm72                       1/1       Running            11         13h

calico-node-8fkfk                       1/1       Running            0          13h

calico-node-fqdwj                       1/1       Running            16         13h

calico-node-lpdtx                       1/1       Running            15         13h

kube-apiserver-master                   1/1       Running            0          13h

.....

kube-dns-79d99cdcd5-5cw5b               0/3       ImagePullBackOff   0          13h

 

If a pod is in an error state, as shown above, describe it:

kubectl describe pod kube-dns-79d99cdcd5-5cw5b -n kube-system

then find the image behind the error and pull it manually. Relevant fields:

Name:           kube-dns-79d99cdcd5-5cw5b

Namespace:      kube-system

Node:           node3/172.28.2.213

Image:        **********
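The failing image can be picked out of the describe output with awk. A sketch against a saved copy of the output; the image name below is only a plausible sample (on a live cluster, pipe kubectl describe straight into the awk instead):

```shell
# Captured sample of `kubectl describe pod ... -n kube-system` output.
cat > /tmp/describe.txt <<'EOF'
Name:           kube-dns-79d99cdcd5-5cw5b
Namespace:      kube-system
Node:           node3/172.28.2.213
    Image:          gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7
EOF
# Print the second field of every Image: line.
awk '/^[[:space:]]*Image:/ {print $2}' /tmp/describe.txt
```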

root@master:~/kubespray# kubectl get nodes
NAME      STATUS    ROLES         AGE       VERSION
master    Ready     master        33d       v1.9.0+coreos.0
node1     Ready     node          33d       v1.9.0+coreos.0
node2     Ready     node          33d       v1.9.0+coreos.0
node3     Ready     node          33d       v1.9.0+coreos.0

root@master:~/kubespray# kubectl get pod -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
calico-node-4gm72                       1/1       Running   52         33d
calico-node-8fkfk                       1/1       Running   0          33d
calico-node-fqdwj                       1/1       Running   53         33d
calico-node-lpdtx                       1/1       Running   47         33d
kube-apiserver-master                   1/1       Running   0          33d
kube-apiserver-node4                    1/1       Running   224        18d
kube-controller-manager-master          1/1       Running   0          33d
kube-controller-manager-node4           1/1       Running   5          18d
kube-dns-79d99cdcd5-6vvrw               3/3       Running   0          18d
kube-dns-79d99cdcd5-rkpf2               3/3       Running   0          18d
kube-proxy-master                       1/1       Running   0          33d
kube-proxy-node1                        1/1       Running   0          32d
kube-proxy-node2                        1/1       Running   0          18d
kube-proxy-node3                        1/1       Running   0          32d
kube-scheduler-master                   1/1       Running   0          33d
kubedns-autoscaler-5564b5585f-7z62x     1/1       Running   0          18d
kubernetes-dashboard-6bbb86ffc4-zmmc2   1/1       Running   0          18d
nginx-proxy-node1                       1/1       Running   0          32d
nginx-proxy-node2                       1/1       Running   0          18d
nginx-proxy-node3                       1/1       Running   0          32d

  

III. Scaling the cluster

1. Edit the inventory file and add the new host node4 to the master and/or node groups

[all]

master  ansible_host=172.28.2.210 ip=172.28.2.210 ansible_user=root

node1   ansible_host=172.28.2.211 ip=172.28.2.211 ansible_user=root

node2   ansible_host=172.28.2.212 ip=172.28.2.212 ansible_user=root

node3   ansible_host=172.28.2.213 ip=172.28.2.213 ansible_user=root

node4   ansible_host=172.28.2.214 ip=172.28.2.214 ansible_user=root

[kube-master]

master

node4

[kube-node]

node1

node2

node3

node4

[etcd]

master

[k8s-cluster:children]

kube-node

kube-master

2. Install

ssh-copy-id 172.28.2.214

# install Python 2.7.12

apt-add-repository ppa:ansible/ansible

apt-get install python python-netaddr

# on all machines: disable swap
swapoff -a

echo "nameserver 172.16.0.1" >/etc/resolv.conf

Run:

cd /root/kubespray/

ansible-playbook -i inventory/inventory.cfg cluster.yml --limit node4

3. Done

root@master:~/kubespray# kubectl get nodes
NAME      STATUS    ROLES         AGE       VERSION
master    Ready     master        33d       v1.9.0+coreos.0
node1     Ready     node          33d       v1.9.0+coreos.0
node2     Ready     node          33d       v1.9.0+coreos.0
node3     Ready     node          33d       v1.9.0+coreos.0
node4     Ready     master,node   33d       v1.9.0+coreos.0

root@master:~/kubespray# kubectl get pod -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
calico-node-4gm72                       1/1       Running   52         33d
calico-node-8fkfk                       1/1       Running   0          33d
calico-node-fqdwj                       1/1       Running   53         33d
calico-node-lpdtx                       1/1       Running   47         33d
calico-node-nq8l2                       1/1       Running   42         33d
kube-apiserver-master                   1/1       Running   0          33d
kube-apiserver-node4                    1/1       Running   224        18d
kube-controller-manager-master          1/1       Running   0          33d
kube-controller-manager-node4           1/1       Running   5          18d
kube-dns-79d99cdcd5-6vvrw               3/3       Running   0          18d
kube-dns-79d99cdcd5-rkpf2               3/3       Running   0          18d
kube-proxy-master                       1/1       Running   0          33d
kube-proxy-node1                        1/1       Running   0          32d
kube-proxy-node2                        1/1       Running   0          18d
kube-proxy-node3                        1/1       Running   0          32d
kube-proxy-node4                        1/1       Running   0          18d
kube-scheduler-master                   1/1       Running   0          33d
kube-scheduler-node4                    1/1       Running   3          18d
kubedns-autoscaler-5564b5585f-7z62x     1/1       Running   0          18d
kubernetes-dashboard-6bbb86ffc4-zmmc2   1/1       Running   0          18d
nginx-proxy-node1                       1/1       Running   0          32d
nginx-proxy-node2                       1/1       Running   0          18d
nginx-proxy-node3                       1/1       Running   0          32d

  

 

Installing a production-grade Kubernetes cluster with Kubespray, without pitfalls: http://www.wisely.top/2017/07/01/no-problem-kubernetes-kuberspay/

kubespray (ansible) automated k8s cluster installation: https://www.cnblogs.com/iiiiher/p/8128184.html



Reprinted from blog.csdn.net/xsjzdrxsjzdr/article/details/80703074