Docker Swarm and k8s

Part 1: Docker Swarm

Docker Swarm, like Docker Compose, is an official Docker container-orchestration project. The difference is that Docker Compose is a tool for creating multiple containers on a single server or host, while Docker Swarm builds a container cluster service across multiple servers or hosts. For deploying microservices, Docker Swarm is clearly the better fit.

Features
A cluster management tool integrated with the Docker engine.
Decentralized design: every kind of node can be created with nothing but the Docker engine.
Declarative service model: applications are defined declaratively (a small sketch follows this list).
Scaling: the manager node adjusts the number of service tasks on demand.
High availability: the swarm manager continuously monitors the cluster state and reconciles it with the desired state; if a service drifts from what was declared, the manager reschedules tasks until the desired state is reached.
Custom networks: a service can be attached to a network of your choice, and containers get an IP on it when they are created.
Service discovery: the manager gives every service in the cluster its own DNS name and load-balances across its running containers.
Load balancing: service ports can be exposed to an external load balancer, and internally the swarm provides configurable policies for placing containers on nodes.
Secure by default: nodes in the swarm enforce mutual TLS authentication and encrypted communication, and you can supply your own root certificate.
Rolling updates: updates are applied incrementally with a configurable delay between nodes, and can be rolled back to the previous version if something goes wrong.
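
As a small illustration of the declarative model (a hedged sketch, not part of the experiment below; the file name web-stack.yml and the stack name demo are made up here), a service can be described in a file and the manager then converges the cluster to that desired state:

cat > web-stack.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:latest
    deploy:
      replicas: 3           # desired state: three replicas of the task
    ports:
      - "80:80"
EOF
docker stack deploy -c web-stack.yml demo    # the manager schedules tasks until the declared state is met
docker stack rm demo                         # tear the stack down again
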
1: On each node, delete the extra networks, keeping only the built-in ones.
[root@server1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8f6867a123d2        bridge              bridge              local
21aeb8ed91db        host                host                local
0e90fe975352        mac_net1            macvlan             local
06e3ac27f2bc        mac_net2            macvlan             local
d16cd2cc5f88        mac_net3            macvlan             local
e2363758dfbb        none                null                local

[root@server1 ~]# docker network rm mac_net1
mac_net1
[root@server1 ~]# docker network rm mac_net2
mac_net2
[root@server1 ~]# docker network rm mac_net3
mac_net3
[root@server1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8f6867a123d2        bridge              bridge              local
21aeb8ed91db        host                host                local
e2363758dfbb        none                null                local


2: Remove the containers from the earlier docker-compose experiment.
[root@server1 compose]# docker-compose stop
Stopping compose_haproxy_1 ... done
Stopping compose_web1_1    ... done
Stopping compose_web2_1    ... done
[root@server1 compose]# docker-compose rm
Going to remove compose_haproxy_1, compose_web1_1, compose_web2_1
Are you sure? [yN] y
Removing compose_haproxy_1 ... done
Removing compose_web1_1    ... done
Removing compose_web2_1    ... done

3: Start docker on all nodes and initialize the swarm on server1.
[root@server1 ~]# systemctl start docker
[root@server2 ~]# systemctl start docker
[root@server3 ~]# systemctl start docker
[root@server1 ~]# docker swarm init    ## initialize the swarm
Swarm initialized: current node (f3defun6m22govb3xu9ro8hx6) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-664e5jii8c2uqbyk4eopqjrctt5dkekwhatrpv49u6pat0gdpe-7nnkzyyiagofurcei15l6es08 172.25.60.1:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
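
If the worker join command printed above is lost later, it can be reprinted on the manager at any time; a quick sketch:

docker swarm join-token worker             # reprints the full 'docker swarm join ...' command with a valid token
docker swarm join-token --rotate worker    # optional: invalidate the old worker token and issue a new one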

4: Join server2 and server3 to server1 as workers.
[root@server2 ~]# docker swarm join --token SWMTKN-1-664e5jii8c2uqbyk4eopqjrctt5dkekwhatrpv49u6pat0gdpe-7nnkzyyiagofurcei15l6es08 172.25.60.1:2377
This node joined a swarm as a worker.

[root@server3 ~]# docker swarm join --token SWMTKN-1-664e5jii8c2uqbyk4eopqjrctt5dkekwhatrpv49u6pat0gdpe-7nnkzyyiagofurcei15l6es08 172.25.60.1:2377
This node joined a swarm as a worker.

5: List the nodes on server1; server1 is the Leader.
[root@server1 ~]# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
f3defun6m22govb3xu9ro8hx6 *   server1             Ready               Active              Leader              18.06.1-ce
9a0ogm0p3622h8bukq3pvz5em     server2             Ready               Active                                  18.06.1-ce
0rzoxvx6bnbh21isb2u1j0f9g     server3             Ready               Active                                  18.06.1-ce

At this point a basic Docker Swarm cluster is up and running.

6: Load the nginx image on every node.
[root@server1 ~]# docker load -i nginx.tar
[root@server2 ~]# docker load -i nginx.tar
[root@server3 ~]# docker load -i nginx.tar

7: New networks now show up (docker_gwbridge and the swarm ingress overlay):
[root@server1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8f6867a123d2        bridge              bridge              local
cd59f23af298        docker_gwbridge     bridge              local
21aeb8ed91db        host                host                local
gggv3csckzhx        ingress             overlay             swarm
e2363758dfbb        none                null                local

8: Create the service.
[root@server1 images]# docker service create --name web --replicas 3 -p 80:80 nginx     # create the service with three replicas; the tasks are scheduled across the swarm nodes automatically
[root@server1 ~]# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
pju8xi10dvir        web                 replicated          3/3                 nginx:latest        *:80->80/tcp
[root@server1 ~]# docker service ps web
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE                ERROR               PORTS
jalnx8ezw77l        web.1               nginx:latest        server3             Running             Running about a minute ago                       
egsww4odsckc        web.2               nginx:latest        server1             Running             Running about a minute ago                       
ldjxxbpjuovj        web.3               nginx:latest        server2             Running             Running about a minute ago 

9: Accessing 172.25.60. from a browser at this point shows the default nginx page.

10: Write a default index page for each container.
[root@server1 ~]# vim index.html
<h1>server1</h1>
[root@server1 ~]# docker cp index.html 44510e0c8510:/usr/share/nginx/html
[root@server2 ~]# vim index.html
[root@server2 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
e61307169d38        nginx:latest        "nginx -g 'daemon of…"   8 minutes ago       Up 8 minutes        80/tcp              web.3.ldjxxbpjuovjfxwhfsjprzp29
[root@server2 ~]# docker cp index.html e61307169d38:/usr/share/nginx/html
[root@server3 ~]# vim index.html
[root@server3 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
3f1e6e809a32        nginx:latest        "nginx -g 'daemon of…"   9 minutes ago       Up 8 minutes        80/tcp              web.1.jalnx8ezw77l5snwu0m75sroo
[root@server3 ~]# docker cp index.html 3f1e6e809a32:/usr/share/nginx/html

11: Simulate a remote client from the host machine.
[root@foundation60 Desktop]# for i in {1..10};do curl 172.25.60.1;done
<h1>server3</h1>
<h1>server2</h1>
<h1>server1</h1>
<h1>server3</h1>
<h1>server2</h1>
<h1>server1</h1>
<h1>server3</h1>
<h1>server2</h1>
<h1>server1</h1>
<h1>server3</h1>
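
Because the port is published through the swarm ingress routing mesh, the service should answer on the worker addresses as well, not only on the manager; a quick hedged check (server2 and server3 are 172.25.60.2 and 172.25.60.3, as used for scp later):

curl 172.25.60.2    # a request to a worker IP is forwarded to a service task through the routing mesh
curl 172.25.60.3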

12: Scale up the number of containers; they are distributed evenly across the swarm nodes.
[root@server1 ~]# docker service scale web=6
web scaled to 6
overall progress: 6 out of 6 tasks 
1/6: running   
2/6: running   
3/6: running   
4/6: running   
5/6: running   
6/6: running   
verify: Service converged 
[root@server1 ~]# docker service ps web
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE            ERROR               PORTS
jalnx8ezw77l        web.1               nginx:latest        server3             Running             Running 29 minutes ago                       
egsww4odsckc        web.2               nginx:latest        server1             Running             Running 29 minutes ago                       
ldjxxbpjuovj        web.3               nginx:latest        server2             Running             Running 29 minutes ago                       
amfhzhhz42s1        web.4               nginx:latest        server1             Running             Running 18 seconds ago                       
msg3itotlq8i        web.5               nginx:latest        server2             Running             Running 19 seconds ago                       
03kzi94wli1y        web.6               nginx:latest        server3             Running             Running 19 seconds ago  

On server1:
[root@server1 ~]# docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED              STATUS              PORTS               NAMES
95ff75a0119d        nginx:latest           "nginx -g 'daemon of…"   About a minute ago   Up About a minute   80/tcp              web.4.amfhzhhz42s1twdw41tmow38e
44510e0c8510        nginx:latest           "nginx -g 'daemon of…"   30 minutes ago       Up 30 minutes       80/tcp              web.2.egsww4odsckc7qyf8ley6obyu
On server2:
[root@server2 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
e1d4e60df17d        nginx:latest        "nginx -g 'daemon of…"   44 seconds ago      Up 40 seconds       80/tcp              web.5.msg3itotlq8is7vtl2yzcou3w
e61307169d38        nginx:latest        "nginx -g 'daemon of…"   29 minutes ago      Up 29 minutes       80/tcp              web.3.ldjxxbpjuovjfxwhfsjprzp29
On server3:
[root@server3 ~]# docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS               NAMES
0a317fca54a9        nginx:latest        "nginx -g 'daemon of…"   2 minutes ago       Up 2 minutes        80/tcp              web.6.03kzi94wli1yxxpbqiswiygjq
3f1e6e809a32        nginx:latest        "nginx -g 'daemon of…"   31 minutes ago      Up 31 minutes       80/tcp              web.1.jalnx8ezw77l5snwu0m75sroo
Each node is now running two web containers.

13: Write a default index page into each newly created container.
[root@server1 ~]# vim index.html 
<h1>server4</h1>
[root@server1 ~]# docker cp index.html 95ff75a0119d:/usr/share/nginx/html
[root@server2 ~]# vim index.html 
<h1>server5</h1>
[root@server2 ~]# docker cp index.html e1d4e60df17d:/usr/share/nginx/html
[root@server3 ~]# vim index.html
<h1>server6</h1>
[root@server3 ~]# docker cp index.html 0a317fca54a9:/usr/share/nginx/html

14: Simulating a remote client from the host again, the newly created containers are reachable as well.
[root@foundation60 Desktop]# for i in {1..10};do curl 172.25.60.1;done
<h1>server1</h1>
<h1>server3</h1>
<h1>server2</h1>
<h1>server4</h1>
<h1>server6</h1>
<h1>server5</h1>
<h1>server1</h1>
<h1>server3</h1>
<h1>server2</h1>
<h1>server4</h1>


15: Set up a web page for monitoring the cluster (the visualizer image).
(1) Load the image
[root@server1 ~]# docker load -i visualizer.tar 
(2) Create the visualizer service
[root@server1 ~]# docker service create \
>   --name=viz \
>   --publish=8080:8080/tcp \
>   --constraint=node.role==manager \
>   --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
>   dockersamples/visualizer
image dockersamples/visualizer:latest could not be accessed on a registry to record
its digest. Each node will access dockersamples/visualizer:latest independently,
possibly leading to different nodes running different
versions of the image.

pase42gm25p02vwyypurhf39q
overall progress: 0 out of 1 tasks  
1/1: running   
verify: Service converged 
[root@server1 ~]# docker ps  ## check the running visualizer container
CONTAINER ID        IMAGE                             COMMAND                  CREATED             STATUS                        PORTS               NAMES
00b627fe7664        dockersamples/visualizer:latest   "npm start"              2 minutes ago       Up About a minute (healthy)   8080/tcp            viz.1.kc653w1ypvig5n8oiyn1o5cxq

16: Open 172.25.60.1:8080
The visualizer's monitoring web page is shown there.

17: Scale the container count again; the change is also visible on the web page.
[root@server1 ~]# docker service scale web=30
web scaled to 30
overall progress: 30 out of 30 tasks 
verify: Service converged 

18: The resulting containers are spread evenly across the nodes.
[root@server3 ~]# docker ps -aq|wc -l
10
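
The per-node spread can also be checked in one command from the manager; a small sketch, assuming this Docker version supports --format for docker service ps (18.06 does):

docker service ps web --format '{{.Node}}' | sort | uniq -c    # count running tasks per node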


19: Recreate the web service using a custom network.
[root@server1 ~]# docker service rm web    ## remove the web service that was created earlier on the default network
web
At this point the containers have all disappeared from the visualizer web page as well:
[root@server3 ~]# docker ps -aq|wc -l
0




20: Create a custom overlay network.
[root@server1 ~]# docker network create -d overlay my_net1
[root@server1 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8f6867a123d2        bridge              bridge              local
cd59f23af298        docker_gwbridge     bridge              local
21aeb8ed91db        host                host                local
gggv3csckzhx        ingress             overlay             swarm
zd161ws3w9vz        my_net1             overlay             swarm
e2363758dfbb        none                null                local

21: Create the service on the custom network.
[root@server1 ~]# docker service create --name web  --network my_net1 --publish 80:80 --replicas 3 nginx
image nginx:latest could not be accessed on a registry to record
its digest. Each node will access nginx:latest independently,
possibly leading to different nodes running different
versions of the image.

l3j83o37k9t717z7rvvwfv81y
overall progress: 3 out of 3 tasks 
1/3: running   
2/3: running   
3/3: running   
verify: Service converged 



22: The service can now be accessed from the browser again.

23: Rolling update (updating the image used by the service).
Load the new image on every node first. Not just any image will do: it has to be a complete image with a startup script or similar, so that the container has a long-running process.
[root@foundation60 ~]# scp 66rhel7.tar [email protected]:
[email protected]'s password: 
66rhel7.tar                                   100%   24MB  24.1MB/s   00:00    

[root@foundation60 ~]# scp 66rhel7.tar [email protected]:
[email protected]'s password: 
66rhel7.tar                                   100%   24MB  24.1MB/s   00:01    

[root@foundation60 ~]# scp 66rhel7.tar [email protected]:
[email protected]'s password: 
66rhel7.tar                                   100%   24MB  24.1MB/s   00:00    


[root@server1 ~]# docker load -i 66rhel7.tar 
668afdbd4462: Loading layer  18.39MB/18.39MB
b3cc8face1a9: Loading layer  6.838MB/6.838MB
Loaded image: rhel7:v5
[root@server2 ~]# docker load -i 66rhel7.tar 
668afdbd4462: Loading layer  18.39MB/18.39MB
b3cc8face1a9: Loading layer  6.838MB/6.838MB
Loaded image: rhel7:v5
[root@server3 ~]# docker load -i 66rhel7.tar 
668afdbd4462: Loading layer  18.39MB/18.39MB
b3cc8face1a9: Loading layer  6.838MB/6.838MB
Loaded image: rhel7:v5

Run the update:
[root@server1 ~]# docker service update --image rhel7:v5 --update-delay 2s --update-parallelism 1 web
image rhel7:v5 could not be accessed on a registry to record
its digest. Each node will access rhel7:v5 independently,
possibly leading to different nodes running different
versions of the image.

web
overall progress: 3 out of 3 tasks 
1/3: running   
2/3: running   
3/3: running   
verify: Service converged 

24: Checking from the browser again, the image has been updated.
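
If the new image misbehaves, the service can be reverted to its previous specification, which is the rollback capability mentioned in the feature list; a small sketch:

docker service update --rollback web    # roll web back to its previous image and settings
docker service ps web                   # confirm the tasks are running the old image again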

Part 2: k8s    ## watch the memory size: on a 7-series system each node needs at least 1024M, otherwise the initialization will not go through

k8s is a container orchestration tool; in fact it manages the full lifecycle of an application. Creating an application, deploying it, serving traffic, scaling it out and in, and updating it are all very convenient, and it can self-heal: if a server goes down, the services running on it are automatically rescheduled onto another host without any manual intervention. Which raises the question: what do we still need ops for?

    k8s makes it faster to roll out new versions and package applications; updates can be done without interrupting service, and server failures do not mean downtime. Moving from development to test to production is extremely easy: one configuration file does it, build the image once, run it anywhere.

1: Install the required packages; every node needs them.
[root@server1 k8s]# yum install -y kubeadm-1.12.2-0.x86_64.rpm kubelet-1.12.2-0.x86_64.rpm kubectl-1.12.2-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm cri-tools-1.12.0-0.x86_64.rpm 
[root@server2 k8s]# yum install -y kubeadm-1.12.2-0.x86_64.rpm kubelet-1.12.2-0.x86_64.rpm kubectl-1.12.2-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm cri-tools-1.12.0-0.x86_64.rpm
[root@server3 k8s]# yum install -y kubeadm-1.12.2-0.x86_64.rpm kubelet-1.12.2-0.x86_64.rpm kubectl-1.12.2-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm cri-tools-1.12.0-0.x86_64.rpm  

2: Disable swap on every node and comment out the swap entry in /etc/fstab so it stays off after reboot (a non-interactive one-liner is sketched after this step).
[root@server1 ~]# swapoff -a
[root@server1 ~]# vim /etc/fstab 
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
[root@server2 ~]# swapoff -a
[root@server2 ~]# vim /etc/fstab 
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
[root@server3 ~]# swapoff -a
[root@server3 ~]# vim /etc/fstab 
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
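
The fstab edit can also be done non-interactively; a one-liner sketch, assuming GNU sed and that the only uncommented line containing the word swap is the swap entry:

swapoff -a
sed -r -i.bak 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab    # comment out the active swap entry (a .bak backup is kept)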

3: Enable the kubelet service at boot on every node.
[root@server1 k8s]# systemctl enable kubelet
[root@server1 k8s]# systemctl start kubelet    ## if you check now, the service is not actually running yet; that is fine, it will start automatically once the node is initialized
[root@server2 k8s]# systemctl enable kubelet
[root@server2 k8s]# systemctl start kubelet
[root@server3 k8s]# systemctl enable kubelet
[root@server3 k8s]# systemctl start kubelet

4: List the images that will be needed.
[root@server1 k8s]# kubeadm config images list
I0326 18:34:13.911184    1447 version.go:93] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0326 18:34:13.912283    1447 version.go:94] falling back to the local client version: v1.12.2
k8s.gcr.io/kube-apiserver:v1.12.2
k8s.gcr.io/kube-controller-manager:v1.12.2
k8s.gcr.io/kube-scheduler:v1.12.2
k8s.gcr.io/kube-proxy:v1.12.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2
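
The tar files loaded in the next step were presumably produced on a machine that can reach k8s.gcr.io; a sketch of one way to pull and export them (the file name k8s-images.tar is made up here):

for img in $(kubeadm config images list 2>/dev/null); do
    docker pull "$img"
done
docker save -o k8s-images.tar $(kubeadm config images list 2>/dev/null)
# copy k8s-images.tar to each node and load it there with: docker load -i k8s-images.tar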

5: Then load these images on every node.
[root@server2 ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/coredns                   1.2.2               95b66263fd52        4 months ago        39.2MB
k8s.gcr.io/etcd                      3.2.24              b57e69295df1        4 months ago        220MB
k8s.gcr.io/pause                     3.1                 6ce64a260657        4 months ago        742kB
k8s.gcr.io/kube-proxy                v1.12.2             96eaf5076bfe        4 months ago        96.5MB
k8s.gcr.io/kube-scheduler            v1.12.2             a84dd4efbe5f        4 months ago        58.3MB
k8s.gcr.io/kube-controller-manager   v1.12.2             b9a2d5b91fd6        4 months ago        164MB
k8s.gcr.io/kube-apiserver            v1.12.2             6e3fa7b29763        4 months ago        194MB
quay.io/coreos/flannel               v0.10.0-amd64       f0fad859c909        14 months ago       44.6MB


6: Initialize on the master (server1).
[root@server1 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.25.60.1
Your Kubernetes master has initialized successfully!  ## this line means the init succeeded


To start using your cluster, you need to run the following as a regular user: ## configuration needed on the master node

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.25.60.1:6443 --token cm8o9l.l9wufcegqa1sqwzy --discovery-token-ca-cert-hash sha256:2aa80a8dec5e2fc748a4139ae707f182fa445be116ccd76f9aabe7f40fc7c749  ## the command the worker nodes run to join


7: Create a user and grant it sudo privileges.
[root@server1 ~]# useradd k8s

[root@server1 ~]# vim /etc/sudoers
k8s     ALL=(ALL)       NOPASSWD: ALL

8: Switch to the k8s user and set up kubectl.
[root@server1 ~]# su k8s
[k8s@server1 ~]$ mkdir -p $HOME/.kube
[k8s@server1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[k8s@server1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[k8s@server1 ~]$ echo "source <(kubectl completion bash)" >> ./.bashrc

9: Copy the yml files to the k8s user.
[root@server1 k8s]# cp *.yml /home/k8s/
Apply the flannel network add-on:
[k8s@server1 ~]$ kubectl apply -f kube-flannel.yml 
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
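
Before joining the workers it is worth waiting until the flannel and coredns pods are actually up; a quick check:

kubectl get pods -n kube-system -o wide    # the kube-flannel and coredns pods should reach Running
kubectl get nodes                          # the master should report Ready once the network add-on is up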

10: On the worker node, check the swap status (swap has to stay off).
[root@server2 ~]# swapon -s
Joining the workers:        ### if a node cannot join
Possible causes:

    the clocks on the nodes are not synchronized
    the master was restarted after initialization (the old token may no longer be valid)
After dealing with that the join works, but the admin config has to be copied into place again:
[k8s@server1 k8s]$ mkdir -p $HOME/.kube
[k8s@server1 k8s]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: overwrite ‘/home/k8s/.kube/config’? y
[k8s@server1 k8s]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
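
The "token ... is invalid for this cluster or it has expired" error shown below can also be solved by issuing a fresh token on the master instead of reusing the old one, as the error message itself suggests; a small sketch:

kubeadm token list                           # show existing tokens and when they expire
kubeadm token create --print-join-command    # print a complete 'kubeadm join ...' command with a new token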


[root@server2 ~]#   kubeadm join 172.25.60.1:6443 --token 4xphl7.pj5bsgsz776pvnch --discovery-token-ca-cert-hash sha256:481e23679a9af4e8f46b9c7044267364482f193308f0898ad2e9d0302721ccc0
[preflight] running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[discovery] Trying to connect to API Server "172.25.60.1:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.25.60.1:6443"
[discovery] Failed to connect to API Server "172.25.60.1:6443": token id "4xphl7" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
[discovery] Trying to connect to API Server "172.25.60.1:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.25.60.1:6443"
[discovery] Failed to connect to API Server "172.25.60.1:6443": token id "4xphl7" is invalid for this cluster or it has expired. Use "kubeadm token create" on the master node to creating a new valid token
^C
[root@server2 ~]# kubeadm reset
[reset] WARNING: changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] are you sure you want to proceed? [y/N]: y
[preflight] running pre-flight checks
[reset] stopping the kubelet service
[reset] unmounting mounted directories in "/var/lib/kubelet"
[reset] no etcd manifest found in "/etc/kubernetes/manifests/etcd.yaml". Assuming external etcd
[reset] please manually reset etcd to prevent further issues
[reset] deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes]
[reset] deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[root@server2 ~]#   kubeadm join 172.25.60.1:6443 --token 4xphl7.pj5bsgsz776pvnch --discovery-token-ca-cert-hash sha256:481e23679a9af4e8f46b9c7044267364482f193308f0898ad2e9d0302721ccc0
[preflight] running pre-flight checks
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[discovery] Trying to connect to API Server "172.25.60.1:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.25.60.1:6443"
[discovery] Requesting info from "https://172.25.60.1:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.25.60.1:6443"
[discovery] Successfully established connection with API Server "172.25.60.1:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "server2" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.     ### the node has now been added successfully



11: List the nodes.
[k8s@server1 root]$ kubectl get nodes
And the pod status:
[k8s@server1 root]$ kubectl get pod --all-namespaces




 


Reposted from blog.csdn.net/yinzhen_boke_0321/article/details/88874681