Kubernetes Primer (Part 5): Flannel Network Deployment and Testing

Flannel Network Deployment and Testing

K8S itself provides no networking, so a third-party network plugin is needed to build the cluster network and make containers on different nodes reachable from one another.
A Pod is a logical concept in K8S: K8S manages Pods, and a Pod can contain multiple containers, which talk to each other over localhost. Each Pod needs an IP address, and every Pod carries labels.
POD –> RC –> RS –> Deployment
A Deployment represents one update operation applied to the K8S cluster, and it is an API object with a broader scope than an RS.
It can create a new service, update an existing one, or perform a rolling upgrade of a service. A rolling upgrade is really a composite operation: create a new RS, scale the new RS up to the desired replica count, and scale the old RS down to 0.
Such a composite operation is hard to describe with a single RS, so the more general Deployment is used instead.
RC, RS, and Deployment only guarantee the number of Pods backing a service; they do not answer how those services are accessed. A Pod is just one running instance of a service: it can stop on one node at any time and be replaced by a new Pod with a new IP on another node, so it cannot serve at a fixed IP and port.
Providing a service reliably requires service discovery and load balancing. Service discovery maps the service a client wants to reach to the corresponding backend instances.
In a K8S cluster, the object a client accesses is the Service. Each Service is given a virtual IP that is valid inside the cluster, and the cluster reaches the service through that virtual IP.
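To make the Service idea concrete, here is a minimal manifest sketch. It is not part of the original walkthrough; the name is illustrative, and the selector matches the label kubectl run would apply to the net-test Pods created later in this article:

apiVersion: v1
kind: Service
metadata:
  name: net-test-svc          # illustrative name, not created in this tutorial
spec:
  selector:
    run: net-test             # selects Pods carrying this label
  ports:
  - port: 80                  # cluster-internal virtual-IP port
    targetPort: 80            # container port on the backing Pods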

1. Flannel Network Deployment

(1) Create the CSR file for Flannel
[root@linux-node1 ssl]# vim flanneld-csr.json
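The file contents were not captured in the original notes; a typical flanneld-csr.json for this kind of deployment looks like the following (the values under names are assumptions and should match whatever was used for the cluster CA):

{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}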
(2) Generate the certificate
[root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
>    -ca-key=/opt/kubernetes/ssl/ca-key.pem \
>    -config=/opt/kubernetes/ssl/ca-config.json \
>    -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
[root@linux-node1 ssl]# ll flannel*
-rw-r--r-- 1 root root  997 May 31 11:13 flanneld.csr
-rw-r--r-- 1 root root  221 May 31 11:13 flanneld-csr.json
-rw------- 1 root root 1675 May 31 11:13 flanneld-key.pem
-rw-r--r-- 1 root root 1391 May 31 11:13 flanneld.pem
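As an optional sanity check, openssl can print the new certificate's subject and validity window:

[root@linux-node1 ssl]# openssl x509 -in flanneld.pem -noout -subject -dates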
(3) Distribute the certificates
[root@linux-node1 ssl]# cp flanneld*.pem /opt/kubernetes/ssl/
[root@linux-node1 ssl]# scp flanneld*.pem 192.168.56.120:/opt/kubernetes/ssl/
flanneld-key.pem                                           100% 1675   127.2KB/s   00:00    
flanneld.pem                                               100% 1391   308.3KB/s   00:00    
[root@linux-node1 ssl]# scp flanneld*.pem 192.168.56.130:/opt/kubernetes/ssl/
flanneld-key.pem                                           100% 1675   291.1KB/s   00:00    
flanneld.pem                                               100% 1391    90.4KB/s   00:00    
(4) Download the Flannel package
[root@linux-node1 ~]# cd /usr/local/src
[root@linux-node1 src]# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
[root@linux-node1 src]# tar zxf flannel-v0.10.0-linux-amd64.tar.gz
[root@linux-node1 src]# cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/
Copy the binaries to the linux-node2 and linux-node3 nodes:
[root@linux-node1 src]# scp flanneld mk-docker-opts.sh 192.168.56.120:/opt/kubernetes/bin/
[root@linux-node1 src]# scp flanneld mk-docker-opts.sh 192.168.56.130:/opt/kubernetes/bin/

Also copy the remove-docker0.sh helper script into /opt/kubernetes/bin on each node.
[root@linux-node1 ~]# cd /usr/local/src/kubernetes/cluster/centos/node/bin/
[root@linux-node1 bin]# cp remove-docker0.sh /opt/kubernetes/bin/
[root@linux-node1 bin]# scp remove-docker0.sh 192.168.56.120:/opt/kubernetes/bin/
[root@linux-node1 bin]# scp remove-docker0.sh 192.168.56.130:/opt/kubernetes/bin/
(5) Configure Flannel
[root@linux-node1 ~]# vim /opt/kubernetes/cfg/flannel
FLANNEL_ETCD="-etcd-endpoints=https://192.168.56.110:2379,https://192.168.56.120:2379,https://192.168.56.130:2379"
FLANNEL_ETCD_KEY="-etcd-prefix=/kubernetes/network"
FLANNEL_ETCD_CAFILE="--etcd-cafile=/opt/kubernetes/ssl/ca.pem"
FLANNEL_ETCD_CERTFILE="--etcd-certfile=/opt/kubernetes/ssl/flanneld.pem"
FLANNEL_ETCD_KEYFILE="--etcd-keyfile=/opt/kubernetes/ssl/flanneld-key.pem"
Copy the configuration to the other nodes:
[root@linux-node1 ~]# scp /opt/kubernetes/cfg/flannel 192.168.56.120:/opt/kubernetes/cfg/
[root@linux-node1 ~]# scp /opt/kubernetes/cfg/flannel 192.168.56.130:/opt/kubernetes/cfg/
(6) Create the Flannel systemd service
[root@linux-node1 ~]# vim /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/flannel
ExecStartPre=/opt/kubernetes/bin/remove-docker0.sh
ExecStart=/opt/kubernetes/bin/flanneld ${FLANNEL_ETCD} ${FLANNEL_ETCD_KEY} ${FLANNEL_ETCD_CAFILE} ${FLANNEL_ETCD_CERTFILE} ${FLANNEL_ETCD_KEYFILE}
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -d /run/flannel/docker

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
Copy the unit file to the other nodes:
# scp /usr/lib/systemd/system/flannel.service 192.168.56.120:/usr/lib/systemd/system/
# scp /usr/lib/systemd/system/flannel.service 192.168.56.130:/usr/lib/systemd/system/
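Optionally, lint the unit file before enabling it; systemd-analyze ships with the systemd on CentOS 7:

[root@linux-node1 ~]# systemd-analyze verify /usr/lib/systemd/system/flannel.service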

2. Flannel CNI Integration

(1) Download the CNI plugins
Release page: https://github.com/containernetworking/plugins/releases
[root@linux-node1 src]# wget https://github.com/containernetworking/plugins/releases/download/v0.7.1/cni-plugins-amd64-v0.7.1.tgz
[root@linux-node1 ~]# mkdir /opt/kubernetes/bin/cni
[root@linux-node2 ~]# mkdir /opt/kubernetes/bin/cni
[root@linux-node3 ~]# mkdir /opt/kubernetes/bin/cni
[root@linux-node1 src]# tar zxf cni-plugins-amd64-v0.7.1.tgz -C /opt/kubernetes/bin/cni
[root@linux-node1 src]# scp -r /opt/kubernetes/bin/cni/* 192.168.56.120:/opt/kubernetes/bin/cni/
[root@linux-node1 src]# scp -r /opt/kubernetes/bin/cni/* 192.168.56.130:/opt/kubernetes/bin/cni/
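After extraction, /opt/kubernetes/bin/cni should hold the standard plugin binaries (bridge, flannel, host-local, loopback, portmap, and so on); a quick listing confirms the copy succeeded:

[root@linux-node1 src]# ls /opt/kubernetes/bin/cni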
(2) Create the etcd key

This step defines the Pod network range and stores it in etcd; flanneld then reads it from etcd and allocates a per-node subnet out of it.

[root@linux-node1 src]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
      --no-sync -C https://192.168.56.110:2379,https://192.168.56.120:2379,https://192.168.56.130:2379 \
mk /kubernetes/network/config '{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}' >/dev/null 2>&1
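Because the redirect above discards both output and errors, it is worth reading the key back with the same TLS flags to confirm it was written; the command should echo the JSON document that was just stored:

[root@linux-node1 src]# /opt/kubernetes/bin/etcdctl --ca-file /opt/kubernetes/ssl/ca.pem --cert-file /opt/kubernetes/ssl/flanneld.pem --key-file /opt/kubernetes/ssl/flanneld-key.pem \
      --no-sync -C https://192.168.56.110:2379,https://192.168.56.120:2379,https://192.168.56.130:2379 \
      get /kubernetes/network/config
{ "Network": "10.2.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}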
(3) Start Flannel
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl enable flannel
[root@linux-node1 ~]# chmod +x /opt/kubernetes/bin/*
[root@linux-node1 ~]# systemctl start flannel

[root@linux-node2 ~]# systemctl daemon-reload
[root@linux-node2 ~]# systemctl enable flannel
[root@linux-node2 ~]# chmod +x /opt/kubernetes/bin/*
[root@linux-node2 ~]# systemctl start flannel

[root@linux-node3 ~]# systemctl daemon-reload
[root@linux-node3 ~]# systemctl enable flannel
[root@linux-node3 ~]# chmod +x /opt/kubernetes/bin/*
[root@linux-node3 ~]# systemctl start flannel
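Before touching Docker, it is worth confirming on each node that flanneld is active and that mk-docker-opts.sh produced the options file referenced by the unit (the values differ per node):

[root@linux-node1 ~]# systemctl status flannel | grep Active
[root@linux-node1 ~]# cat /run/flannel/docker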

Each node now has an extra flannel.1 interface, and each node is assigned a different subnet.

[root@linux-node1 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.2.46.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::f4e6:1aff:fe7e:575b  prefixlen 64  scopeid 0x20<link>
        ether f6:e6:1a:7e:57:5b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

[root@linux-node2 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.2.87.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::d4e5:72ff:fe3e:7309  prefixlen 64  scopeid 0x20<link>
        ether d6:e5:72:3e:73:09  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8 overruns 0  carrier 0  collisions 0

[root@linux-node3 ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.2.33.0  netmask 255.255.255.255  broadcast 0.0.0.0
        ether be:cd:5a:4f:6b:d1  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 1 overruns 0  carrier 0  collisions 0
(4) Problem encountered: Flannel fails to start

Check whether ETCD_LISTEN_CLIENT_URLS in /opt/kubernetes/cfg/etcd.conf is configured to listen on 127.0.0.1:2379. In my case flannel still refused to start; after retyping the configuration it came up normally, and I found no other cause. Why etcdctl could not read the key back remains to be investigated.
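When flannel fails like this, the systemd journal is the first place to look; these are standard debugging commands, shown here as a sketch:

[root@linux-node1 ~]# systemctl status flannel
[root@linux-node1 ~]# journalctl -u flannel --no-pager | tail -n 20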

3. Configure Docker to Use Flannel

[root@linux-node1 ~]# vim /usr/lib/systemd/system/docker.service
# Under [Unit], modify After and add Requires. Note that systemd only honors
# comments on their own lines, so do not append them after a directive.
[Unit]
# Start docker only after the flannel network is up
After=network-online.target firewalld.service flannel.service
Wants=network-online.target
Requires=flannel.service

[Service]
Type=notify
# Load the environment file so docker0 is given the IP range flannel assigned
EnvironmentFile=-/run/flannel/docker
ExecStart=/usr/bin/dockerd $DOCKER_OPTS

Make the same change on linux-node2 and linux-node3, then reload and restart docker on each node:
[root@linux-node1 ~]# systemctl daemon-reload
[root@linux-node1 ~]# systemctl restart docker
[root@linux-node1 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.2.46.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:1f:ef:9f:b5  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@linux-node2 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.2.87.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:8a:a5:42:d7  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@linux-node3 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 10.2.33.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:57:90:05:47  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
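With the vxlan backend, each node should also have learned routes to the other nodes' Pod subnets via flannel.1; a quick check (the subnets shown will match the flannel.1 addresses above):

[root@linux-node1 ~]# ip route | grep flannel.1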

4. Create the First K8S Application

[root@linux-node1 ~]# kubectl run net-test --image=alpine --replicas=2 sleep 36000
deployment.apps "net-test" created
[root@linux-node1 ~]# kubectl get pod -o wide
NAME                        READY     STATUS              RESTARTS   AGE       IP          NODE
net-test-7b949fc785-2v2qz   1/1       Running             0          56s       10.2.87.2   192.168.56.120
net-test-7b949fc785-6nrhm   0/1       ContainerCreating   0          56s       <none>      192.168.56.130

[root@linux-node1 ~]# ping -c 3 10.2.87.2
PING 10.2.87.2 (10.2.87.2) 56(84) bytes of data.
64 bytes from 10.2.87.2: icmp_seq=1 ttl=63 time=0.841 ms
64 bytes from 10.2.87.2: icmp_seq=2 ttl=63 time=0.346 ms
64 bytes from 10.2.87.2: icmp_seq=3 ttl=63 time=0.617 ms

--- 10.2.87.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 0.346/0.601/0.841/0.203 ms
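The ping above tests node-to-Pod reachability. Once both Pods are Running, Pod-to-Pod traffic across nodes can be verified as well; the Pod name comes from the kubectl get pod output, and <other-pod-ip> is a placeholder for the second Pod's IP once it is assigned:

[root@linux-node1 ~]# kubectl exec net-test-7b949fc785-2v2qz -- ping -c 3 <other-pod-ip>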

Summary

When you run kubectl get node, each node should show a STATUS of Ready. If a node shows NotReady, check whether kubelet is running on it and start it if not. If kubelet refuses to start, inspect it with systemctl status kubelet and journalctl -xe to find the cause. One case I hit was a dependency on docker: docker itself could not start, so the next step was to dig into why docker failed.

Source: www.cnblogs.com/linuxk/p/9272819.html