Kubernetes Practice Guide (Part 2)

I. Prerequisites
II. CA and Private Key
III. Deploying kubectl
IV. Deploying etcd
V. Deploying flannel

I. Prerequisites

The next few articles focus on installing Kubernetes v1.14.3 from binary packages.

1. Version Information

docker: v17.06.0-ce
etcd: v3.2.26
flannel: v0.11.0
Kubernetes: v1.14.3
OS: CentOS 7.3.1611
cfssl: v1.2.0

2. Hosts Planning

[root@master1 work]# vim /etc/hosts
192.168.192.222 master1 etcd www.mt.com
192.168.192.223 master2 etcd www.mt.com
192.168.192.224 master3 etcd www.mt.com
192.168.192.225 node1
192.168.192.226 node2
192.168.192.234 registry 

Note: master1, master2, and master3 also host the etcd cluster.
Hostnames are set manually with: hostnamectl set-hostname <name>  # run once on each host; not covered further here
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=/etc/hosts dest=/etc/hosts"
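
Before going further, it is worth confirming that ansible can reach every host (this assumes sshpass is installed on master1 for password-based SSH):

[root@master1 work]# ansible all -i /root/udp/hosts.ini -m ping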

3. Ansible Inventory

[root@master1 work]# cat /root/udp/hosts.ini 
[master]
192.168.192.222 ansible_ssh_user=root ansible_ssh_pass=<your-password>
192.168.192.223 ansible_ssh_user=root ansible_ssh_pass=<your-password>
192.168.192.224 ansible_ssh_user=root ansible_ssh_pass=<your-password>

[master1]
192.168.192.222 ansible_ssh_user=root ansible_ssh_pass=<your-password>

[master2]
192.168.192.223 ansible_ssh_user=root ansible_ssh_pass=<your-password>

[master3]
192.168.192.224 ansible_ssh_user=root ansible_ssh_pass=<your-password>

[node]
192.168.192.225 ansible_ssh_user=root ansible_ssh_pass=<your-password>
192.168.192.226 ansible_ssh_user=root ansible_ssh_pass=<your-password>

[all]
192.168.192.222 ansible_ssh_user=root ansible_ssh_pass=<your-password>
192.168.192.223 ansible_ssh_user=root ansible_ssh_pass=<your-password>
192.168.192.224 ansible_ssh_user=root ansible_ssh_pass=<your-password>
192.168.192.225 ansible_ssh_user=root ansible_ssh_pass=<your-password>
192.168.192.226 ansible_ssh_user=root ansible_ssh_pass=<your-password>

4. Initialization Script init.sh

[root@master1 work]# vim init.sh 
#!/bin/bash 
echo 'PATH=/opt/k8s/bin:$PATH' >>/root/.bashrc                # add the k8s binary directory to PATH
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget
systemctl disable firewalld && systemctl stop firewalld        # disable the firewall
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat   # flush existing iptables rules
swapoff -a                                                     # disable swap now...
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab                  # ...and across reboots
setenforce 0                                                   # disable SELinux now...
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # ...and across reboots
modprobe ip_vs_rr                                              # IPVS round-robin scheduler for kube-proxy
modprobe br_netfilter                                          # required by the bridge-nf sysctls below

[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./init.sh dest=/opt/k8s/bin/init.sh" 
[root@master1 work]# ansible all -i /root/udp/hosts.ini -a "sh /opt/k8s/bin/init.sh"

A stable time source is required, and ntpq -np must show normal synchronization on every machine in the cluster.
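
For example, the synchronization check can be run across all nodes from master1:

[root@master1 work]# ansible all -i /root/udp/hosts.ini -a "ntpq -np"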

5. Kernel Tuning

[root@master1 work]# vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # avoid swap; it is only used when the system is about to OOM
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=655360000
fs.nr_open=655360000
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720

[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=/etc/sysctl.d/kubernetes.conf dest=/etc/sysctl.d/kubernetes.conf" 
[root@master1 work]# ansible all -i /root/udp/hosts.ini  -a "sysctl -p  /etc/sysctl.d/kubernetes.conf" 
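
Note that modprobe in init.sh only loads ip_vs_rr and br_netfilter for the current boot, and the bridge-nf sysctls above fail when br_netfilter is absent. A minimal sketch to make the modules persistent (the file name k8s.conf is an arbitrary choice):

[root@master1 work]# vim /etc/modules-load.d/k8s.conf
ip_vs_rr
br_netfilter
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=/etc/modules-load.d/k8s.conf dest=/etc/modules-load.d/k8s.conf"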

6. Planning Summary

  • Cluster node IPs: 192.168.192.222 192.168.192.223 192.168.192.224 192.168.192.225 192.168.192.226
  • Registry address: 192.168.192.234
  • Hostnames: master1 master2 master3 node1 node2 # matching the cluster node IPs above, in order
  • etcd client endpoints: https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379
  • etcd peer endpoints: master1=https://192.168.192.222:2380,master2=https://192.168.192.223:2380,master3=https://192.168.192.224:2380
  • APIServer address: https://127.0.0.1:8443
  • Network interface: eth0
  • etcd data directory: /data/k8s/etcd/data
  • etcd WAL directory: /data/k8s/etcd/wal
  • Service CIDR: 10.244.0.0/16 # not directly routable; used by Services
  • NodePort port range: 30000-32767
  • Cluster DNS service IP: 10.244.0.2
  • Pod CIDR: 172.30.0.0/16
  • DNS domain: cluster.local
  • kubernetes service IP: 10.244.0.1
  • Binary directory: /opt/k8s/bin
  • All operations are performed on master1 and the results are distributed to the other nodes
  • /opt/k8s/work/cert # certificates
  • /opt/k8s/work/yaml # YAML manifests
  • /opt/k8s/work/service # systemd unit files

II. CA and Private Key

CFSSL is CloudFlare's open-source PKI/TLS toolkit, written in Go. It includes a command-line tool and an HTTP API service for signing, verifying, and bundling TLS certificates.
Download: https://pkg.cfssl.org/
Cluster certificates:

  • client certificate: used by the server to authenticate clients // kubelet only needs a client certificate
  • server certificate: used by the server; clients use it to verify the server's identity
  • peer certificate: dual-purpose certificate // used for communication between etcd cluster members

Certificate encoding formats:

  • PEM (Privacy Enhanced Mail): Base64-encoded ASCII files, commonly used by Certificate Authorities (CA); extensions .pem, .crt, .cer, .key
  • Content is wrapped in "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----"
  • DER (Distinguished Encoding Rules): binary format; extension .der
  • CSR: certificate signing request

1. Installing cfssl

Download: https://github.com/cloudflare/cfssl

#mv cfssl_linux-amd64 /opt/k8s/bin/cfssl
#mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson
#mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo
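
The downloaded binaries need the execute bit; a quick version check confirms they run:

#chmod +x /opt/k8s/bin/cfssl*
#/opt/k8s/bin/cfssl version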

2. Command Overview

cfssl:

  • bundle: build certificate bundles containing the client certificate
  • genkey: generate a key (private key) and a CSR (certificate signing request)
  • scan: scan hosts for problems
  • revoke: revoke a certificate
  • certinfo: print information about a given certificate, same as the cfssl-certinfo tool
  • gencrl: generate a new certificate revocation list
  • selfsign: generate a new self-signed key and signed certificate
  • print-defaults: print a default configuration that can be used as a template
  • serve: start an HTTP API server
  • info: get information about a remote signer
  • sign: sign a client certificate using the given CA, CA key, and hostname
  • gencert: generate a new key and signed certificate; its main flags:
  • -ca: the CA certificate
  • -ca-key: the CA private key file
  • -config: the JSON file describing the signing policy
  • -profile: selects a profile section from -config; the certificate is generated according to that profile

See https://github.com/cloudflare/cfssl if you want to dig deeper.
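
For example, print-defaults can generate starting templates for the JSON files created in the next two steps (written to .example names here to avoid clobbering them):

[root@master1 work]# cfssl print-defaults config > ca-config.json.example
[root@master1 work]# cfssl print-defaults csr > ca-csr.json.example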

3. Creating the Certificate Signing Policy

[root@master1 work]# cd /opt/k8s/work/
[root@master1 work]# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "26280h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "26280h"
      }
    }
  }
}

Policy notes:
ca-config.json: multiple profiles can be defined, each with its own expiry, usage scenario, and other parameters
signing: the certificate can be used to sign other certificates; the generated ca.pem carries CA=TRUE
server auth: clients may use the certificate to verify certificates presented by servers
client auth: servers may use the certificate to verify certificates presented by clients

4. Creating the CSR

[root@master1 work]# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "FirstOne"
    }
  ],
  "ca": {
    "expiry": "26280h"
  }
}

Field notes:
CN (Common Name): kube-apiserver extracts this field as the requesting User Name; browsers use it to verify whether a site is legitimate
O (Organization): kube-apiserver extracts this field as the group the requesting user belongs to; the extracted User and Group serve as the identity for RBAC authorization
C = <country>
ST = <state/province>
L = <city>
O = <organization> organization/company name
OU = <organization unit> organizational unit/department

5. Generating the CA

[root@master1 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@master1 work]# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem // produces ca-key.pem (private key), ca.pem (certificate), and ca.csr (signing request)
[root@master1 work]# ansible all -i /root/udp/hosts.ini -a "mkdir /etc/kubernetes/cert -pv " 
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./ca.pem dest=/etc/kubernetes/cert/" 
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./ca-key.pem dest=/etc/kubernetes/cert/" 
[root@master1 cert]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./ca-config.json dest=/etc/kubernetes/cert" 
[root@master1 cert]# cfssl certinfo -cert ca.pem   // inspect the certificate
[root@master1 cert]# cfssl certinfo -csr ca.csr    // inspect the signing request
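
To confirm the signing policy took effect (the generated ca.pem should carry CA=TRUE, per the policy notes above), openssl can also decode it:

[root@master1 cert]# openssl x509 -in ca.pem -noout -text | grep -A 1 'Basic Constraints'   # should report CA:TRUE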

III. Deploying kubectl

1. Distributing the Binaries

[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kubelet dest=/opt/k8s/bin/'
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kube-proxy dest=/opt/k8s/bin/'
[root@master1 work]# ansible master -i /root/udp/hosts.ini -m copy -a "src=/opt/k8s/bin/cloud-controller-manager dest=/opt/k8s/bin" 
[root@master1 work]# ansible master -i /root/udp/hosts.ini -m copy -a "src=/opt/k8s/bin/apiextensions-apiserver dest=/opt/k8s/bin" 
[root@master1 work]# ansible master  -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kube-apiserver dest=/opt/k8s/bin/'
[root@master1 work]# ansible master  -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kube-scheduler dest=/opt/k8s/bin/'
[root@master1 work]# ansible master  -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kube-controller-manager dest=/opt/k8s/bin/'
[root@master1 work]# ansible master  -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kubectl dest=/opt/k8s/bin/'

2. Creating the admin CSR

kubectl reads the kube-apiserver address and authentication information from ~/.kube/config by default.
Create the certificate signing request:

[root@master1 cert]# vim admin-csr.json 
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "system:masters",
      "OU": "FirstOne"
    }
  ]
}

Field notes:
O: system:masters; when kube-apiserver receives this certificate it sets the request's Group to system:masters
The predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the Role cluster-admin, which grants access to all APIs
The certificate is only used by kubectl as a client certificate, so the hosts field is empty

3. Generating the Certificate and Private Key

[root@master1 cert]# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem -config=./ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

This produces admin.csr, admin-key.pem, and admin.pem.
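
A quick check that the O field landed in the certificate (kube-apiserver maps it to the request's Group, as explained in the previous step):

[root@master1 cert]# openssl x509 -in admin.pem -noout -subject   # should include O = system:masters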

4. Generating the kubeconfig File

kubeconfig is kubectl's configuration file; it holds the apiserver address, the CA certificate, and related information.
View the defaults:

[root@master1 cert]# kubectl config view
apiVersion: v1
clusters: []  # the kubernetes clusters to access
contexts: []  # the contexts used to access those clusters
current-context: "" # the context currently in use
kind: Config
preferences: {}
users: []  # user information: user names and credentials
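
The set-cluster command below references ${KUBE_APISERVER}, which must be set beforehand; per the planning in section I this is the local proxy address in front of the apiservers:

[root@master1 cert]# KUBE_APISERVER="https://127.0.0.1:8443"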

# Set cluster parameters // adds ca.pem and the server address to the clusters section of kubectl.kubeconfig
[root@master1 cert]# kubectl config set-cluster kubernetes --certificate-authority=./ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kubectl.kubeconfig
# Set client credentials // adds the admin certificate and private key to the users section of kubectl.kubeconfig
[root@master1 cert]# kubectl config set-credentials admin --client-certificate=./admin.pem --client-key=./admin-key.pem --embed-certs=true --kubeconfig=kubectl.kubeconfig
# Set context parameters // adds the context entry to kubectl.kubeconfig
[root@master1 cert]# kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kubectl.kubeconfig
# Set the default context
[root@master1 cert]# kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig

The --embed-certs=true flag embeds the certificate contents directly into kubectl.kubeconfig;
without it only file paths are written, and the certificate files would have to be copied separately when distributing the kubeconfig to other machines.

5. Distributing kubectl.kubeconfig

Distribute the kubectl.kubeconfig file to all master nodes:

[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m shell -a "mkdir -p ~/.kube" 
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./kubectl.kubeconfig dest=~/.kube/config" 

IV. Deploying etcd

etcd provides service discovery, shared configuration, and concurrency control (leader election, distributed locks, and so on). Kubernetes stores all of its runtime data in etcd.

1. Distributing the Binaries

[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./etcdctl dest=/opt/k8s/bin" 
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./etcd dest=/opt/k8s/bin" 

2. Creating the etcd Certificate

[root@master1 cert]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.192.222",
    "192.168.192.223",
    "192.168.192.224"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "FirstOne"
    }
  ]
}

[root@master1 cert]# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem  -config=./ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@master1 cert]# ls etcd*
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem
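
The hosts list from the CSR becomes the certificate's SANs, and etcd will reject TLS connections on any address missing from it, so it is worth verifying:

[root@master1 cert]# cfssl certinfo -cert etcd.pem   # the sans field should list 127.0.0.1 and all three member IPs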

3. Distributing the Certificate and Private Key

[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./etcd.pem dest=/etc/etcd/cert/" 
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./etcd-key.pem dest=/etc/etcd/cert/" 

4. Configuring the systemd Service

[root@master1 service]# vim etcd.service.template
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/data/k8s/etcd/data
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/data/k8s/etcd/data \
  --wal-dir=/data/k8s/etcd/wal \
  --name=NODE_NAME \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://NODE_IP:2380 \
  --initial-advertise-peer-urls=https://NODE_IP:2380 \
  --listen-client-urls=https://NODE_IP:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://NODE_IP:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=master1=https://192.168.192.222:2380,master2=https://192.168.192.223:2380,master3=https://192.168.192.224:2380 \
  --initial-cluster-state=new \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@master1 service]# sed 's/NODE_NAME/master1/g;s/NODE_IP/192.168.192.222/g' etcd.service.template > ./etcd.service.master1
[root@master1 service]# sed 's/NODE_NAME/master2/g;s/NODE_IP/192.168.192.223/g' etcd.service.template > ./etcd.service.master2
[root@master1 service]# sed 's/NODE_NAME/master3/g;s/NODE_IP/192.168.192.224/g' etcd.service.template > ./etcd.service.master3
[root@master1 service]# ansible master -i /root/udp/hosts.ini -a "mkdir -pv /data/k8s/etcd/data" 
[root@master1 service]# ansible master -i /root/udp/hosts.ini -a "mkdir -pv /data/k8s/etcd/wal" 
[root@master1 service]# scp etcd.service.master1 root@master1:/etc/systemd/system/etcd.service
[root@master1 service]# scp etcd.service.master2 root@master2:/etc/systemd/system/etcd.service
[root@master1 service]# scp etcd.service.master3 root@master3:/etc/systemd/system/etcd.service

5. Parameter Notes

name=NODE_NAME   # member name
cert-file=/etc/etcd/cert/etcd.pem   # etcd server certificate
key-file=/etc/etcd/cert/etcd-key.pem   # etcd server private key
trusted-ca-file=/etc/kubernetes/cert/ca.pem   # CA certificate used to verify client certificates
peer-cert-file=/etc/etcd/cert/etcd.pem   # certificate presented to other cluster members
peer-key-file=/etc/etcd/cert/etcd-key.pem   # private key used for peer communication
peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem   # CA certificate used to verify peer certificates
peer-client-cert-auth   # require client certificates on peer connections
client-cert-auth   # require client certificates on client connections
listen-peer-urls=https://NODE_IP:2380   # URLs to listen on for peer (member-to-member) traffic
initial-advertise-peer-urls=https://NODE_IP:2380   # peer URLs advertised to the rest of the cluster
listen-client-urls=https://NODE_IP:2379,http://127.0.0.1:2379   # URLs to listen on for client traffic
advertise-client-urls=https://NODE_IP:2379   # client URLs advertised to the cluster
initial-cluster-token=etcd-cluster-0   # bootstrap token; the cluster derives a unique ID from it (plus one per member), so reusing the same config for another cluster will not interfere as long as the token differs
initial-cluster=master1=...   # the initial cluster membership as name=peer-URL pairs
initial-cluster-state=new   # marks this as the bootstrap of a new cluster
auto-compaction-retention=1   # auto-compaction retention for the MVCC key-value store, in hours; 0 disables auto-compaction
max-request-bytes=33554432   # maximum accepted client request size, in bytes
quota-backend-bytes=6442450944   # backend storage quota, in bytes
heartbeat-interval=250   # heartbeat interval, in milliseconds
election-timeout=2000   # election timeout, in milliseconds

6. Starting the Service

[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "chmod +x /opt/k8s/bin/*" 
[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "systemctl daemon-reload && systemctl start etcd.service" 
[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "systemctl enable etcd.service" 

A freshly bootstrapped member blocks until a quorum of the cluster is reachable, so etcd must come up on all three masters together.
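
Once all members are up, endpoint health (part of the etcdctl v3 API) gives a quick go/no-go check:

[root@master1 service]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl --cacert=/etc/kubernetes/cert/ca.pem --cert=/etc/etcd/cert/etcd.pem --key=/etc/etcd/cert/etcd-key.pem --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 endpoint health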

7. Verifying the etcd Service

[root@master1 service]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl -w table --cacert=/etc/kubernetes/cert/ca.pem --cert=/etc/etcd/cert/etcd.pem  --key=/etc/etcd/cert/etcd-key.pem  --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 endpoint status
+------------------------------+------------------+---------+---------+-----------+-----------+------------+
|           ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+------------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://192.168.192.222:2379 | 2c7d8b7aa58766f3 |  3.2.26 |   25 kB |      true |        29 |         15 |
| https://192.168.192.223:2379 | 257fa42984b72360 |  3.2.26 |   25 kB |     false |        29 |         15 |
| https://192.168.192.224:2379 |  3410f89131d2eef |  3.2.26 |   25 kB |     false |        29 |         15 |
+------------------------------+------------------+---------+---------+-----------+-----------+------------+
The member list below uses the etcdctl v2 API flags; any client certificate signed by the cluster CA works, here the flanneld certificate created in section V:
[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 --ca-file=/opt/k8s/work/cert/ca.pem --cert-file=/opt/k8s/work/cert/flanneld.pem --key-file=/opt/k8s/work/cert/flanneld-key.pem member list
3410f89131d2eef: name=master3 peerURLs=https://192.168.192.224:2380 clientURLs=https://192.168.192.224:2379 isLeader=false
257fa42984b72360: name=master2 peerURLs=https://192.168.192.223:2380 clientURLs=https://192.168.192.223:2379 isLeader=false
2c7d8b7aa58766f3: name=master1 peerURLs=https://192.168.192.222:2380 clientURLs=https://192.168.192.222:2379 isLeader=true

V. Deploying flannel

1. Distributing the Binaries

[root@master1 service]# ansible all -i /root/udp/hosts.ini -m copy -a "src=../../bin/flanneld dest=/opt/k8s/bin" 
[root@master1 service]# ansible all -i /root/udp/hosts.ini -m copy -a "src=/root/k8s/mk-docker-opts.sh dest=/opt/k8s/bin/" 
[root@master1 service]# ansible all -i /root/udp/hosts.ini -m shell -a "chmod +x /opt/k8s/bin/*" 

2. Creating the Certificate and Private Key

[root@master1 cert]# vim flanneld-csr.json
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "FirstOne"
    }
  ]
}

[root@master1 cert]# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem -config=./ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
[root@master1 cert]# ansible all  -i /root/udp/hosts.ini -m shell -a "mkdir -p /etc/flanneld/cert" 
[root@master1 cert]# ansible all  -i /root/udp/hosts.ini -m copy -a "src=./flanneld.pem dest=/etc/flanneld/cert" 
[root@master1 cert]# ansible all  -i /root/udp/hosts.ini -m copy -a "src=./flanneld-key.pem dest=/etc/flanneld/cert/" 

3. Writing the Pod Network Configuration

flannel v0.11 reads its configuration through the etcd v2 API, which is why the v2 etcdctl flags (--ca-file/--cert-file/--key-file) and the mk subcommand are used here:

[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 --ca-file=/opt/k8s/work/cert/ca.pem --cert-file=/opt/k8s/work/cert/flanneld.pem --key-file=/opt/k8s/work/cert/flanneld-key.pem mk /kubernetes/network/config '{"Network":"172.30.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'
{"Network":"172.30.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}
[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 --ca-file=/opt/k8s/work/cert/ca.pem   --cert-file=/opt/k8s/work/cert/flanneld.pem   --key-file=/opt/k8s/work/cert/flanneld-key.pem get /kubernetes/network/config
{"Network":"172.30.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}

4. Configuring the systemd Service

[root@master1 service]# vim  flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \
  -etcd-certfile=/etc/flanneld/cert/flanneld.pem \
  -etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \
  -etcd-endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 \
  -etcd-prefix=/kubernetes/network \
  -iface=eth0 \
  -ip-masq
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

[root@master1 service]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./flanneld.service dest=/etc/systemd/system/" 
[root@master1 ~]# ansible all -i /root/udp/hosts.ini -m shell -a "systemctl daemon-reload ;systemctl restart flanneld.service &&systemctl status flanneld.service"
[root@master1 ~]# ansible all -i /root/udp/hosts.ini -m shell -a "systemctl enable flanneld.service"

The mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into /run/flannel/docker; when docker starts later, it uses the DOCKER_NETWORK_OPTIONS environment variable from this file to configure the docker0 bridge.
-ip-masq: when flanneld sets up SNAT rules for traffic leaving the Pod network, it sets the --ip-masq variable passed to Docker (in /run/flannel/docker) to false.
The SNAT rules flanneld creates are gentler than Docker's: they only SNAT requests destined outside the Pod network.
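
The effect is visible on each node once flanneld is running: it creates a vxlan device plus one route per remote subnet:

[root@master1 ~]# ip -d link show flannel.1   # the vxlan device, MTU 1450
[root@master1 ~]# ip route | grep 172.30      # one route per remote node subnet, via flannel.1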

5. Inspecting Pod Subnet Assignments

[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/flanneld/cert/flanneld.pem --key-file=/etc/flanneld/cert/flanneld-key.pem ls /kubernetes/network/subnets
/kubernetes/network/subnets/172.30.8.0-21
/kubernetes/network/subnets/172.30.96.0-21
/kubernetes/network/subnets/172.30.64.0-21
/kubernetes/network/subnets/172.30.32.0-21
/kubernetes/network/subnets/172.30.56.0-21

To see which node a given Pod subnet was assigned to, along with its flannel interface address:

[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/flanneld/cert/flanneld.pem --key-file=/etc/flanneld/cert/flanneld-key.pem get /kubernetes/network/subnets/172.30.8.0-21
{"PublicIP":"192.168.192.223","BackendType":"vxlan","BackendData":{"VtepMAC":"f6:e1:42:b9:35:70"}}

PublicIP: 192.168.192.223 // the subnet 172.30.8.0/21 was assigned to node 192.168.192.223
VtepMAC: f6:e1:42:b9:35:70 is the MAC address of the flannel.1 interface on 192.168.192.223
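
Overlay connectivity can be spot-checked by pinging another node's flannel.1 address, which is the first address of its subnet (here 172.30.8.0, which per the entry above belongs to 192.168.192.223):

[root@master1 cert]# ping -c 2 172.30.8.0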

[root@master1 service]# cat  /run/flannel/docker 
DOCKER_OPT_BIP="--bip=172.30.96.1/21"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.96.1/21 --ip-masq=false --mtu=1450"

[root@master1 service]# cat  /run/flannel/subnet.env 
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.96.1/21
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

flanneld writes its subnet information to /run/flannel/subnet.env, and mk-docker-opts.sh converts it into /run/flannel/docker;
docker then uses the environment variables in that file to configure the docker0 bridge, from which every Pod container on the node is assigned an IP.
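
Docker itself is deployed in a later part; as a preview, a minimal sketch of a systemd drop-in that consumes this file (the drop-in location and the dockerd path are assumptions, not taken from this article):

[root@master1 service]# cat /etc/systemd/system/docker.service.d/flannel.conf
[Service]
EnvironmentFile=/run/flannel/docker
ExecStart=
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS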

References:
https://github.com/cloudflare/cfssl
https://kubernetes.io/docs/setup/
https://github.com/opsnull/follow-me-install-kubernetes-cluster/blob/master/04.%E9%83%A8%E7%BD%B2etcd%E9%9B%86%E7%BE%A4.md

Source: blog.51cto.com/hmtk520/2423253