Kubernetes Practice Guidelines (Part 2)

I. Deployment prerequisites
II. CA and private key
III. Deploying kubectl
IV. Deploying etcd
V. Deploying flannel

I. Deployment prerequisites

The next few posts focus on a binary installation of Kubernetes v1.14.3.

1. Version information

docker: v17.06.0-ce
etcd: v3.2.26
flannel: v0.11.0
Kubernetes: v1.14.3
OS: CentOS 7.3.1611
cfssl: v1.2.0

2. Host planning

[root@master1 work]# vim /etc/hosts
192.168.192.222 master1 etcd www.mt.com
192.168.192.223 master2 etcd www.mt.com
192.168.192.224 master3 etcd www.mt.com
192.168.192.225 node1
192.168.192.226 node2
192.168.192.234 registry 

Note: master1, master2, and master3 also host the etcd cluster.
Host names must be set manually on each host with: hostnamectl set-hostname <name>   # the per-host commands are not repeated here
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=/etc/hosts dest=/etc/hosts"
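
The per-host step can also be scripted; a minimal sketch using the single-host groups from the inventory below (node1/node2 have no such groups here, so set those two over ssh or add matching groups):
[root@master1 work]# ansible master1 -i /root/udp/hosts.ini -a "hostnamectl set-hostname master1"
[root@master1 work]# ansible master2 -i /root/udp/hosts.ini -a "hostnamectl set-hostname master2"
[root@master1 work]# ansible master3 -i /root/udp/hosts.ini -a "hostnamectl set-hostname master3"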

3. The ansible inventory (hosts.ini)

[root@master1 work]# cat /root/udp/hosts.ini 
[master]
192.168.192.222 ansible_ssh_user=root ansible_ssh_pass=<your-password>
192.168.192.223 ansible_ssh_user=root ansible_ssh_pass=<your-password>
192.168.192.224 ansible_ssh_user=root ansible_ssh_pass=<your-password>

[master1]
192.168.192.222 ansible_ssh_user=root ansible_ssh_pass=<your-password>

[master2]
192.168.192.223 ansible_ssh_user=root ansible_ssh_pass=<your-password>

[master3]
192.168.192.224 ansible_ssh_user=root ansible_ssh_pass=<your-password>

[node]
192.168.192.225 ansible_ssh_user=root ansible_ssh_pass=<your-password>
192.168.192.226 ansible_ssh_user=root ansible_ssh_pass=<your-password>

# no explicit [all] group is needed: ansible's built-in "all" group already contains every host above

4. Initialization script init.sh

[root@master1 work]# vim init.sh 
#!/bin/bash 
# add the k8s binary directory to PATH
echo 'PATH=/opt/k8s/bin:$PATH' >>/root/.bashrc
# base packages: conntrack/ipvsadm/ipset for kube-proxy, ntp for time sync, etc.
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget
# stop the firewall and flush all iptables rules
systemctl disable firewalld && systemctl stop firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
# disable swap now and on reboot (kubelet refuses to start with swap enabled)
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# disable SELinux now and on reboot
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# load the kernel modules needed by ipvs and bridge netfilter
modprobe ip_vs_rr
modprobe br_netfilter

[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./init.sh dest=/opt/k8s/bin/init.sh" 
[root@master1 work]# ansible all -i /root/udp/hosts.ini -a "sh /opt/k8s/bin/init.sh"

A stable clock source is required: all machines in the cluster must be time-synchronized, and ntpq -np should report a healthy peer on each of them.
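
For example, one way to turn on ntpd everywhere and check the peers (the ntp package was installed by init.sh above):
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m shell -a "systemctl enable ntpd && systemctl start ntpd"
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m shell -a "ntpq -np"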

5. Kernel optimization

[root@master1 work]# vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # never swap; swap space is only touched when the system is about to OOM
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=655360000
fs.nr_open=655360000
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720

[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=/etc/sysctl.d/kubernetes.conf dest=/etc/sysctl.d/kubernetes.conf" 
[root@master1 work]# ansible all -i /root/udp/hosts.ini  -a "sysctl -p  /etc/sysctl.d/kubernetes.conf" 
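
A quick spot check that the key parameters took effect on every node (each should report 1):
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m shell -a "sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward"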

6. Configuration planning

  • Cluster node addresses: 192.168.192.222 192.168.192.223 192.168.192.224 192.168.192.225 192.168.192.226
  • Registry address: 192.168.192.234
  • Host names: master1 master2 master3 node1 node2   # matching the cluster node IPs above
  • etcd client endpoints: https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379
  • etcd peer (inter-node) URLs: master1=https://192.168.192.222:2380,master2=https://192.168.192.223:2380,master3=https://192.168.192.224:2380
  • APIServer address: https://127.0.0.1:8443
  • Network interface: eth0
  • etcd data directory: /data/k8s/etcd/data
  • etcd WAL directory: /data/k8s/etcd/wal
  • Service CIDR: 10.244.0.0/16   # not directly reachable; reserved for Service use
  • NodePort range for services: 30000-32767
  • Cluster DNS service IP: 10.244.0.2
  • Pod CIDR: 172.30.0.0/16
  • DNS domain: cluster.local
  • kubernetes service IP: 10.244.0.1
  • Binary directory: /opt/k8s/bin
  • All operations are performed on master1 and then distributed to the other nodes
  • /opt/k8s/work/cert   # certificates
  • /opt/k8s/work/yaml   # YAML manifests
  • /opt/k8s/work/service   # systemd service files

II. CA and private key

CFSSL is CloudFlare's open-source PKI/TLS toolkit. It consists of a command-line tool and an HTTP API service for signing, verifying, and bundling TLS certificates, and is written in Go.
Download: https://pkg.cfssl.org/
Cluster certificates:

  • client certificate: used by a server to authenticate its clients   // the kubelet, for example, only needs a client certificate
  • server certificate: used by a server; clients use it to verify the server's identity
  • peer certificate: used for mutual authentication between etcd cluster members
    Certificate encoding formats:
  • PEM (Privacy Enhanced Mail): the format commonly used by Certificate Authorities (CA); extensions .pem, .crt, .cer, .key; Base64-encoded ASCII files
  • content is delimited by "-----BEGIN CERTIFICATE-----" and "-----END CERTIFICATE-----"
  • DER (Distinguished Encoding Rules): a binary format; extension .der
  • CSR: Certificate Signing Request

1. Install cfssl

Download: https://github.com/cloudflare/cfssl

#mv cfssl_linux-amd64 /opt/k8s/bin/cfssl
#mv cfssljson_linux-amd64 /opt/k8s/bin/cfssljson
#mv cfssl-certinfo_linux-amd64 /opt/k8s/bin/cfssl-certinfo

2. cfssl subcommands

cfssl:

  • bundle: build a certificate bundle containing the client certificate
  • genkey: generate a key (private key) and a CSR (Certificate Signing Request)
  • scan: scan hosts for TLS issues
  • revoke: revoke a certificate
  • certinfo: print information about a given certificate; same effect as the cfssl-certinfo tool
  • gencrl: generate a new certificate revocation list
  • selfsign: generate a new self-signed key and certificate
  • print-defaults: print a default configuration that can be used as a template (see the sketch after this list)
  • serve: start an HTTP API service
  • info: get information about a remote signer
  • sign: sign a client certificate, given a CA, the CA key, and a host name
  • gencert: generate a new key and a signed certificate
  • -ca: the CA certificate
  • -ca-key: the CA private key file
  • -config: the JSON file containing the signing policy
  • -profile: selects which profile section of the -config file to use when generating the certificate

For more detail, see: https://github.com/cloudflare/cfssl
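
print-defaults is handy for bootstrapping the JSON files used below; for example:
[root@master1 work]# cfssl print-defaults config > ca-config.json.sample   # signing-policy template
[root@master1 work]# cfssl print-defaults csr > csr.json.sample            # CSR template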

3. Create the signing policy

[root@master1 work]# cd /opt/k8s/work/
[root@master1 work]# vim ca-config.json
{
  "signing": {
    "default": {
      "expiry": "26280h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "26280h"
      }
    }
  }
}

Policy notes:
ca-config.json: multiple profiles can be defined, each with its own expiry time, usages, and other parameters for different scenarios (a sketch of an extra profile follows below)
signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE
server auth: a client may use the certificate to verify the server's identity
client auth: a server may use the certificate to verify the client's identity
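
If a client-only profile with a shorter lifetime were wanted, it could sit next to "kubernetes" inside "profiles" (a hypothetical "client" profile, not used in this guide):
      "client": {
        "usages": [
            "signing",
            "key encipherment",
            "client auth"
        ],
        "expiry": "8760h"
      }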

4. Create the CA CSR

[root@master1 work]# vim ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "FirstOne"
    }
  ],
  "ca": {
    "expiry": "26280h"
 }
}

Field notes:
CN (Common Name): kube-apiserver extracts this field from the certificate as the requesting user name (User Name); browsers use it to check whether a site is legitimate
O (Organization): kube-apiserver extracts this field as the requesting user's group (Group); kube-apiserver uses the extracted User and Group as identifiers for RBAC authorization
C = <country>
ST = <state> state or province
L = <city>
O = <organization> organization/company name
OU = <organization unit> organizational unit/department

5. Generate the CA

[root@master1 work]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
[root@master1 work]# ls ca*
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem   // generates ca-key.pem (private key), ca.pem (certificate), and ca.csr (signing request)
[root@master1 work]# ansible all -i /root/udp/hosts.ini -a "mkdir /etc/kubernetes/cert -pv " 
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./ca.pem dest=/etc/kubernetes/cert/" 
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./ca-key.pem dest=/etc/kubernetes/cert/" 
[root@master1 cert]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./ca-config.json dest=/etc/kubernetes/cert" 
[root@master1 cert]# cfssl certinfo -cert ca.pem   // inspect the certificate
[root@master1 cert]# cfssl certinfo -csr ca.csr    // inspect the signing request
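
openssl offers the same check if cfssl is not on the box, e.g.:
[root@master1 cert]# openssl x509 -in ca.pem -noout -subject -issuer -dates   # subject, issuer, validity window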

III. Deploying kubectl

1. Copy the binaries

[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kubelet dest=/opt/k8s/bin/'
[root@master1 work]# ansible all -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kube-proxy dest=/opt/k8s/bin/'
[root@master1 work]# ansible master -i /root/udp/hosts.ini -m copy -a "src=/opt/k8s/bin/cloud-controller-manager dest=/opt/k8s/bin" 
[root@master1 work]# ansible master -i /root/udp/hosts.ini -m copy -a "src=/opt/k8s/bin/apiextensions-apiserver dest=/opt/k8s/bin" 
[root@master1 work]# ansible master  -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kube-apiserver dest=/opt/k8s/bin/'
[root@master1 work]# ansible master  -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kube-scheduler dest=/opt/k8s/bin/'
[root@master1 work]# ansible master  -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kube-controller-manager dest=/opt/k8s/bin/'
[root@master1 work]# ansible master  -i /root/udp/hosts.ini -m copy -a 'src=/opt/k8s/bin/kubectl dest=/opt/k8s/bin/'

2. Create the admin CSR

By default, kubectl reads the kube-apiserver address and authentication information from the ~/.kube/config file.
Create a certificate signing request:

[root@master1 cert]# vim admin-csr.json 
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "system:masters",
      "OU": "FirstOne"
    }
  ]
}

Field notes:
O: the value system:masters means kube-apiserver sets the Group of requests made with this certificate to system:masters;
the predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the ClusterRole cluster-admin, which grants permissions on all APIs;
this certificate is only used as a client certificate by kubectl, so the hosts field is empty;
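
Once kube-apiserver is running (a later part of this series), the binding can be confirmed with:
[root@master1 cert]# kubectl describe clusterrolebinding cluster-admin   # Subjects should list Group system:masters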

3. Generate the certificate and private key

[root@master1 cert]# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem -config=./ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

This generates: admin.csr, admin-key.pem, and admin.pem
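
A quick sanity check that the new certificate chains to the CA:
[root@master1 cert]# openssl verify -CAfile ca.pem admin.pem   # should print: admin.pem: OK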

4. Generate the kubeconfig file

kubeconfig is the configuration file for kubectl; it contains the apiserver information and the CA certificate.
Default view:

[root@master1 cert]# kubectl config view
apiVersion: v1
clusters: []  # the kubernetes clusters to access
contexts: []  # the contexts (cluster/user pairs) used to access those clusters
current-context: "" # the context currently in use
kind: Config
preferences: {}
users: []  # the user information: user names and certificates

# Set the cluster parameters // embed ca.pem and the server address into the clusters section of kubectl.kubeconfig
# (per the planning section, KUBE_APISERVER=https://127.0.0.1:8443)
[root@master1 cert]# kubectl config set-cluster kubernetes --certificate-authority=./ca.pem --embed-certs=true --server=${KUBE_APISERVER}  --kubeconfig=kubectl.kubeconfig
# Set the client credentials // add the admin certificate and private key to the users section of kubectl.kubeconfig
[root@master1 cert]# kubectl config set-credentials admin  --client-certificate=./admin.pem  --client-key=./admin-key.pem  --embed-certs=true  --kubeconfig=kubectl.kubeconfig
# Set the context parameters // add a context entry to kubectl.kubeconfig
[root@master1 cert]# kubectl config set-context kubernetes  --cluster=kubernetes   --user=admin   --kubeconfig=kubectl.kubeconfig
# Set the default context
[root@master1 cert]# kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig

Parameter notes: --embed-certs=true embeds the certificate contents into kubectl.kubeconfig.
Without it, only the certificate file paths are written, and when the kubeconfig is later copied to other machines the certificate files must be copied along with it.
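
To confirm the certificates really were embedded rather than referenced by path:
[root@master1 cert]# grep -cE "certificate-authority-data|client-certificate-data|client-key-data" kubectl.kubeconfig   # expect 3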

5. Distribute kubectl.kubeconfig

Distribute the kubectl.kubeconfig file to all master nodes:

[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m shell -a "mkdir ~/.kube" 
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./kubectl.kubeconfig dest=~/.kube/config" 
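
After kube-apiserver is deployed (a later part of this series), a quick smoke test from any master:
[root@master1 cert]# kubectl cluster-info
[root@master1 cert]# kubectl get componentstatuses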

IV. Deploying etcd

etcd is used for service discovery, shared configuration, and concurrency control (e.g., leader election, distributed locks). Kubernetes uses etcd to store all of its operational data.

1. Distribute the binaries

[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./etcdctl dest=/opt/k8s/bin" 
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./etcd dest=/opt/k8s/bin" 

2. Create the etcd certificate

[root@master1 cert]# vim etcd-csr.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "192.168.192.222",
    "192.168.192.223",
    "192.168.192.224"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "FirstOne"
    }
  ]
}

[root@master1 cert]# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem  -config=./ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
[root@master1 cert]# ls etcd*
etcd.csr  etcd-csr.json  etcd-key.pem  etcd.pem

3. Distribute the certificate and private key

[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m shell -a "mkdir -pv /etc/etcd/cert" 
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./etcd.pem dest=/etc/etcd/cert/" 
[root@master1 cert]# ansible master -i /root/udp/hosts.ini -m copy -a "src=./etcd-key.pem dest=/etc/etcd/cert/" 

4. Configure the systemd service

[root@master1 service]# vim etcd.service.template
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/data/k8s/etcd/data
ExecStart=/opt/k8s/bin/etcd \
  --data-dir=/data/k8s/etcd/data \
  --wal-dir=/data/k8s/etcd/wal \
  --name=NODE_NAME \
  --cert-file=/etc/etcd/cert/etcd.pem \
  --key-file=/etc/etcd/cert/etcd-key.pem \
  --trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-cert-file=/etc/etcd/cert/etcd.pem \
  --peer-key-file=/etc/etcd/cert/etcd-key.pem \
  --peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth \
  --listen-peer-urls=https://NODE_IP:2380 \
  --initial-advertise-peer-urls=https://NODE_IP:2380 \
  --listen-client-urls=https://NODE_IP:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://NODE_IP:2379 \
  --initial-cluster-token=etcd-cluster-0 \
  --initial-cluster=master1=https://192.168.192.222:2380,master2=https://192.168.192.223:2380,master3=https://192.168.192.224:2380 \
  --initial-cluster-state=new \
  --auto-compaction-retention=1 \
  --max-request-bytes=33554432 \
  --quota-backend-bytes=6442450944 \
  --heartbeat-interval=250 \
  --election-timeout=2000
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@master1 service]# sed 's/NODE_NAME/master1/g;s/NODE_IP/192.168.192.222/g' etcd.service.template > ./etcd.service.master1
[root@master1 service]# sed 's/NODE_NAME/master2/g;s/NODE_IP/192.168.192.223/g' etcd.service.template > ./etcd.service.master2
[root@master1 service]# sed 's/NODE_NAME/master3/g;s/NODE_IP/192.168.192.224/g' etcd.service.template > ./etcd.service.master3
[root@master1 service]# ansible master -i /root/udp/hosts.ini -a "mkdir -pv /data/k8s/etcd/data" 
[root@master1 service]# ansible master -i /root/udp/hosts.ini -a "mkdir -pv /data/k8s/etcd/wal" 
[root@master1 service]# scp etcd.service.master1 root@master1:/etc/systemd/system/etcd.service
[root@master1 service]# scp etcd.service.master2 root@master2:/etc/systemd/system/etcd.service
[root@master1 service]# scp etcd.service.master3 root@master3:/etc/systemd/system/etcd.service

5. Parameter description

name=NODE_NAME   # node name
cert-file=/etc/etcd/cert/etcd.pem   # etcd server certificate
key-file=/etc/etcd/cert/etcd-key.pem   # etcd server private key
trusted-ca-file=/etc/kubernetes/cert/ca.pem   # CA certificate used to verify client certificates
peer-cert-file=/etc/etcd/cert/etcd.pem   # certificate for peer (member-to-member) communication
peer-key-file=/etc/etcd/cert/etcd-key.pem   # private key for peer communication
peer-trusted-ca-file=/etc/kubernetes/cert/ca.pem   # CA certificate used to verify peer certificates
peer-client-cert-auth   # require client certificates on peer connections
client-cert-auth   # require client certificates on client connections
listen-peer-urls=https://NODE_IP:2380   # URLs to listen on for peer (member) communication
initial-advertise-peer-urls=https://NODE_IP:2380   # peer URLs advertised to the rest of the cluster
listen-client-urls=https://NODE_IP:2379,http://127.0.0.1:2379   # URLs to listen on for client communication
advertise-client-urls=https://NODE_IP:2379   # client URLs advertised to clients
initial-cluster-token=etcd-cluster-0   # cluster token; etcd derives a unique cluster ID and unique member IDs from it, so another cluster started from the same config with a different token will not interfere with this one
initial-cluster=master1=https://192.168.192.222:2380,master2=https://192.168.192.223:2380,master3=https://192.168.192.224:2380   # the full list of initial cluster members (name=peer-url pairs)
initial-cluster-state=new   # flag marking a newly created cluster
auto-compaction-retention=1   # auto-compaction retention for the MVCC key-value store, in hours; 0 disables auto-compaction
max-request-bytes=33554432   # maximum accepted client request size (32 MiB)
quota-backend-bytes=6442450944   # backend storage quota (6 GiB)
heartbeat-interval=250   # heartbeat interval (ms)
election-timeout=2000   # election timeout (ms)

6. Start the service

[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "chmod +x /opt/k8s/bin/* " 
[root@master1 service]# ansible master -i /root/udp/hosts.ini -a "systemctl start etcd.service" 
[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "systemctl daemon-reload;systemctl restart etcd.service" 
[root@master1 service]# ansible master -i /root/udp/hosts.ini -m shell -a "systemctl enable etcd.service" 
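
A per-endpoint health check before moving on (same TLS flags as the status query below):
[root@master1 service]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl --cacert=/etc/kubernetes/cert/ca.pem --cert=/etc/etcd/cert/etcd.pem --key=/etc/etcd/cert/etcd-key.pem --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 endpoint health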

7. Verify the etcd service

[root@master1 service]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl -w table --cacert=/etc/kubernetes/cert/ca.pem --cert=/etc/etcd/cert/etcd.pem  --key=/etc/etcd/cert/etcd-key.pem  --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 endpoint status
+------------------------------+------------------+---------+---------+-----------+-----------+------------+
|           ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+------------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://192.168.192.222:2379 | 2c7d8b7aa58766f3 |  3.2.26 |   25 kB |      true |        29 |         15 |
| https://192.168.192.223:2379 | 257fa42984b72360 |  3.2.26 |   25 kB |     false |        29 |         15 |
| https://192.168.192.224:2379 |  3410f89131d2eef |  3.2.26 |   25 kB |     false |        29 |         15 |
+------------------------------+------------------+---------+---------+-----------+-----------+------------+
[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 --ca-file=/etc/kubernetes/cert/ca.pem   --cert-file=/etc/etcd/cert/etcd.pem   --key-file=/etc/etcd/cert/etcd-key.pem member list
3410f89131d2eef: name=master3 peerURLs=https://192.168.192.224:2380 clientURLs=https://192.168.192.224:2379 isLeader=false
257fa42984b72360: name=master2 peerURLs=https://192.168.192.223:2380 clientURLs=https://192.168.192.223:2379 isLeader=false
2c7d8b7aa58766f3: name=master1 peerURLs=https://192.168.192.222:2380 clientURLs=https://192.168.192.222:2379 isLeader=true

V. Deploying flannel

1. Distribute the binaries

[root@master1 service]# ansible all -i /root/udp/hosts.ini -m copy -a "src=../../bin/flanneld dest=/opt/k8s/bin" 
[root@master1 service]# ansible all -i /root/udp/hosts.ini -m copy -a "src=/root/k8s/mk-docker-opts.sh dest=/opt/k8s/bin/" 
[root@master1 service]# ansible all -i /root/udp/hosts.ini -m shell -a "chmod +x /opt/k8s/bin/*" 

2. Create the certificate and private key

[root@master1 cert]# vim flanneld-csr.json
{
  "CN": "flanneld",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HangZhou",
      "L": "HangZhou",
      "O": "k8s",
      "OU": "FirstOne"
    }
  ]
}

[root@master1 cert]# cfssl gencert -ca=./ca.pem -ca-key=./ca-key.pem -config=./ca-config.json -profile=kubernetes flanneld-csr.json | cfssljson -bare flanneld
[root@master1 cert]# ansible all  -i /root/udp/hosts.ini -m shell -a "mkdir /etc/flanneld/cert" 
[root@master1 cert]# ansible all  -i /root/udp/hosts.ini -m copy -a "src=./flanneld.pem dest=/etc/flanneld/cert" 
[root@master1 cert]# ansible all  -i /root/udp/hosts.ini -m copy -a "src=./flanneld-key.pem dest=/etc/flanneld/cert/" 

3. Write the pod network configuration to etcd

[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 --ca-file=/opt/k8s/work/cert/ca.pem   --cert-file=/opt/k8s/work/cert/flanneld.pem   --key-file=/opt/k8s/work/cert/flanneld-key.pem   mk /kubernetes/network/config '{"Network":"'172.30.0.0/16'", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}'
{"Network":"172.30.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}
[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 --ca-file=/opt/k8s/work/cert/ca.pem   --cert-file=/opt/k8s/work/cert/flanneld.pem   --key-file=/opt/k8s/work/cert/flanneld-key.pem get /kubernetes/network/config
{"Network":"172.30.0.0/16", "SubnetLen": 21, "Backend": {"Type": "vxlan"}}

4. Configure the service

[root@master1 service]# vim  flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/k8s/bin/flanneld \
  -etcd-cafile=/etc/kubernetes/cert/ca.pem \
  -etcd-certfile=/etc/flanneld/cert/flanneld.pem \
  -etcd-keyfile=/etc/flanneld/cert/flanneld-key.pem \
  -etcd-endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 \
  -etcd-prefix=/kubernetes/network \
  -iface=eth0 \
  -ip-masq
ExecStartPost=/opt/k8s/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=always
RestartSec=5
StartLimitInterval=0

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

[root@master1 service]# ansible all -i /root/udp/hosts.ini -m copy -a "src=./flanneld.service dest=/etc/systemd/system/" 
[root@master1 ~]# ansible all -i /root/udp/hosts.ini -m shell -a "systemctl daemon-reload ;systemctl restart flanneld.service &&systemctl status flanneld.service"
[root@master1 ~]# ansible all -i /root/udp/hosts.ini -m shell -a "systemctl enable flanneld.service"

The mk-docker-opts.sh script writes the Pod subnet information allocated by flanneld into the /run/flannel/docker file; docker later uses this file as an environment file (DOCKER_NETWORK_OPTIONS) when it starts, to configure the docker0 bridge.
-ip-masq: when flanneld is started with -ip-masq, it sets up SNAT rules for traffic from Pods to destinations outside the Pod network, and sets docker's --ip-masq variable (in the /run/flannel/docker file) to false.
The SNAT rules created by flanneld are more moderate than docker's: only requests to destinations outside the Pod network are SNATed.
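
The difference is visible in the nat table; one way to inspect the MASQUERADE rules flanneld installed on a node (the exact rule text varies with the flannel version):
[root@master1 ~]# iptables -t nat -S POSTROUTING | grep -i masquerade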

5. View the pod subnet information

[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/flanneld/cert/flanneld.pem --key-file=/etc/flanneld/cert/flanneld-key.pem ls /kubernetes/network/subnets
/kubernetes/network/subnets/172.30.8.0-21
/kubernetes/network/subnets/172.30.96.0-21
/kubernetes/network/subnets/172.30.64.0-21
/kubernetes/network/subnets/172.30.32.0-21
/kubernetes/network/subnets/172.30.56.0-21

View the node IP and flannel interface address that a Pod subnet was allocated to:

[root@master1 cert]# etcdctl --endpoints=https://192.168.192.222:2379,https://192.168.192.223:2379,https://192.168.192.224:2379 --ca-file=/etc/kubernetes/cert/ca.pem --cert-file=/etc/flanneld/cert/flanneld.pem --key-file=/etc/flanneld/cert/flanneld-key.pem get /kubernetes/network/subnets/172.30.8.0-21
{"PublicIP":"192.168.192.223","BackendType":"vxlan","BackendData":{"VtepMAC":"f6:e1:42:b9:35:70"}}

PublicIP: 192.168.192.223   // the subnet 172.30.8.0/21 was allocated to node 192.168.192.223
VtepMAC: f6:e1:42:b9:35:70 is the MAC address of the flannel.1 interface on node 192.168.192.223
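
The VTEP MAC can be cross-checked on the node itself:
[root@master2 ~]# ip -d link show flannel.1   # the link/ether address should match the VtepMAC stored in etcd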

[root@master1 service]# cat  /run/flannel/docker 
DOCKER_OPT_BIP="--bip=172.30.96.1/21"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.30.96.1/21 --ip-masq=false --mtu=1450"

[root@master1 service]# cat  /run/flannel/subnet.env 
FLANNEL_NETWORK=172.30.0.0/16
FLANNEL_SUBNET=172.30.96.1/21
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

flanneld writes the network information allocated to the node into /run/flannel/docker;
docker later uses this environment file to configure the docker0 bridge, so that every Pod container on the node is assigned an IP from this subnet.
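
As a final check, confirm that every node received a flannel.1 interface inside the Pod network:
[root@master1 service]# ansible all -i /root/udp/hosts.ini -m shell -a "ip addr show flannel.1 | grep inet"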

References:
https://github.com/cloudflare/cfssl
https://kubernetes.io/docs/setup/
https://github.com/opsnull/follow-me-install-kubernetes-cluster/blob/master/04.%E9%83%A8%E7%BD%B2etcd%E9%9B%86%E7%BE%A4.md

Origin blog.51cto.com/hmtk520/2423253