Kubernetes network installation

### Flannel installation
```bash
yum install flannel -y
```
#### Startup command

The unit file at /usr/lib/systemd/system/flanneld.service:

```ini
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/flanneld
EnvironmentFile=-/etc/sysconfig/docker-network
ExecStart=/usr/bin/flanneld-start \
        -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
        -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
        $FLANNEL_OPTIONS
ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure

[Install]
WantedBy=multi-user.target
WantedBy=docker.service
```
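The ExecStartPost line writes a DOCKER_NETWORK_OPTIONS variable (bridge IP and MTU for the node's flannel subnet) to /run/flannel/docker. For Docker to pick it up, its unit must load that file and pass the variable on its ExecStart line. A minimal sketch of a drop-in, assuming the stock CentOS docker.service already references $DOCKER_NETWORK_OPTIONS:

```ini
# /etc/systemd/system/docker.service.d/flannel.conf  (hypothetical drop-in path)
[Service]
EnvironmentFile=-/run/flannel/docker
```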
#### Configuration file

Contents of /etc/sysconfig/flanneld:

```bash
# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
```
##### Notes

- FLANNEL_ETCD_ENDPOINTS: the addresses of the etcd cluster.
- FLANNEL_ETCD_PREFIX: the etcd key prefix flannel queries; it holds the Docker IP address range configuration.
- If the host has multiple network interfaces (for example, a Vagrant environment), add the external egress interface to FLANNEL_OPTIONS, e.g. `-iface=eth2`, as in the sketch below.
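For example, a multi-NIC variant of the FLANNEL_OPTIONS line above (eth2 is the assumed external interface):

```bash
FLANNEL_OPTIONS="-iface=eth2 -etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
```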
#### Create the network configuration in etcd
##### This only needs to be run once, on the master

```bash
etcdctl --endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mkdir /kube-centos/network
```

```bash
etcdctl --endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  mk /kube-centos/network/config '{"Network":"10.30.0.0/16","SubnetLen":24,"Backend":{"Type":"vxlan"}}'
```
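With the etcd configuration in place, flanneld can be started on each node. A sketch of the usual systemd sequence; restarting Docker afterwards lets it reload the bridge options flanneld wrote:

```bash
systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl restart docker   # picks up DOCKER_NETWORK_OPTIONS from /run/flannel/docker
```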

##### Verify flannel

Read back the stored configuration:

```bash
/usr/local/bin/etcdctl \
  --endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  get /kube-centos/network/config
```
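Once flanneld is running, each node registers its subnet lease under the prefix; listing the subnets directory is a quick way to confirm (the key path follows from FLANNEL_ETCD_PREFIX above):

```bash
/usr/local/bin/etcdctl \
  --endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  ls /kube-centos/network/subnets
```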
##### Delete the Docker network configuration from etcd

```bash
/usr/local/bin/etcdctl \
  --endpoints=https://172.16.20.206:2379,https://172.16.20.207:2379,https://172.16.20.208:2379 \
  --ca-file=/etc/kubernetes/ssl/ca.pem \
  --cert-file=/etc/kubernetes/ssl/kubernetes.pem \
  --key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
  rm /kube-centos/network/config
```
### Calico

Add the following flags to the kubelet startup service file:

```
--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin
```

Note that the DNS address given to kubelet is a cluster service IP, not a pod IP; use an IP from the `--service-cluster-ip-range` segment. Then apply the authorization file rbac.yaml to Kubernetes, and deploy the file calico.yaml.
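A sketch of the two deployment commands, assuming rbac.yaml and calico.yaml have been downloaded to the current directory:

```bash
kubectl apply -f rbac.yaml
kubectl apply -f calico.yaml
```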
Calico treats each host's operating-system protocol stack as a router and every container as a network endpoint connected to that router. The routers run a standard routing protocol between themselves, BGP, and learn the network topology in order to forward traffic. Calico is therefore a pure layer-3 solution: it uses each machine's layer-3 protocol stack to provide layer-2/layer-3 cross-host connectivity between containers.

Calico does not use an overlay network such as flannel's or libnetwork's overlay driver. It is purely layer 3, replacing virtual switching with virtual routing, and each virtual router propagates its reachability information (routes) to the rest of the data center via BGP.

On each compute node, Calico uses the Linux kernel to implement an efficient vRouter responsible for data forwarding, and each vRouter uses BGP to propagate the routing information for the workloads running on it across the entire Calico network. Small-scale deployments can be directly interconnected; large-scale ones can use designated BGP route reflectors.
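A quick way to see the BGP sessions a node's vRouter has established, assuming the calicoctl binary is installed on the node:

```bash
calicoctl node status
```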

Edit the ConfigMap in calico.yaml:

```yaml
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://10.3.1.15:2379,https://10.3.1.16:2379,https://10.3.1.17:2379"

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"     # just uncomment these three lines
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"
```

Then fill in the Secret with the base64-encoded certificate data:

```yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  etcd-key: (cat /etc/kubernetes/ssl/etcd-key.pem | base64 | tr -d '\n')  # paste the command output here
  etcd-cert: (cat /etc/kubernetes/ssl/etcd.pem | base64 | tr -d '\n')     # paste the command output here
  etcd-ca: (cat /etc/kubernetes/ssl/ca.pem | base64 | tr -d '\n')         # paste the command output here
  # If etcd does not have TLS enabled, set these values to null.
```
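A sketch of generating all three values in one go (certificate paths as above; adjust to wherever your etcd certificates actually live):

```bash
for f in etcd-key.pem etcd.pem ca.pem; do
  echo "$f: $(base64 < /etc/kubernetes/ssl/$f | tr -d '\n')"
done
```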
The parameters above must be modified. The file also contains a parameter that sets the pod network address; change it according to your environment:

```yaml
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"
```
The main parameters of the calico-node service:

- CALICO_IPV4POOL_CIDR: the IP address pool for Calico IPAM; pod IPs are allocated from this pool.
- CALICO_IPV4POOL_IPIP: whether to enable IPIP mode. When enabled, Calico creates a tunl0 virtual tunnel interface on each node.
- FELIX_LOGSEVERITYSCREEN: the log level.
- FELIX_IPV6SUPPORT: whether to enable IPv6.

An IP pool can use one of two modes: BGP or IPIP. To use IPIP mode, set CALICO_IPV4POOL_IPIP="always"; to use BGP mode instead, set it to "off". IPIP builds a tunnel between the nodes' routes to connect the two networks; when it is enabled, Calico creates a virtual network interface named "tunl0" on each node.

To use BGP mode:

```yaml
- name: CALICO_IPV4POOL_IPIP    # disable IPIP mode
  value: "never"
- name: FELIX_IPINIPENABLED     # disable IPIP in Felix
  value: "false"
```

Official recommendations for Typha:

For production environments with up to 50 nodes, keep the defaults:

```yaml
typha_service_name: "none"   # in the calico-config ConfigMap
replicas: 0                  # in the calico-typha Deployment
```
For 100-200 nodes: in the ConfigMap named calico-config, locate typha_service_name, delete the none value, and replace it with calico-typha. Then set the replica count in the Deployment named calico-typha to the desired number of replicas:

```yaml
typha_service_name: "calico-typha"   # in the calico-config ConfigMap
replicas: 3                          # in the calico-typha Deployment
```
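Equivalently, the replica count can be adjusted on a running cluster (a sketch; deployment name as in the manifest above):

```bash
kubectl -n kube-system scale deployment calico-typha --replicas=3
```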
Scaling beyond that: we recommend at least one replica for every 200 nodes, and no more than 20 replicas. In production, we recommend a minimum of three replicas to reduce the impact of rolling upgrades and failures.

Warning: if you set typha_service_name without increasing the replica count from its default of 0, Felix will try to connect to Typha, find no Typha instances to connect to, and fail to start.
