K8s binary installation: network setup (flannel and CoreDNS)

First, install the flannel network plugin

The plugin must be installed on every node so that all Pods join the same cluster network.

yum install -y flannel
vim /usr/lib/systemd/system/flanneld.service
	[Unit]
	Description=Flanneld overlay address etcd agent
	After=network.target
	After=network-online.target
	Wants=network-online.target
	After=etcd.service
	Before=docker.service
	
	[Service]
	Type=notify
	EnvironmentFile=/etc/sysconfig/flanneld
	EnvironmentFile=-/etc/sysconfig/docker-network
	ExecStart=/usr/bin/flanneld-start \
	  -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \
	  -etcd-prefix=${FLANNEL_ETCD_PREFIX} \
	  $FLANNEL_OPTIONS
	
	ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
	Restart=on-failure
	
	[Install]
	WantedBy=multi-user.target
	WantedBy=docker.service
	
vim /etc/sysconfig/flanneld
	# Flanneld configuration options  
	
	# etcd url location.  Point this to the server where etcd runs
	FLANNEL_ETCD_ENDPOINTS="https://192.168.80.112:2379,https://192.168.80.130:2379,https://192.168.80.146:2379"
	
	# etcd config key.  This is the configuration key that flannel queries
	# For address range assignment
	FLANNEL_ETCD_PREFIX="/kube-centos/network"
	
	# Any additional options that you want to pass
	FLANNEL_OPTIONS="-etcd-cafile=/opt/etcd_ca/ca.pem -etcd-certfile=/opt/etcd_ca/server.pem -etcd-keyfile=/opt/etcd_ca/server-key.pem"

# Write the network configuration that flannel will use into etcd
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'
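The value stored at /kube-centos/network/config must be valid JSON or flanneld will refuse to start, and the backend name flannel documents is "host-gw" (with a hyphen). A quick local sanity check before writing the key (using python3 purely for JSON validation is my addition, not part of the original setup):

```shell
# Validate the flannel network config locally before pushing it into etcd.
# Note the backend type: flannel's documented name is "host-gw", not "hostgw".
FLANNEL_CONFIG='{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'
echo "$FLANNEL_CONFIG" | python3 -c 'import json,sys; json.load(sys.stdin); print("valid JSON")'

# Then write it:
#   etcdctl mk /kube-centos/network/config "$FLANNEL_CONFIG"
```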

systemctl daemon-reload
systemctl enable flanneld
systemctl start flanneld
systemctl status flanneld
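Once flanneld is running, it records its lease in /run/flannel/subnet.env, the file that mk-docker-opts.sh in the unit above consumes. A small helper for reading the node's assigned subnet — a sketch of my own, not from the original post:

```shell
# Extract the local node's flannel subnet from a subnet.env-style file.
flannel_subnet() {
  grep '^FLANNEL_SUBNET=' "$1" | cut -d= -f2
}

# On a real node:
#   flannel_subnet /run/flannel/subnet.env   # e.g. 172.30.34.1/24
#   ip route | grep 172.30.                  # host-gw installs plain routes, no overlay device
```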

# Make docker use the network configured by the flannel plugin.
# Add the line below to the [Service] section of /usr/lib/systemd/system/docker.service;
# note the options only take effect if ExecStart also references $DOCKER_NETWORK_OPTIONS
# (e.g. ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS).
EnvironmentFile=/run/flannel/docker
systemctl daemon-reload
systemctl enable docker
systemctl start docker
systemctl status docker
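mk-docker-opts.sh turns the flannel lease into DOCKER_NETWORK_OPTIONS in /run/flannel/docker, a --bip value inside the node's flannel subnet, so docker0 lands on the same network. This helper (my addition, not from the original) extracts the --bip value so it can be compared against FLANNEL_SUBNET and against `ip addr show docker0`:

```shell
# Pull the --bip value out of a /run/flannel/docker-style file.
docker_bip() {
  grep -o 'bip=[^ "]*' "$1" | cut -d= -f2
}

# On a real node:
#   docker_bip /run/flannel/docker   # should match FLANNEL_SUBNET from subnet.env
#   ip addr show docker0             # docker0 should carry that address
```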

Try creating Pods, then verify connectivity: ping from nodes to Pods, and between Pods.

Second, install CoreDNS

Install CoreDNS in the K8s cluster to provide the DNS add-on. These steps are performed on the master node.

mkdir /opt/coredns  && cd /opt/coredns/
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/deploy.sh
wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
chmod +x deploy.sh
# Replace the $DNS_DOMAIN and $DNS_SERVER_IP variables with the actual values, and update the image referenced in the manifest. Here the deploy.sh script performs the substitution directly:
./deploy.sh -s -r 10.0.0.0/16 -i 10.0.0.2 -d cluster.local > coredns.yaml
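deploy.sh only substitutes variables into coredns.yaml.sed, so it is worth confirming that the rendered manifest actually carries the values passed above (-i 10.0.0.2, -d cluster.local) before running kubectl create. A hedged helper of my own:

```shell
# Check the rendered CoreDNS manifest for the expected ClusterIP and domain.
check_rendered() {
  grep -q 'clusterIP: 10.0.0.2' "$1" && grep -q 'cluster.local' "$1"
}

# On the master:
#   check_rendered coredns.yaml && echo "manifest looks consistent"
```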

[root@192-168-80-112 coredns]# kubectl create -f coredns.yaml

serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

[root@192-168-80-112 coredns]# kubectl get svc,pod -n kube-system
NAME               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns   ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP,9153/TCP   19s

NAME                           READY   STATUS    RESTARTS   AGE
pod/coredns-55f46dd959-rdsf7   1/1     Running   0          19s
pod/coredns-55f46dd959-vwxcd   1/1     Running   0          19s

Configure the kubelet on each node to use CoreDNS:
add the following three flags to /opt/kubernetes/cfg/kubelet
--cluster-dns=10.0.0.2 \
--cluster-domain=cluster.local. \
--resolv-conf=/etc/resolv.conf \
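Assuming the flags are written in their usual double-dash form (--cluster-dns=10.0.0.2, --cluster-domain=cluster.local.), here is a quick presence check plus the kubelet restart the change needs — my sketch, not from the original:

```shell
# Verify the DNS flags are present in the kubelet config (path from this post).
dns_flags_present() {
  grep -q 'cluster-dns=10.0.0.2' "$1" && grep -q 'cluster-domain=cluster.local' "$1"
}

# On each node:
#   dns_flags_present /opt/kubernetes/cfg/kubelet && echo "flags OK"
#   systemctl daemon-reload && systemctl restart kubelet
```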

Verify that CoreDNS works

kubectl run busybox --replicas=2 --labels="run=load-balancer-example" --image=busybox --port=80 --command -- sleep 3600

[root@192-168-80-112 coredns]# kubectl exec -it busybox-654f446c66-92s79 -- /bin/sh
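To confirm CoreDNS actually resolves cluster names, a common check (my addition, not in the original) is to run nslookup inside the busybox pod; the Server line should show 10.0.0.2, the kube-dns ClusterIP from the output above. A small helper that pulls the resolved address out of busybox's nslookup output (the output format varies across busybox versions, so treat this as a sketch):

```shell
# Read busybox nslookup output on stdin and print the last resolved address
# (busybox prints "Address 1: <ip> <name>" lines).
resolved_ip() {
  awk '/^Address 1:/ { ip = $3 } END { print ip }'
}

# Example (pod name from the transcript above):
#   kubectl exec busybox-654f446c66-92s79 -- nslookup kubernetes.default | resolved_ip
```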

Origin blog.csdn.net/weixin_42155272/article/details/92633888