Setting up a Kubernetes cluster on CentOS, part 1: preparing the host environment

Although distributions such as RancherOS and CoreOS exist for this purpose, installing a Kubernetes cluster by hand is not that much trouble, so let's start with the most basic experiment. The following are my notes on installing a Kubernetes cluster on CentOS 7.6, recorded section by section and continuously updated...

1. Kubernetes host environment prerequisites

1.1 Cluster planning

A production Kubernetes cluster can be laid out in several ways, for example:

  • Option 1: three or five master nodes, each running the full set of roles (etcd and Control Plane); all other machines are worker nodes running the worker role;
  • Option 2: three nodes run the etcd role; two nodes run the Control Plane role; all other machines are worker nodes running the worker role;

But for now I only have a laptop with a seventh-generation i7. It does have 16 GB of RAM, but a dual-core, hyper-threaded CPU really isn't much to work with. A single-node minikube installation doesn't count as a real cluster, so other nice small experimental setups will have to wait and be tried one by one later. My laptop runs Fedora, and I use KVM to create three virtual machines, each with 1 vCPU and 1 GB of RAM: one master node and two compute nodes.
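For reference, the three VMs can be created with virt-install; a minimal sketch follows (the VM names, disk size, ISO path, and network are illustrative, not from the original setup):

```shell
# Create one VM; repeat with a different --name for node01 and node02.
# Assumes the default libvirt network and a local CentOS 7 minimal ISO.
virt-install \
  --name master01 \
  --vcpus 1 \
  --memory 1024 \
  --disk size=10 \
  --cdrom /var/lib/libvirt/images/CentOS-7-x86_64-Minimal.iso \
  --os-variant centos7.0 \
  --network network=default
```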
The Kubernetes components to be installed on each node are as follows:

  • Master node:
    • kube-apiserver
    • kube-controller-manager
    • kube-scheduler
    • kube-proxy
    • pause
    • etcd
    • coredns
  • Worker nodes:
    • kube-proxy
    • pause
    • flannel (the network plugin chosen for this experiment)

1.2 Configure time synchronization

Use chrony for this: its configuration file is /etc/chrony.conf (configure the upstream network time servers there), and the service is started with systemctl. Whether or not the cluster can reach the Internet, it is recommended to configure a time synchronization server for it.
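A minimal chrony setup might look like this (the NTP server name below is an example; any reachable server, or the distribution's default pool, will do):

```shell
# Install and enable chrony on every node
yum install -y chrony
# Optionally set an explicit time server in /etc/chrony.conf, e.g.:
#   server ntp.example.com iburst
systemctl enable chronyd.service
systemctl start chronyd.service
# Verify that a source has been selected (look for a line starting with ^*)
chronyc sources
```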

1.3 Configure name resolution (DNS or /etc/hosts)

Add IP-to-hostname mappings for all cluster nodes to /etc/hosts on every node (this also reduces DNS resolution latency).
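For example, assuming the hostnames and addresses below (both are illustrative), append the mappings on every node:

```shell
# Append cluster name resolution to /etc/hosts (run as root on each node)
cat >> /etc/hosts <<EOF
192.168.122.10 master01
192.168.122.11 node01
192.168.122.12 node02
EOF
```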

1.4 Disable the firewall

Disable firewalld on CentOS 7:

# systemctl stop firewalld.service

# systemctl disable firewalld.service

1.5 Disable SELinux

# setenforce 0

Also make the change persistent in /etc/selinux/config:

# vim /etc/selinux/config
...
SELINUX=disabled

or

# sed -i 's@^\(SELINUX=\).*@\1disabled@' /etc/sysconfig/selinux

1.6 Disable the swap device

Starting with Kubernetes 1.8, system swap must be disabled; if it is not, the kubelet will fail to start. Check current swap usage:

# free -m

Disable it temporarily:

# swapoff -a
# echo "vm.swappiness = 0" >> /etc/sysctl.conf

or

# swapoff -a && sysctl -w vm.swappiness=0

To disable swap permanently, edit /etc/fstab and comment out the line that mounts the swap device.
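One way to do that non-interactively (GNU sed; a backup copy of the file is kept):

```shell
# Comment out any line that mounts a swap device; a backup is saved
# as /etc/fstab.bak before the edit.
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```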

Note: swap is not actually disabled in this experiment because resources are limited; a workaround (suitable for the experimental environment only) is covered in a later part.

1.7 Enable IPVS kernel module

kube-proxy supports both iptables and ipvs modes. If the prerequisites are met, ipvs is used by default; otherwise kube-proxy falls back to iptables.

cat <<EOF > /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/kubernetes.conf   # or: sysctl --system

Since ipvs has already been merged into the kernel mainline, enabling ipvs mode for kube-proxy requires loading the following kernel modules:

  • ip_vs
  • ip_vs_rr
  • ip_vs_wrr
  • ip_vs_sh
  • nf_conntrack_ipv4

Run the following script to load the modules:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Reference: https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/ipvs/README.md

The script above creates the file /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Use the lsmod | grep -e ip_vs -e nf_conntrack_ipv4 command to check whether the required kernel modules were loaded correctly.

Each node also needs the ipset package installed. To make it easier to inspect ipvs proxy rules, it is best to install the management tool ipvsadm as well.

# yum install ipset ipvsadm

You can use ipvsadm to check whether ipvs is in effect, for example:

# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.0.0.1:443 rr persistent 10800
  -> 192.168.0.1:6443             Masq    1      1          0

If these prerequisites are not met, kube-proxy will fall back to iptables mode even if it is configured to use ipvs.


Origin blog.51cto.com/huanghai/2455344