kubeadm is the official Kubernetes tool for quickly standing up a Kubernetes cluster, and it is updated in step with each Kubernetes release.
First, preparation
1. System configuration
Before installing, first complete the following preparation. Two CentOS 7.5 hosts are used:
cat /etc/hosts
192.168.100.30 master
192.168.100.32 node2
Disable the firewall and SELinux:
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
vi /etc/selinux/config
SELINUX=disabled
Create the /etc/sysctl.d/k8s.conf file and modify the kernel parameters:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1

# Apply the changes
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
Swap needs to be disabled. (In this deployment, because the servers are low on resources, swap was not turned off; the swap error is simply ignored later in the process.)
swapoff -a        # temporary
vim /etc/fstab    # permanent: comment out the swap line
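To make the change permanent without editing the file by hand, the swap entry can also be commented out with sed. A minimal sketch, assuming the swap entry is the only uncommented line in /etc/fstab that mentions swap (re-check the file afterwards):

swapoff -a
# prefix every line containing "swap" with a comment marker
sed -ri 's/.*swap.*/#&/' /etc/fstab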
Time Synchronization
yum install ntpdate -y
echo "*/20 * * * * /usr/sbin/ntpdate -u ntp.api.bz >/dev/null &" >> /var/spool/cron/root
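As an alternative to the ntpdate cron job (not part of the original steps), chrony, the default time service on CentOS 7, can keep the clocks synchronized continuously:

yum -y install chrony
systemctl start chronyd
systemctl enable chronyd
chronyc sources    # verify that time sources are reachable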
2. Prerequisites for enabling ipvs in kube-proxy
Since IPVS has already been merged into the mainline kernel, enabling ipvs mode in kube-proxy requires loading the following kernel modules:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
Run the following on all nodes:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically when a node restarts. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check whether the required kernel modules have been loaded correctly.
Install ipset and ipvsadm on each node:
yum -y install ipset ipvsadm
If these prerequisites are not met, kube-proxy will fall back to iptables mode even if its configuration enables ipvs mode.
3. Install Docker
Configure the yum repository for Docker:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Install Docker:
yum -y install docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io
Change the Docker cgroup driver to systemd
Using systemd as the cgroup driver keeps server nodes more stable when resources are tight, so change the Docker cgroup driver to systemd on each node.

vim /etc/docker/daemon.json    # create the file if it does not exist
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
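The same file can also be written non-interactively; a minimal sketch (note that this overwrites any existing daemon.json):

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF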
Start Docker:
systemctl restart docker    # start Docker
systemctl enable docker     # start Docker on boot
docker info | grep -E "Server Version|Cgroup"
Server Version: 18.09.6
Cgroup Driver: systemd
Second, install kubeadm
Configure the Kubernetes yum repository (Aliyun mirror):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubeadm, and kubectl:
yum -y install kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2
Enable kubelet at boot. (Note: only enable it here; do not start it yet!)
systemctl enable kubelet
Third, create the cluster
Configure kubelet to ignore swap:
vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
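Equivalently, the setting can be written in one command; a sketch, assuming /etc/sysconfig/kubelet contains nothing else you want to keep (the kubelet package ships it with only an empty KUBELET_EXTRA_ARGS= line):

echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet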
1. Initialize the master node
kubeadm init --kubernetes-version=v1.15.2 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
# Parameter description
--kubernetes-version             # specify the Kubernetes version
--image-repository               # use the Aliyun mirror as the image repository
--pod-network-cidr               # specify the pod network segment
--service-cidr                   # specify the service network segment
--ignore-preflight-errors=Swap   # ignore the swap error
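The same flags can also be expressed as a kubeadm configuration file, which is easier to keep in version control. A minimal sketch using the v1beta2 config format accepted by kubeadm 1.15 (the file name kubeadm-config.yaml is arbitrary):

# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.2
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12

kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=Swap

When the initialization succeeds, kubeadm prints output like the following: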
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.100.30:6443 --token c416qn.rdupmak2rhf5pqd8 \
--discovery-token-ca-cert-hash sha256:7c9f791d1008f061ea76ea1c8bae6b254246f6c92917a1fd0dcc4d0d8b4a1d51
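The token in the join command expires after 24 hours by default. If it has expired by the time you add a node, a fresh join command can be printed on the master (the regenerated command will not include --ignore-preflight-errors=Swap; append it manually if needed):

kubeadm token create --print-join-command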
Create the kubectl configuration file as the output above instructs:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Check the state of the cluster and make sure the components are healthy:
kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
If you run into problems while initializing the cluster, you can clean up with the following commands:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
Install the network plugin (flannel)
mkdir -p ~/k8s/    # create a directory for the yaml file
cd ~/k8s
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml    # may take several attempts
kubectl apply -f kube-flannel.yml
# If a node has multiple network interfaces, you must specify the interface used inside the cluster with the --iface parameter in kube-flannel.yml; otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.11.0-amd64
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth1
......
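If you prefer not to edit the manifest by hand, the argument can be appended with sed; a sketch, assuming GNU sed, that the args list is indented with eight spaces as in the upstream manifest, and that eth1 is the cluster-facing interface (eth1 is illustrative):

# insert "- --iface=eth1" after the --kube-subnet-mgr argument
sed -i '/--kube-subnet-mgr/a\        - --iface=eth1' kube-flannel.yml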
Use kubectl get pod --all-namespaces -o wide to ensure that all Pods are in the Running state.
kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-bccdc95cf-7md8h          1/1     Running   0          9m47s   10.244.0.2       master   <none>           <none>
kube-system   coredns-bccdc95cf-lff4h          1/1     Running   0          9m47s   10.244.0.3       master   <none>           <none>
kube-system   etcd-master                      1/1     Running   0          8m43s   192.168.100.30   master   <none>           <none>
kube-system   kube-apiserver-master            1/1     Running   0          9m4s    192.168.100.30   master   <none>           <none>
kube-system   kube-controller-manager-master   1/1     Running   0          8m49s   192.168.100.30   master   <none>           <none>
kube-system   kube-flannel-ds-amd64-f5t5p      1/1     Running   0          8m10s   192.168.100.30   master   <none>           <none>
kube-system   kube-proxy-m8gjz                 1/1     Running   0          9m47s   192.168.100.30   master   <none>           <none>
kube-system   kube-scheduler-master            1/1     Running   0          9m8s    192.168.100.30   master   <none>           <none>
2. Add a worker node
Next, add the host node2 to the Kubernetes cluster by running the join command recorded earlier on node2:
# Because swap was not disabled, the parameter to ignore the swap error must be added, or the join will fail
kubeadm join 192.168.100.30:6443 --token c416qn.rdupmak2rhf5pqd8 \
    --discovery-token-ca-cert-hash sha256:7c9f791d1008f061ea76ea1c8bae6b254246f6c92917a1fd0dcc4d0d8b4a1d51 \
    --ignore-preflight-errors=Swap
node2 joins the cluster successfully. Run the following command on the master node to view the cluster nodes:
kubectl get node
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   49m   v1.15.2
node2    Ready    <none>   90s   v1.15.2
3. How to remove a node
Run on the master node:
kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2
# If there are other nodes, this needs to be run on them as well
kubectl delete node node2
Run on node2:
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
4. Enable ipvs in kube-proxy
Modify config.conf in the kube-system/kube-proxy ConfigMap, setting mode: "ipvs":
kubectl edit configmap kube-proxy -n kube-system
Then restart the kube-proxy pods on each node:
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
Check that ipvs is enabled:
kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-62pjd   1/1   Running   0   118s
kube-proxy-7mczc   1/1   Running   0   2m

# view the logs of one of the pods
kubectl logs kube-proxy-62pjd -n kube-system
I1213 09:15:10.643663       1 server_others.go:170] Using ipvs Proxier.
W1213 09:15:10.644248       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I1213 09:15:10.644851       1 server.go:534] Version: v1.15.2
I1213 09:15:10.675116       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1213 09:15:10.675671       1 config.go:187] Starting service config controller
I1213 09:15:10.675701       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I1213 09:15:10.675806       1 config.go:96] Starting endpoints config controller
I1213 09:15:10.675824       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I1213 09:15:10.776116       1 controller_utils.go:1036] Caches are synced for service config controller
I1213 09:15:10.776311       1 controller_utils.go:1036] Caches are synced for endpoints config controller
The log prints "Using ipvs Proxier", which shows that ipvs mode is enabled.
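The ipvs rule set itself can also be inspected directly with the ipvsadm tool installed earlier; a quick check (the output will vary with your services):

ipvsadm -Ln    # list the IPVS virtual server table with numeric addresses and ports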