Because my nodes run at home, they cannot pull the Kubernetes images hosted on k8s.gcr.io from the public Internet. Mirrored copies can be downloaded from Alibaba Cloud and Docker Hub instead, although the security of unofficial mirrors cannot be guaranteed the way the official images can.
Environment Introduction
Docker version
[root@k8s ~]# docker version
Client:
 Version:       18.03.0-ce
 API version:   1.37
 Go version:    go1.9.4
 Git commit:    0520e24
 Built:         Wed Mar 21 23:09:15 2018
 OS/Arch:       linux/amd64
 Experimental:  false
 Orchestrator:  swarm

Server:
 Engine:
  Version:      18.03.0-ce
  API version:  1.37 (minimum version 1.12)
  Go version:   go1.9.4
  Git commit:   0520e24
  Built:        Wed Mar 21 23:13:03 2018
  OS/Arch:      linux/amd64
  Experimental: false
kubectl version
[root@k8s ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes version
v1.10.0
Installation process
Set the hostname on each machine and configure /etc/hosts so the two hosts can resolve each other (the hosts file itself is not shown here)
hostnamectl set-hostname k8s    # on the master
hostnamectl set-hostname k8s1   # on the node
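The matching /etc/hosts entries can look like the sketch below. The master IP (192.168.1.127) matches the one used later in this post; the node IP (192.168.1.128) is an assumption, so adjust it to your network. The sketch writes to a local sample file; on a real host you would append to /etc/hosts.

```shell
# Example /etc/hosts entries so the two machines can resolve each other.
# 192.168.1.128 for the node is an assumed address.
cat <<'EOF' >> ./hosts.example
192.168.1.127 k8s
192.168.1.128 k8s1
EOF
cat ./hosts.example
```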
Stop the firewall and disable it at boot
systemctl stop firewalld && systemctl disable firewalld
Disable SELinux and keep it disabled across reboots
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
setenforce 0   # take effect immediately, without a reboot
Turn off swap and prevent it from being mounted automatically at boot
swapoff -a

[root@k8s ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sat Jun  1 16:28:24 2019
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                                 xfs  defaults  0 0
UUID=e60e4d97-753a-4db1-a590-54ba91cd48db /boot           xfs  defaults  0 0
#/dev/mapper/centos-swap swap                             swap defaults  0 0
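Commenting out the swap line by hand works, but it can also be done with one sed command. A sketch, demonstrated on a sample file so nothing on the running system is touched; on a real host you would run the same sed line against /etc/fstab:

```shell
# Build a small fstab-style sample, then comment out any uncommented swap entry.
printf '%s\n' \
  '/dev/mapper/centos-root / xfs defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > fstab.sample
# Prefix lines containing " swap " with '#' unless already commented.
sed -i '/ swap / s/^[^#]/#&/' fstab.sample
cat fstab.sample
```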
Modify iptables forwarding rules
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Load the bridge module and apply the configuration
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
Install the tools Docker depends on
sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
Add the Docker repo file
sudo yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo
View the versions that can be installed (optional)
yum list docker-ce --showduplicates | sort -r
Install Docker
sudo yum install docker-ce-18.03.0.ce-1.el7.centos
Start Docker and enable it at boot
systemctl start docker && systemctl enable docker
All of the steps above should be run on both the master node and the worker node.
Download the images
Run the following commands on the master. You can also put them in a shell script and run that directly, since typing the image names by hand is easy to get wrong.
docker pull cnych/kube-apiserver-amd64:v1.10.0
docker pull cnych/kube-scheduler-amd64:v1.10.0
docker pull cnych/kube-controller-manager-amd64:v1.10.0
docker pull cnych/kube-proxy-amd64:v1.10.0
docker pull cnych/k8s-dns-kube-dns-amd64:1.14.8
docker pull cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull cnych/k8s-dns-sidecar-amd64:1.14.8
docker pull cnych/etcd-amd64:3.1.12
docker pull cnych/flannel:v0.10.0-amd64
docker pull cnych/pause-amd64:3.1

docker tag cnych/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag cnych/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
docker tag cnych/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
docker tag cnych/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag cnych/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag cnych/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag cnych/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag cnych/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1

docker rmi cnych/kube-apiserver-amd64:v1.10.0
docker rmi cnych/kube-scheduler-amd64:v1.10.0
docker rmi cnych/kube-controller-manager-amd64:v1.10.0
docker rmi cnych/kube-proxy-amd64:v1.10.0
docker rmi cnych/k8s-dns-kube-dns-amd64:1.14.8
docker rmi cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker rmi cnych/k8s-dns-sidecar-amd64:1.14.8
docker rmi cnych/etcd-amd64:3.1.12
docker rmi cnych/flannel:v0.10.0-amd64
docker rmi cnych/pause-amd64:3.1
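Since each image goes through the same pull, tag, rmi cycle, the whole sequence can also be generated from a list instead of written out by hand. A sketch (abbreviated to two images here; extend the list with the remaining mirror/target pairs from above):

```shell
#!/bin/sh
# gen_cmds: for each "source=target" pair given as an argument, print the
# docker pull / tag / rmi commands that fetch the mirror image and retag it
# under the official name.
gen_cmds() {
  for pair in "$@"; do
    src=${pair%%=*}   # mirror image, before '='
    dst=${pair#*=}    # official name, after '='
    echo "docker pull $src"
    echo "docker tag $src $dst"
    echo "docker rmi $src"
  done
}

# Two entries from the master-node list in this post:
gen_cmds \
  "cnych/kube-apiserver-amd64:v1.10.0=k8s.gcr.io/kube-apiserver-amd64:v1.10.0" \
  "cnych/flannel:v0.10.0-amd64=quay.io/coreos/flannel:v0.10.0-amd64"
```

Pipe the output to `sh` (or inspect it first) to actually run the commands; the per-image ordering is equivalent to the grouped pull/tag/rmi blocks above.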
Run the following commands on the worker node; likewise, you can put them in a script.
docker pull cnych/kube-proxy-amd64:v1.10.0
docker pull cnych/flannel:v0.10.0-amd64
docker pull cnych/pause-amd64:3.1
docker pull cnych/kubernetes-dashboard-amd64:v1.8.3
docker pull cnych/heapster-influxdb-amd64:v1.3.3
docker pull cnych/heapster-grafana-amd64:v4.4.3
docker pull cnych/heapster-amd64:v1.4.2
docker pull cnych/k8s-dns-kube-dns-amd64:1.14.8
docker pull cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker pull cnych/k8s-dns-sidecar-amd64:1.14.8

docker tag cnych/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker tag cnych/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
docker tag cnych/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
docker tag cnych/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
docker tag cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker tag cnych/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
docker tag cnych/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
docker tag cnych/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
docker tag cnych/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
docker tag cnych/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2

docker rmi cnych/kube-proxy-amd64:v1.10.0
docker rmi cnych/flannel:v0.10.0-amd64
docker rmi cnych/pause-amd64:3.1
docker rmi cnych/kubernetes-dashboard-amd64:v1.8.3
docker rmi cnych/heapster-influxdb-amd64:v1.3.3
docker rmi cnych/heapster-grafana-amd64:v4.4.3
docker rmi cnych/heapster-amd64:v1.4.2
docker rmi cnych/k8s-dns-kube-dns-amd64:1.14.8
docker rmi cnych/k8s-dns-dnsmasq-nanny-amd64:1.14.8
docker rmi cnych/k8s-dns-sidecar-amd64:1.14.8
The following steps need to be performed on both the master and the worker nodes.
Install kubelet, kubeadm, kubectl and kubernetes-cni
Add the yum source
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the packages
yum makecache fast
yum install -y kubelet-1.10.0-0 kubeadm-1.10.0-0 kubectl-1.10.0-0 kubernetes-cni-0.6.0-0
View the Docker cgroup driver
docker info | grep Cgroup
Cgroup Driver: cgroupfs
Note: Docker here uses the cgroupfs driver, but the kubelet installed from yum defaults to the systemd driver, so the two do not match. We need to change the kubelet's cgroup driver by hand.
Modify the kubelet cgroup driver
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# In this file, replace
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
# with
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
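The same change can be scripted with sed instead of editing by hand. A sketch, demonstrated on a sample file; on a real host the target would be /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

```shell
# Build a one-line sample with the default (systemd) driver setting.
echo 'Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"' > 10-kubeadm.sample
# Flip the kubelet cgroup driver to cgroupfs to match Docker.
sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' 10-kubeadm.sample
cat 10-kubeadm.sample
```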
Reload the kubelet unit configuration
systemctl daemon-reload
The following commands are executed on the master
kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.1.127
Explanation:
--kubernetes-version: the k8s version
--pod-network-cidr: the network segment used between pods
--apiserver-advertise-address: the master node's address

Special note: if the cluster initializes successfully, the last line of the output is a kubeadm join command like the one below. Be sure to save it, as it is not printed again. This is the command the other nodes use to join the k8s cluster.

kubeadm join 192.168.1.127:6443 --token g8lrmt.1nhjcjqsk3a7l096 --discovery-token-ca-cert-hash sha256:0227ac613398b48b0d1949941c9f138b0444270cdd84ec9ef75e708ac14ca1cf
Following the prompts, set up the kubectl configuration
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
On the node, run the join command to add it to the k8s cluster
kubeadm join 192.168.1.127:6443 --token g8lrmt.1nhjcjqsk3a7l096 --discovery-token-ca-cert-hash sha256:0227ac613398b48b0d1949941c9f138b0444270cdd84ec9ef75e708ac14ca1cf
View cluster status on the master
kubectl get node
# The node may show as NotReady at this point; that is expected. It will
# become Ready once the pod network plugin is installed.
Install the flannel plugin on the master
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
View the pod startup status
[root@k8s ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE
kube-system   etcd-k8s                      1/1       Running   0          4m
kube-system   kube-apiserver-k8s            1/1       Running   0          4m
kube-system   kube-controller-manager-k8s   1/1       Running   0          4m
kube-system   kube-dns-86f4d74b45-nj7p8     3/3       Running   0          5m
kube-system   kube-flannel-ds-amd64-7tpmz   1/1       Running   0          2m
kube-system   kube-proxy-r6bgh              1/1       Running   0          5m
kube-system   kube-scheduler-k8s            1/1       Running   0          4m
At this point the k8s cluster is installed!
Install the dashboard UI management tool
Download the yaml file
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
For the installation process, see: https://www.cnblogs.com/harlanzhang/p/10045975.html