Follow this tutorial to install Docker: https://blog.csdn.net/u010606397/article/details/89816295
Note: the docker-base VM must already have Docker installed, the Docker registry mirror configured, and the firewall turned off.
Configure the k8s-base virtual machine template
Clone a new virtual machine from docker-base: Clone -> Generate new MAC addresses for all network cards -> Full clone. Then:
Open the virtual machine settings and adjust the memory, CPU count, and network
Set the memory to 2048 MB
A k8s node needs at least 2 CPUs
To let the nodes communicate with each other, set the network adapter to bridged mode. Configure only one network card; do not add multiple NICs
Start the virtual machine
Kubernetes cluster node names must be unique, so we need to change the CentOS hostname
vim /etc/hostname
Change it to k8sbase, then save and exit
View the NIC information
ip addr
My VM's NIC is named enp0s3; yours may have a different name
Edit the NIC configuration
vim /etc/sysconfig/network-scripts/ifcfg-enp0s3   # enp0s3 is the NIC name; yours may differ
Run ipconfig /all in a command prompt on the host machine to view the host's network configuration
First comment out BOOTPROTO=dhcp and ONBOOT=no, then add:
# use a static address
BOOTPROTO=static
# bring the NIC up at boot
ONBOOT=yes
# the last 3 digits must differ from the host's IP
IPADDR=192.168.1.200
# subnet mask
NETMASK=255.255.255.0
# same gateway as the host
GATEWAY=192.168.1.1
# same DNS as the host
DNS1=116.116.116.116
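After saving, the new static address can be applied without rebooting by restarting the network service (the stock service name on CentOS 7) and checking the NIC:

```shell
# Apply the new static IP configuration (CentOS 7)
systemctl restart network
# Confirm the NIC now carries the address you configured
ip addr show enp0s3
```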
Disable SELinux
vim /etc/selinux/config
SELINUX=disabled
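Editing the file only takes effect after a reboot. SELinux can also be switched off immediately, without hand-editing, using the usual kubeadm-prerequisite commands:

```shell
# Switch SELinux to permissive mode for the current boot
setenforce 0
# Persist the change across reboots (same effect as the manual edit above)
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```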
Disable the swap partition
vi /etc/fstab
Comment out the line /dev/mapper/centos-swap swap XXXXX
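The fstab edit only affects future boots; swap can be turned off for the current session as well:

```shell
# Turn off all swap immediately (kubelet refuses to run with swap enabled;
# the fstab edit above keeps it off after a reboot)
swapoff -a
```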
Use the Aliyun Kubernetes mirror repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Check that the repository is configured successfully
yum repolist
Shut down k8sbase
shutdown -h now
Create the k8s01 virtual machine
With k8s-base configured, we use it as the base virtual machine and clone three virtual machines from it.
First clone the k8s01 virtual machine; the cloning steps are the same as for k8s-base above, in short:
Clone -> Generate new MAC addresses for all network cards -> Full clone
Change the hostname to k8s01
vim /etc/hostname
Change the IP: IPADDR=192.168.XX.XX   # any other unused IP on the same subnet as the host
vim /etc/sysconfig/network-scripts/ifcfg-enp0s3
Restart the virtual machine
reboot
Since the hostname changed, you may need to delete the old connection in Xshell and create a new one to reconnect to the virtual machine. Check which packages are available:
yum list | grep kubeadm
Install kubelet-1.14.1-0, kubectl-1.14.1-0, kubeadm-1.14.1-0
yum -y install kubelet-1.14.1-0 kubectl-1.14.1-0 kubeadm-1.14.1-0
kubeadm version
Start kubelet and enable it at boot
systemctl start kubelet
systemctl enable kubelet
The Kubernetes components live under the k8s.gcr.io registry, but images under k8s.gcr.io cannot be downloaded from mainland China.
The workaround is to pull the Kubernetes component images from other registries, then use docker tag to create local k8s.gcr.io/XXX images, so the virtual machine ends up with local images under the k8s.gcr.io name.
Pull the Kubernetes component images needed by the master node, version 1.14.1:
docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.14.1
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.1
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.14.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.14.1
docker pull mirrorgooglecontainers/etcd-amd64:3.3.10
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull coredns/coredns:1.3.1
Create the local k8s.gcr.io images with docker tag:
docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.14.1 k8s.gcr.io/kube-apiserver:v1.14.1
docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.1 k8s.gcr.io/kube-controller-manager:v1.14.1
docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.14.1 k8s.gcr.io/kube-scheduler:v1.14.1
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
docker tag mirrorgooglecontainers/etcd-amd64:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
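The fourteen pull/tag commands above all follow one pattern, so they can also be generated by a small helper and piped to `sh`. This is just a sketch of the same mapping; the `gcr_pull_tag` function name is my own:

```shell
#!/bin/sh
# Print the docker pull + docker tag command pair for one image.
# $1 = source image on Docker Hub, $2 = target name under k8s.gcr.io
gcr_pull_tag() {
  echo "docker pull $1"
  echo "docker tag $1 k8s.gcr.io/$2"
}

# Generate all master-node commands; pipe this script's output to `sh` to run them.
gcr_pull_tag mirrorgooglecontainers/kube-apiserver-amd64:v1.14.1          kube-apiserver:v1.14.1
gcr_pull_tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.1 kube-controller-manager:v1.14.1
gcr_pull_tag mirrorgooglecontainers/kube-scheduler-amd64:v1.14.1          kube-scheduler:v1.14.1
gcr_pull_tag mirrorgooglecontainers/kube-proxy-amd64:v1.14.1              kube-proxy:v1.14.1
gcr_pull_tag mirrorgooglecontainers/etcd-amd64:3.3.10                     etcd:3.3.10
gcr_pull_tag mirrorgooglecontainers/pause-amd64:3.1                       pause:3.1
gcr_pull_tag coredns/coredns:1.3.1                                        coredns:1.3.1
```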
# iptables sometimes fails to forward packets; adjust the kernel configuration
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Apply the configuration
sudo sysctl -p
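If `sysctl -p` complains that `net.bridge.bridge-nf-call-iptables` is an unknown key, the `br_netfilter` kernel module is not loaded yet; it can be loaded now and on future boots:

```shell
# Load the bridge netfilter module required by bridge-nf-call-iptables
modprobe br_netfilter
# Load it automatically at boot (systemd modules-load.d convention)
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
```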
Initialize the master node; 10.244.0.0/16 is the subnet used by the flannel network plugin
kubeadm init --apiserver-advertise-address=<host ip> --kubernetes-version=v1.14.1 --pod-network-cidr=10.244.0.0/16
Output like the following means the master node initialized successfully. Copy the kubeadm join command out; you will need it when adding new nodes:
kubeadm join 192.168.1.201:6443 --token m8oXXXXXvp5c \
--discovery-token-ca-cert-hash sha256:XXXX
###########################################################
# If the init process hangs, open another terminal and run the following command to view the logs
journalctl -f -u kubelet.service
# Reset the node to its state before kubeadm init, then run kubeadm init again
kubeadm reset
# The token is valid for 24 hours; to add nodes after 24 hours, create a new token
kubeadm token create
# View the tokens
kubeadm token list
####################################################################
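Instead of creating a token and assembling the join line by hand, kubeadm can also print a complete, ready-to-paste join command in one step:

```shell
# Create a fresh token and print the full kubeadm join command for it
kubeadm token create --print-join-command
```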
Configure the KUBECONFIG environment variable so that the kubectl command still works after a restart
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/bashrc
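The export above works for root; kubeadm init's own output suggests an equivalent per-user setup that survives reboots without touching /etc/bashrc:

```shell
# Per-user kubectl configuration, as printed by kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```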
Install the flannel network plugin.
https://raw.githubusercontent.com may be blocked; if so, download kube-flannel.yml on the host machine and copy it to the virtual machine.
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
Check the master node's status; wait a minute or so for the network plugin to come up. If the KUBECONFIG environment variable is not set, this command will fail
kubectl get pods -n kube-system -o wide
The master node is now installed successfully.
Create the k8s02 virtual machine
Clone k8s02 from k8s-base as well.
Clone -> Generate new MAC addresses for all network cards -> Full clone
Change the hostname to k8s02
vim /etc/hostname
Change the IP: IPADDR=192.168.XXX   # any other unused IP on the same subnet as the host
vim /etc/sysconfig/network-scripts/ifcfg-enp0s3
# iptables sometimes fails to forward packets; adjust the kernel configuration
vim /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
Apply the configuration
sudo sysctl -p
Restart the virtual machine
reboot
Install kubelet-1.14.1-0, kubectl-1.14.1-0, kubeadm-1.14.1-0
yum -y install kubelet-1.14.1-0 kubectl-1.14.1-0 kubeadm-1.14.1-0
kubeadm version
Start kubelet and enable it at boot
systemctl start kubelet
systemctl enable kubelet
The worker node needs fewer images:
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.14.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
Run the kubeadm join command obtained when initializing the master node to add this node to the cluster
kubeadm join 192.168.1.201:6443 --token m8oXXXXXvp5c \
    --discovery-token-ca-cert-hash sha256:XXXX
Run kubectl get nodes on the master node to see all the nodes in the cluster
I forgot to change the master node's hostname, so it is still k8sbase; that doesn't matter, just leave it as is.
##############################################################
# To delete a node, run the following two commands
# On the master node, run
kubectl delete node k8s03
# On the k8s03 node, reset its state
kubeadm reset
##############################################################
The steps for the k8s03 node are the same as for k8s02; remember to change the hostname and IP.