Preparation
Prepare three CentOS machines.
Make sure they can ping each other, i.e. that they are on the same network.
Unified versions
Docker 18.09.0
kubeadm-1.14.0-0
kubelet-1.14.0-0
kubectl-1.14.0-0
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
calico:v3.9
Installation steps
1 Update and install dependencies
Run on all three machines:
yum -y update
yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp
2 Install Docker
Install Docker 18.09.0 on every machine.
Install the necessary dependencies
sudo yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
Set up the Docker repository
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
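Optionally, verify that version 18.09.0 is available from the repo before installing:
yum list docker-ce --showduplicates | sort -r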
Set up the Alibaba Cloud registry mirror (accelerator)
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["这边替换成自己的实际地址"]
}
EOF
sudo systemctl daemon-reload
Install docker
yum install -y docker-ce-18.09.0 docker-ce-cli-18.09.0 containerd.io
Start docker
sudo systemctl start docker && sudo systemctl enable docker
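An optional smoke test to confirm Docker is up:
sudo docker version
sudo docker run hello-world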
3 Set hostnames and modify the hosts file
Master
# Set the master's hostname and edit the hosts file
sudo hostnamectl set-hostname m
vi /etc/hosts
192.168.8.51 m
192.168.8.61 w1
192.168.8.62 w2
The two workers
# Set the hostname on worker01/worker02 and edit the hosts file
sudo hostnamectl set-hostname w1   # on worker01
sudo hostnamectl set-hostname w2   # on worker02
vi /etc/hosts
192.168.8.51 m
192.168.8.61 w1
192.168.8.62 w2
Test connectivity with ping
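For example, from the master:
ping -c 3 w1
ping -c 3 w2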
4 Basic system configuration
Turn off the firewall
systemctl stop firewalld && systemctl disable firewalld
Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
Flush iptables and set the FORWARD chain policy to ACCEPT
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
Set system parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
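If sysctl complains that these keys do not exist, the br_netfilter module is probably not loaded yet; depending on your kernel you may need:
modprobe br_netfilter
sysctl --system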
5 Install kubeadm, kubelet and kubectl
Configure the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubeadm&kubelet&kubectl
yum install -y kubeadm-1.14.0-0 kubelet-1.14.0-0 kubectl-1.14.0-0
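Optionally confirm the installed versions:
kubeadm version
kubelet --version
kubectl version --client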
Set Docker and k8s to use the same cgroup driver
# docker
vi /etc/docker/daemon.json
"exec-opts": ["native.cgroupdriver=systemd"],
systemctl restart docker
# kubelet: if this outputs "directory not exist", that is also fine; just continue
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl enable kubelet && systemctl start kubelet
6 Pull the proxy/pause/scheduler images from a domestic mirror
This works around the problem that the foreign registry k8s.gcr.io cannot be reached from mainland China.
Create a kubeadm.sh script that pulls each image from the Aliyun mirror, retags it as k8s.gcr.io, and deletes the original tag.
#!/bin/bash
set -e
KUBE_VERSION=v1.14.0
KUBE_PAUSE_VERSION=3.1
ETCD_VERSION=3.3.10
CORE_DNS_VERSION=1.3.1
GCR_URL=k8s.gcr.io
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
images=(kube-proxy:${KUBE_VERSION}
kube-scheduler:${KUBE_VERSION}
kube-controller-manager:${KUBE_VERSION}
kube-apiserver:${KUBE_VERSION}
pause:${KUBE_PAUSE_VERSION}
etcd:${ETCD_VERSION}
coredns:${CORE_DNS_VERSION})
# Pull each image from the Aliyun mirror, retag it as k8s.gcr.io, then remove the mirror tag
for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
Run the script and check the images
# Run the script
sh ./kubeadm.sh
# List the images
docker images
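All seven k8s.gcr.io images from the version list at the top should now be present; a quick filter:
docker images | grep k8s.gcr.io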
7 Initialize the master with kubeadm init
Initialize the master node
Note: this operation is performed on the master node.
kubeadm init --kubernetes-version=1.14.0 --apiserver-advertise-address=192.168.8.51 --pod-network-cidr=10.244.0.0/16
Note: 192.168.8.51 is the master node's IP
Remember to save the kubeadm join command printed at the end of the output
Then, following the prompts in the output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
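kubectl should now be able to reach the cluster, for example:
kubectl cluster-info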
Verify by checking the pods
Wait a short while and you will see that components such as etcd, the controller-manager and the scheduler have come up as pods.
Note: CoreDNS will not start until a network plugin is installed.
kubectl get pods -n kube-system
Health check (should return ok)
curl -k https://localhost:6443/healthz
8 Deploy the Calico network plugin
Select the network plug-in: https://kubernetes.io/docs/concepts/cluster-administration/addons/
calico network plugin: https://docs.projectcalico.org/v3.9/getting-started/kubernetes/
Calico is likewise installed from the master node
# Install Calico into the cluster
kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
# Confirm that Calico installed successfully (watch until all pods are Running, then Ctrl-C)
kubectl get pods --all-namespaces -w
9 Join the workers with kubeadm join
Execute the following commands on worker01 and worker02
Use the kubeadm join command printed at the end of the master's init output. Note: you need your own command; the one below is only a reference.
kubeadm join 192.168.8.51:6443 --token yu1ak0.2dcecvmpozsy8loh \
--discovery-token-ca-cert-hash sha256:5c4a69b3bb05b81b675db5559b0e4d7972f1d0a61195f217161522f464c307b0
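If the token has expired (the default lifetime is 24 hours), generate a fresh join command on the master:
kubeadm token create --print-join-command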
Check cluster information on the master node
kubectl get nodes
NAME   STATUS   ROLES    AGE     VERSION
m      Ready    master   19m     v1.14.0
w1     Ready    <none>   3m6s    v1.14.0
w2     Ready    <none>   2m41s   v1.14.0
10 Test
Define a ReplicaSet YAML file, e.g. pod_nginx.yaml
cat <<EOF > pod_nginx.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      name: nginx
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
Create the pods from the pod_nginx.yaml file
kubectl apply -f pod_nginx.yaml
View the pods
kubectl get pods
kubectl get pods -o wide
kubectl describe pod nginx
Scale the pods up through the ReplicaSet
kubectl scale rs nginx --replicas=5
kubectl get pods -o wide
Delete the pods
kubectl delete -f pod_nginx.yaml
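Afterwards the ReplicaSet and its pods should be gone, which you can confirm with:
kubectl get rs
kubectl get pods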