1. Prepare the base environment
Prepare three CentOS machines (virtual machines) with the following hardware:
Node name | IP address | CPU | Memory |
---|---|---|---|
master | 192.168.1.21 | 2 cores | 2 GB |
worker1 | 192.168.1.22 | 2 cores | 2 GB |
worker2 | 192.168.1.23 | 2 cores | 2 GB |
For the base setup, see my earlier blog post on creating multiple CentOS 7 VMs with Vagrant and building a master/worker cluster with Docker Swarm.
(Note: once the machines are up, make sure all three can ping each other.)
2. Install Docker on all three machines
For installing Docker on CentOS, see my earlier post on installing Docker on CentOS 7.
3. Set hostnames and update the hosts file
Set the hostname on each node:
# master host, 192.168.1.21
sudo hostnamectl set-hostname m
# worker host, 192.168.1.22
sudo hostnamectl set-hostname w1
# worker host, 192.168.1.23
sudo hostnamectl set-hostname w2
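The three per-node commands above can be folded into a single script that picks the right name from the machine's IP, so the same file can be run on every node. This is only a sketch; `set-name.sh` and `name_for_ip` are illustrative names, not from the original post:

```shell
#!/bin/sh
# Map each node's IP to its hostname so one script serves all three machines.
# Usage sketch:  sh set-name.sh 192.168.1.22
name_for_ip() {
    case $1 in
        192.168.1.21) echo m  ;;   # master
        192.168.1.22) echo w1 ;;   # first worker
        192.168.1.23) echo w2 ;;   # second worker
        *)            echo unknown ;;
    esac
}
name=$(name_for_ip "${1:-192.168.1.21}")
echo "hostname -> $name"
# On a real node you would then run: sudo hostnamectl set-hostname "$name"
```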
Then add the following entries to /etc/hosts (vi /etc/hosts) on all three nodes:
192.168.1.21 m
192.168.1.22 w1
192.168.1.23 w2
Once this is done, ping w1 from the master node should succeed.
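The three hosts entries can also be generated from a single node list, which keeps them consistent across machines. A sketch (the loop only prints the lines; append them yourself with `sudo tee -a /etc/hosts` after reviewing the output):

```shell
#!/bin/sh
# Generate the /etc/hosts entries for the cluster from one node list.
nodes="192.168.1.21:m 192.168.1.22:w1 192.168.1.23:w2"
hosts=""
for entry in $nodes; do
    ip=${entry%%:*}     # text before the colon -> IP address
    name=${entry##*:}   # text after the colon  -> hostname
    hosts="${hosts}${ip} ${name}
"
done
printf '%s' "$hosts"    # review, then: ... | sudo tee -a /etc/hosts
```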
4. Basic system settings
4.1 Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
4.2 Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
4.3 Disable swap
swapoff -a
sed -i '/swap/s/^\(.*\)$/#\1/g' /etc/fstab
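To see exactly what that sed does before touching the real /etc/fstab, you can run it against a throwaway sample file (the two fstab lines below are made up for illustration):

```shell
#!/bin/sh
# Dry run: apply the same sed to a sample fstab so its effect is visible.
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -i '/swap/s/^\(.*\)$/#\1/g' /tmp/fstab.sample   # comment out every line mentioning swap
cat /tmp/fstab.sample
```

The root line passes through untouched, while the swap line comes out commented, so swap stays off after a reboot.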
4.4 Flush iptables and set the FORWARD chain policy to ACCEPT
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
4.5 Set kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
5. Install kubeadm, kubelet, and kubectl
5.1 Configure the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
5.2 Install kubeadm, kubelet, and kubectl (pinned to 1.14.0)
yum install -y kubeadm-1.14.0-0 kubelet-1.14.0-0 kubectl-1.14.0-0
5.3 Make Docker and the kubelet use the same cgroup driver
Edit the Docker daemon config:
vi /etc/docker/daemon.json
Add the "exec-opts" key; if the file is otherwise empty, it should contain exactly:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Restart Docker:
systemctl restart docker
Point the kubelet at the same driver. Docker was just switched to systemd, so the kubelet must use systemd as well; a cgroup-driver mismatch will keep the kubelet from starting:
sed -i "s/cgroup-driver=cgroupfs/cgroup-driver=systemd/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Enable and start the kubelet:
systemctl enable kubelet && systemctl start kubelet
6. Pull the images kubeadm needs
# List the images kubeadm will use
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.14.0
k8s.gcr.io/kube-controller-manager:v1.14.0
k8s.gcr.io/kube-scheduler:v1.14.0
k8s.gcr.io/kube-proxy:v1.14.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1
6.1 Pull k8s.gcr.io/kube-apiserver:v1.14.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.0 k8s.gcr.io/kube-apiserver:v1.14.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.0
6.2 Pull k8s.gcr.io/kube-controller-manager:v1.14.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.0 k8s.gcr.io/kube-controller-manager:v1.14.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.0
6.3 Pull k8s.gcr.io/kube-scheduler:v1.14.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.0 k8s.gcr.io/kube-scheduler:v1.14.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.0
6.4 Pull k8s.gcr.io/kube-proxy:v1.14.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.0 k8s.gcr.io/kube-proxy:v1.14.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.0
6.5 Pull k8s.gcr.io/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
6.6 Pull k8s.gcr.io/etcd:3.3.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
6.7 Pull k8s.gcr.io/coredns:1.3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1
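Steps 6.1 through 6.7 repeat the same pull/tag/rmi pattern, so they can be generated by one loop. This sketch only prints the 21 commands; review the output, then pipe it to sh to actually run them (the script name and variables are illustrative):

```shell
#!/bin/sh
# Emit the docker pull/tag/rmi commands for every image kubeadm 1.14.0 needs,
# rewriting the Aliyun mirror name to the k8s.gcr.io name kubeadm expects.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
TARGET=k8s.gcr.io
images="kube-apiserver:v1.14.0 kube-controller-manager:v1.14.0 kube-scheduler:v1.14.0 kube-proxy:v1.14.0 pause:3.1 etcd:3.3.10 coredns:1.3.1"
cmds=""
for img in $images; do
    cmds="${cmds}docker pull ${MIRROR}/${img}
docker tag ${MIRROR}/${img} ${TARGET}/${img}
docker rmi ${MIRROR}/${img}
"
done
printf '%s' "$cmds"    # review, then pipe to sh to execute
```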
7. Initialize the master with kubeadm init
Run on the master node:
kubeadm init --kubernetes-version=1.14.0 --apiserver-advertise-address=192.168.1.21 --pod-network-cidr=10.244.0.0/16
When the init finishes, copy the kubeadm join command printed at the end of its output and save it somewhere; it is needed later to join the worker nodes.
# join command from the kubeadm init output (your token and hash will differ)
kubeadm join 192.168.1.21:6443 --token cu8130.nrls96fbkla18qbn \
--discovery-token-ca-cert-hash sha256:bddc78ebac24e2b7029fb5884f85275e7287ab33348b134aaad64098fe9cf8f8
Also run the post-init steps shown in the output to set up kubectl for your user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# List pods in the default namespace
kubectl get pods
# List pods in the kube-system namespace
kubectl get pods -n kube-system
8. Install the Calico network plugin
# Pre-pull the Calico images
docker pull calico/pod2daemon-flexvol:v3.9.1
docker pull calico/kube-controllers:v3.9.1
docker pull calico/cni:v3.9.1
# Install the Calico network plugin
kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml
(Note: if the pod CIDR passed to kubeadm init differs from Calico's default IP pool, download calico.yaml first and set CALICO_IPV4POOL_CIDR to match; here that is 10.244.0.0/16.)
9. Join the worker nodes
On each worker node, run the kubeadm join command saved in step 7. (If the token has expired, the default lifetime is 24 hours, run kubeadm token create --print-join-command on the master to get a fresh one.)
# join command saved from the kubeadm init output (your token and hash will differ)
kubeadm join 192.168.1.21:6443 --token cu8130.nrls96fbkla18qbn \
--discovery-token-ca-cert-hash sha256:bddc78ebac24e2b7029fb5884f85275e7287ab33348b134aaad64098fe9cf8f8
Back on the master node, list the cluster nodes. You should see one master and two workers; once the Calico pods are running, all three report Ready and the cluster is up.
# List all nodes
kubectl get nodes