A Walk in the Clouds with Kubernetes (3): Installing a Kubernetes Cluster with kubeadm

Environment preparation (perform on all three machines):

1. Prepare three servers

   k8s-master:192.168.122.201
   k8s-node-1:192.168.122.202
   k8s-node-2:192.168.122.203

2. Configure local hostname resolution

[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.122.201 k8s-master
192.168.122.202 k8s-node-1
192.168.122.203 k8s-node-2

3. Disable the firewall and SELinux

[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# systemctl disable firewalld
[root@k8s-master ~]# setenforce 0
[root@k8s-master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config

4. Enable IP forwarding and bridge filtering (the following lines must be present in /etc/sysctl.conf)

[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# cat /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@k8s-master ~]# sysctl -p

5. Disable swap

Temporarily (until reboot):
[root@k8s-master ~]# swapoff -a
Permanently:
Edit /etc/fstab and remove or comment out the swap mount entry.
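The permanent change can be scripted with sed. A minimal sketch, demonstrated against a scratch copy of the file so it is safe to try; on a real host you would point FSTAB at /etc/fstab:

```shell
# Demonstrated on a scratch copy; set FSTAB=/etc/fstab on a real host.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF

# Comment out every uncommented line that mounts a swap device.
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' "$FSTAB"

cat "$FSTAB"
```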

6. Synchronize time

[root@k8s-master ~]# yum install ntpdate -y
[root@k8s-master ~]# ntpdate  ntp.api.bz
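ntpdate performs a one-shot sync, so the clocks will drift again over time. One option is to schedule it with a root crontab entry (a sketch; the 30-minute interval is an arbitrary choice):

```
# root crontab entry: resync the clock every 30 minutes
*/30 * * * * /usr/sbin/ntpdate ntp.api.bz >/dev/null 2>&1
```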

Service deployment:

1. Install docker-ce [master and nodes]

[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 

[root@k8s-master ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
[root@k8s-master ~]# yum install -y docker-ce
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# systemctl enable docker

2. Install kubeadm, kubelet, and kubectl [master and nodes]

[root@k8s-master ~]# cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master ~]# ls /etc/yum.repos.d/
CentOS-Base.repo  docker-ce.repo  epel.repo  kubernetes.repo
[root@k8s-master ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

The kubelet and Docker must use the same cgroup driver; a mismatch leaves the kubelet unable to start containers. The recommended driver is systemd. Check Docker's current driver with docker info:

Cgroup Driver: cgroupfs

If it reports cgroupfs, switch Docker to systemd by creating or editing /etc/docker/daemon.json with the following content:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Restart docker:
systemctl restart docker
systemctl status docker

When Docker is the runtime, kubeadm detects the cgroup driver automatically and writes it to /var/lib/kubelet/kubeadm-flags.env at init time. If you use a different CRI, or set the flags by hand as below, the --cgroup-driver value must match the runtime's driver (systemd here, to match daemon.json above):

[root@k8s-master ~]# mkdir /var/lib/kubelet/
[root@k8s-master ~]# vim /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d --network-plugin=cni

Start the kubelet:

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable kubelet && systemctl start kubelet

3. Pull the Kubernetes component images [master and nodes]

The following script downloads the required images on each node (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, flannel, pause):
# cat k8s.sh
K8S_VERSION=v1.17.0
ETCD_VERSION=3.4.3-0
DASHBOARD_VERSION=v1.8.3
FLANNEL_VERSION=v0.10.0-amd64
DNS_VERSION=1.6.5
PAUSE_VERSION=3.1
# core components
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:$K8S_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:$K8S_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:$K8S_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:$K8S_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:$ETCD_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$DNS_VERSION
# network plugin
docker pull quay.io/coreos/flannel:$FLANNEL_VERSION
# retag to the image names kubeadm expects
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:$K8S_VERSION k8s.gcr.io/kube-apiserver:$K8S_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:$K8S_VERSION k8s.gcr.io/kube-controller-manager:$K8S_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:$K8S_VERSION k8s.gcr.io/kube-scheduler:$K8S_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:$K8S_VERSION k8s.gcr.io/kube-proxy:$K8S_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$DNS_VERSION k8s.gcr.io/coredns:$DNS_VERSION
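The repeated pull/tag pairs can be generated from a single list. A sketch (covering the retagged k8s.gcr.io images only; flannel keeps its quay.io name) that prints the commands rather than running them; pipe the output to sh, or drop the echo, to execute:

```shell
#!/bin/bash
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers

# source-image:tag  target-image-name (the tag is reused on the target)
IMAGES="
kube-apiserver:v1.17.0 kube-apiserver
kube-controller-manager:v1.17.0 kube-controller-manager
kube-scheduler:v1.17.0 kube-scheduler
kube-proxy:v1.17.0 kube-proxy
etcd-amd64:3.4.3-0 etcd
pause:3.1 pause
coredns:1.6.5 coredns
"

# Print the docker pull/tag commands for every image in the list.
gen_cmds() {
    echo "$IMAGES" | while read -r src dst; do
        [ -z "$src" ] && continue
        tag=${src#*:}
        echo "docker pull $MIRROR/$src"
        echo "docker tag $MIRROR/$src k8s.gcr.io/$dst:$tag"
    done
}

gen_cmds
```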

4. Initialize the master node

# kubeadm init --kubernetes-version=1.17.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.122.201
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.201:6443 --token fmqvwn.6h11y2ayq23r7zmw \
    --discovery-token-ca-cert-hash sha256:42e125ef64f5aabc67ae0e0f14b58270be35fde8ff4f7b9a47d5d76a74a97c4a 

5. Install the pod network add-on
Download a flannel manifest compatible with 1.17.0:

[root@k8s-master ~]# git clone https://github.com/blackmed/kubernetes-kubeadm.git

Install the flannel components:

[root@k8s-master ~]# kubectl create -f flannel.yaml

6. Join the worker nodes to the cluster [node]

[root@k8s-node-1 ~]# kubeadm join 192.168.122.201:6443 --token kzuy91.hyjp5o89jrxv48fg --discovery-token-ca-cert-hash sha256:fe5b4afd57358455d0afa23858e8d995debbc585de914db34d9e9b9db4df9989

7. Verify that the nodes joined successfully [master]

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   31h   v1.17.0
k8s-node-1   Ready    <none>   33m   v1.17.0
k8s-node-2   Ready    <none>   33m   v1.17.0


Reposted from blog.csdn.net/zy_xingdian/article/details/103853992