A Detailed Tutorial on Setting Up a Two-Node Kubernetes Cluster

Kubernetes v1.13.0 cluster setup

Environment

Two CentOS 7 hosts:

Master: 192.168.11.112, hostname k8s-master

Node: 192.168.11.111, hostname k8s-node

Edit /etc/hosts on both machines and add the master and node hostnames with their IP addresses, then confirm that the two hosts can ping each other by hostname.
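
Using the addresses above, /etc/hosts on both machines should contain:

192.168.11.112 k8s-master
192.168.11.111 k8s-node

A quick connectivity check from the master:

ping -c 3 k8s-node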

Switch the system yum source to the Aliyun mirror:

wget  -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

Update the system:

yum update

Disable the firewall:

systemctl stop firewalld && systemctl disable firewalld

Disable swap:

swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab

Disable SELinux:

setenforce 0 && sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g"   /etc/selinux/config

Synchronize the clocks:

Edit /etc/ntp.conf and add the following line:

server 10.17.87.8 prefer
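
After editing, restarting ntpd and checking the peer list confirms the change (this assumes the ntp package is installed):

systemctl restart ntpd && systemctl enable ntpd
ntpq -p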

Create /etc/sysctl.d/k8s.conf with the following content:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

Run the following commands to apply the changes:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
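
Note that the modprobe above does not persist across reboots; on CentOS 7 one simple way to make it permanent is a modules-load.d entry, for example:

echo br_netfilter > /etc/modules-load.d/k8s.conf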

Kubernetes installation

2.1 Install Docker
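
yum can only find the docker-ce package if a Docker CE repository is configured. If one is not already present, a common approach is to add the Aliyun Docker CE mirror first (repo URL given as an example; verify it before use):

yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo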

Run the install command:

yum install docker-ce -y

Check the installed version:

docker --version

Start Docker and enable it at boot:

systemctl start docker && systemctl enable docker

Verify the Docker installation:

docker run hello-world

If the container prints the hello-world welcome message, Docker is installed correctly.

2.2 Install Kubernetes v1.13.0

Add the Aliyun Kubernetes repository by creating /etc/yum.repos.d/kubernetes.repo with the following content:

[kubernetes]

name=Kubernetes

baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
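
Refreshing the yum cache afterwards is a quick, optional way to confirm the new repo is reachable:

yum makecache fast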

Install kubelet:

yum install -y kubelet-1.13.0-0.x86_64

Install kubeadm and kubectl:

yum install -y kubeadm-1.13.0-0.x86_64 kubectl-1.13.0-0.x86_64

Make sure Docker's cgroup driver and kubelet's cgroup driver match:

docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

If the kubeadm drop-in file does not set a cgroup driver, add the following line to it:

Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
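
Another way to keep the two drivers in sync is to pin Docker's driver explicitly; a minimal sketch, assuming both sides should use cgroupfs as above, is to set it in /etc/docker/daemon.json and restart Docker:

{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}

systemctl restart docker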

Enable and start kubelet:

systemctl enable kubelet && systemctl start kubelet

Download the required images:

Run the following commands to pull mirrored copies of the control-plane images and re-tag them with the k8s.gcr.io names that kubeadm expects:

docker pull ww3122000/kube-apiserver:v1.13.0

docker pull ww3122000/kube-controller-manager:v1.13.0

docker pull ww3122000/kube-scheduler:v1.13.0

docker pull ww3122000/kube-proxy:v1.13.0

docker pull ww3122000/pause:3.1

docker pull ww3122000/etcd:3.2.24

docker pull ww3122000/coredns:1.2.6 

 

docker tag ww3122000/kube-apiserver:v1.13.0 k8s.gcr.io/kube-apiserver:v1.13.0

docker tag ww3122000/kube-controller-manager:v1.13.0 k8s.gcr.io/kube-controller-manager:v1.13.0

docker tag ww3122000/kube-scheduler:v1.13.0 k8s.gcr.io/kube-scheduler:v1.13.0

docker tag ww3122000/kube-proxy:v1.13.0 k8s.gcr.io/kube-proxy:v1.13.0

docker tag ww3122000/pause:3.1 k8s.gcr.io/pause:3.1

docker tag ww3122000/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24

docker tag ww3122000/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6

Check the local Docker image list:

docker images

Edit /etc/sysconfig/kubelet and add:

KUBELET_EXTRA_ARGS=--fail-swap-on=false

2.3 Initialize the cluster

Run the initialization command on the master:

kubeadm init --kubernetes-version=v1.13.0 --pod-network-cidr=10.244.0.0/16  --apiserver-advertise-address=192.168.11.112

What the options mean:

1. --pod-network-cidr=10.244.0.0/16 reserves the Pod network range that flannel, the Pod network add-on used below, expects.

2. --kubernetes-version=v1.13.0 pins the Kubernetes version. It must match the v1.13.0 Docker images imported earlier; otherwise kubeadm will try to pull the latest images from Google's registry.

3. --apiserver-advertise-address is the address the API server advertises on. Make sure it is the master's own IP (192.168.11.112 here); otherwise kubeadm falls back to the interface of the default route, which may be the wrong NIC on a multi-homed machine.

4. If kubeadm init fails or is interrupted, run kubeadm reset before running it again.

Output like the following means initialization succeeded. The details differ from cluster to cluster; save it, because parts of it are needed later:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node

as root:

kubeadm join 192.168.11.112:6443 --token hbpvny.5dprmhgqhbrxx6qc --discovery-token-ca-cert-hash sha256:8ea082913deeda1221efaecb14fdd3cf61a8fcccfda6bcee91c02fd2665fcaa1

The kubeadm join command on the last line is the one to run on the worker node (k8s-node) so that it joins the cluster.
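
If the join command is lost or the token has expired (tokens are valid for 24 hours by default), a new one can be generated on the master, for example:

kubeadm token create --print-join-command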

After initialization succeeds, run the three commands from the output above for whichever user (root or a regular user) will run kubectl, so that the kubeconfig is copied into that user's home directory:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster component status:

kubectl get cs
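
The output should look roughly like the following (the exact messages may differ):

NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}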

2.4 Install the Pod network

Next, install the flannel network add-on:

mkdir -p ~/k8s/

cd ~/k8s

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f  kube-flannel.yml

Run kubectl get pod -n kube-system and make sure all Pods are in the Running state.

Check the nodes:

kubectl get nodes

If a node shows NotReady, it is not fully initialized yet (for example, it may still be pulling images or setting up the network); wait until every node reports Ready.
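
If a node stays NotReady for a long time, describing it and checking the kube-system Pods scheduled on it usually shows the reason, for example:

kubectl describe node k8s-node
kubectl get pods -n kube-system -o wide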

2.5 Install the dashboard web UI

Pull the dashboard image:

docker pull gcrxio/kubernetes-dashboard-amd64:v1.10.1

docker tag gcrxio/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

Create dashboard.yaml with the following content (an admin ServiceAccount bound to the cluster-admin role):

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
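
Apply the file so the admin ServiceAccount and its binding exist in the cluster:

kubectl apply -f dashboard.yaml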

Run the following commands to download the dashboard manifest and point it at the mirrored images:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.0/src/deploy/recommended/kubernetes-dashboard.yaml
sed -i 's#k8s.gcr.io#gcrxio#g' kubernetes-dashboard.yaml

Edit the Service section of kubernetes-dashboard.yaml so that the dashboard is reachable from outside the cluster.
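
The usual change, consistent with the https://{MasterIP}:30001 address used below, is to turn the kubernetes-dashboard Service into a NodePort on 30001; a sketch of the resulting Service section (field values assumed, adjust to match the manifest you downloaded):

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard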

Deploy the dashboard:

kubectl apply -f kubernetes-dashboard.yaml

Open https://{MasterIP}:30001 in Firefox (the dashboard serves a self-signed certificate, which Firefox lets you accept).

The dashboard login page appears.

Create a kubernetes-dashboard-admin.rbac.yaml file with the following content:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin
  namespace: kube-system

Run the following commands to create the account and retrieve its login token:

kubectl create -f kubernetes-dashboard-admin.rbac.yaml
kubectl get secret -n kube-system | grep admin
kubectl describe secret <secret-name> -n kube-system

In the last command, replace <secret-name> with the kubernetes-dashboard-admin secret name shown by the grep, and save the token it prints.
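
As a convenience, the token can also be printed with a single command like the following (assuming the secret name contains kubernetes-dashboard-admin):

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep kubernetes-dashboard-admin | awk '{print $1}') | grep token: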

Use this token to log in to the dashboard.

Source: blog.csdn.net/qq_36269019/article/details/104942093