CentOS 7 Kubernetes Cluster Deployment


Host (virtual machine) information:

[root@k8s-master ~]# cat /etc/redhat-release 
CentOS Linux release 7.7.1908 (Core)

Node name    IP
k8s-master   192.168.1.86
k8s-node1    192.168.1.87

Notes:

1. Any Kubernetes version can be chosen; this guide uses 1.16.2 as an example.
2. Except for cluster initialization, which runs only on the master, all deployment steps are executed on every node.

1. CentOS 7 configuration


Disable the firewall, disable SELinux, and update the package sources.

# Firewall: stop it now and keep it off at boot
systemctl stop firewalld.service
systemctl disable firewalld.service

# Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
## Or edit /etc/selinux/config manually and change the SELINUX= line to:
##   SELINUX=disabled
# Reboot the server
# Run getenforce and confirm it reports Disabled

# Install wget
yum install -y wget
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# Update installed packages from the new mirrors
yum upgrade -y
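
Optionally, rebuild the yum metadata cache so the new repo files take effect right away (standard yum subcommands):

yum clean all
yum makecache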

2. Host configuration


# /etc/hosts must contain every cluster node; add these entries on all hosts
cat /etc/hosts
#k8s nodes
192.168.1.86	k8s-master
192.168.1.87	k8s-node1

cat /etc/hostname
 ## node name: k8s-master or k8s-node1

# Reboot
reboot
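
As an alternative to editing /etc/hostname, hostnamectl (standard on CentOS 7) sets the hostname immediately without a reboot; a minimal sketch:

hostnamectl set-hostname k8s-master   # run on the master
hostnamectl set-hostname k8s-node1    # run on the node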

3. Create /etc/sysctl.d/k8s.conf


# Tune kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply with sysctl -p /etc/sysctl.d/k8s.conf (or sysctl --system)
sysctl -p /etc/sysctl.d/k8s.conf

# If you see errors like these:
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
# Fix:
# install bridge-utils, then load the bridge and br_netfilter modules
yum install -y bridge-utils.x86_64
modprobe bridge
modprobe br_netfilter

# Turn off swap (by default the kubelet refuses to run with swap enabled)
swapoff -a
echo "vm.swappiness=0" >> /etc/sysctl.d/k8s.conf
# Apply
sysctl -p /etc/sysctl.d/k8s.conf
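
Note that swapoff -a only lasts until the next reboot. To keep swap off permanently, comment out the swap entry in /etc/fstab; a minimal sketch (verify the result afterwards, since fstab layouts vary):

sed -i.bak '/ swap / s/^/#/' /etc/fstab   # .bak keeps a backup copy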

4. Configure the Kubernetes yum repository


# Add the Kubernetes yum repo (Aliyun mirror)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
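
To confirm the repository was registered:

yum repolist enabled | grep -i kubernetes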

5. Install Docker


## Synchronize the clock first, otherwise Docker may fail to run!
    # 1. Install the ntp/ntpdate tools
    sudo yum -y install ntp ntpdate
    # 2. Sync the system time from a network time server
    sudo ntpdate cn.pool.ntp.org
    # 3. Write the system time to the hardware clock
    sudo hwclock --systohc
    # 4. Check the system time
    timedatectl

# Install Docker
yum install -y docker-io
# Start Docker and enable it at boot
systemctl enable docker && systemctl start docker
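
A quick sanity check after installation; docker info reports the cgroup driver, which kubeadm's preflight checks compare against the kubelet's (a mismatch only produces a warning, but is worth noting):

systemctl status docker --no-pager
docker info | grep -i 'cgroup driver'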

6. Install Kubernetes (version of your choice)


Version 1.16.2 is pinned here.

# List available package versions
yum list --showduplicates | grep 'kubeadm\|kubectl\|kubelet'
# Install the pinned versions
yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2

# Enable kubelet at boot and start it (it will restart in a loop until kubeadm init/join provides its configuration; this is expected)
systemctl start kubelet && systemctl enable kubelet
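
To confirm the installed versions match the ones requested:

kubeadm version -o short
kubectl version --client --short
kubelet --version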

7. Adjust configuration


# Kubernetes configuration
# Run the following in /usr/bin
## ensure kubelet, kubeadm, and kubectl are executable
cd /usr/bin && chmod a+x kubelet  kubeadm  kubectl
export KUBECONFIG=/etc/kubernetes/admin.conf
iptables -P FORWARD ACCEPT

# Docker configuration
## Edit /lib/systemd/system/docker.service and add the following line under [Service]
ExecStartPost=/sbin/iptables -P FORWARD ACCEPT
## Restart Docker
systemctl daemon-reload
systemctl restart docker
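
After the restart, verify that the FORWARD policy stuck:

iptables -S FORWARD | head -1   # expected: -P FORWARD ACCEPT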

8. Pull and tag the images


The default image registry (k8s.gcr.io) is unreachable from mainland China, so pull the images from a domestic mirror instead.
Run kubeadm config images list to see the required images and versions, then pull them from Aliyun.

[root@k8s-master bin]# kubeadm config images list
W0108 19:53:17.464386   10103 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0108 19:53:17.464460   10103 version.go:102] falling back to the local client version: v1.16.2
k8s.gcr.io/kube-apiserver:v1.16.2
k8s.gcr.io/kube-controller-manager:v1.16.2
k8s.gcr.io/kube-scheduler:v1.16.2
k8s.gcr.io/kube-proxy:v1.16.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.15-0
k8s.gcr.io/coredns:1.6.2

Pull each image with: docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/<image-name>:<tag>

## matching the versions listed above
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.16.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.16.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.16.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.16.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.15-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.2

Tag the images back to their k8s.gcr.io names:

docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.16.2 k8s.gcr.io/kube-apiserver:v1.16.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.16.2 k8s.gcr.io/kube-controller-manager:v1.16.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.16.2 k8s.gcr.io/kube-scheduler:v1.16.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.16.2 k8s.gcr.io/kube-proxy:v1.16.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.15-0 k8s.gcr.io/etcd:3.3.15-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.2 k8s.gcr.io/coredns:1.6.2
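
The pull-and-tag sequence above can be condensed into one loop; a minimal sketch using the same Aliyun mirror and image list:

ALIYUN=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.16.2 kube-controller-manager:v1.16.2 kube-scheduler:v1.16.2 \
           kube-proxy:v1.16.2 pause:3.1 etcd:3.3.15-0 coredns:1.6.2; do
  docker pull "$ALIYUN/$img"                      # pull from the mirror
  docker tag "$ALIYUN/$img" "k8s.gcr.io/$img"     # retag to the name kubeadm expects
done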

9. Initialize the cluster with kubeadm init (master only)


Full parameter reference: https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/

--apiserver-advertise-address string
    The IP address the API server advertises it is listening on. If unset, the default network interface is used.
--image-repository string     Default: "k8s.gcr.io"
    The container registry to pull control-plane images from.
--kubernetes-version string     Default: "stable-1"
    The specific Kubernetes version for the control plane.
--service-cidr string     Default: "10.96.0.0/12"
    An alternative IP address range for service virtual IPs.
--pod-network-cidr string
    The IP address range for the pod network. If set, the control plane automatically allocates CIDRs to every node.
## Deploy the Kubernetes master
## Run on 192.168.1.86 (the master)
## The default registry k8s.gcr.io is unreachable from mainland China, so specify the Aliyun mirror

kubeadm init \
--apiserver-advertise-address=192.168.1.86 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.16.2 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

On success, the output ends with:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.86:6443 --token pwwmps.9cds2s34wlpiyznv \
    --discovery-token-ca-cert-hash sha256:a3220f1d2384fe5230cad2302a4ac1f233b03ea24c19c165adb5824f9c358336

Then run the following on the master:

# After the init command completes, run the commands it printed
## on the master
mkdir -p $HOME/.kube  
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config  
sudo chown $(id -u):$(id -g) $HOME/.kube/config 

## Install the flannel network add-on
## on the master
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

## If the flannel manifest cannot be downloaded, see "Initialization problems" below

## List the nodes
kubectl get node
## Check cluster component status
kubectl get cs
# Nodes may show NotReady at first; inspect the pods on the master
kubectl get pod --all-namespaces -o wide

After the master initializes, its status may be NotReady for a while; give the network pods time to start.
If initialization fails, the fixes in https://www.jianshu.com/p/f53650a85131 may help.
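
To follow the startup, watch the kube-system pods; the master flips to Ready once the flannel and coredns pods are Running:

kubectl get pods -n kube-system -w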

Initialization problems


  1. Caused by the flannel add-on not being installed yet
[root@k8s-master ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
kube-system   coredns-58cc8c89f4-dwg8r             0/1     Pending   0          24m   <none>         <none>       <none>           <none>
kube-system   coredns-58cc8c89f4-jx7cw             0/1     Pending   0          24m   <none>         <none>       <none>           <none>
  2. Unable to download the flannel manifest from the official site

    Reference: https://blog.csdn.net/fuck487/article/details/102783300

[root@k8s-master ~]# kubectl apply -f  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
## Workaround: create the file by hand, or copy a local kube-flannel.yml over via ftp
vi $HOME/kube-flannel.yml
## 
	## paste the contents of kube-flannel.yml 
##

## Install
[root@k8s-master ~]# kubectl apply -f ./kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
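
Once applied, check the flannel pods directly; this assumes the standard manifest, whose pods carry the app=flannel label:

kubectl get pods -n kube-system -l app=flannel -o wide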

10. Supplementary commands


## Run this on both the master and the nodes
[root@k8s-master bin]# modprobe ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
modprobe: ERROR: could not insert 'ip_vs': Unknown symbol in module, or unknown parameter (see dmesg)
## Drop ip_vs and retry
[root@k8s-master bin]# modprobe ip_vs_rr ip_vs_wrr ip_vs_sh
modprobe: ERROR: could not insert 'ip_vs_rr': Unknown symbol in module, or unknown parameter (see dmesg)
## Then run the original command again; this time it succeeds
[root@k8s-master bin]# modprobe ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh

## Verify the kernel has the ipvs modules loaded
[root@k8s-master bin]# lsmod|grep ip_vs
ip_vs                 145497  0 
nf_conntrack          139224  7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
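
Modules loaded with modprobe do not survive a reboot. To load them at every boot, use systemd's standard modules-load mechanism; a minimal sketch:

cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF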

11. Add a node


Get the kubeadm join command (bootstrap tokens expire after 24 hours by default, so generate a fresh one when needed):

# To get the join command, run on the master: kubeadm token create --print-join-command
[root@k8s-master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.86:6443 --token a1qmdh.d79exiuqbzdr616o     --discovery-token-ca-cert-hash sha256:a3220f1d2384fe5230cad2302a4ac1f233b03ea24c19c165adb5824f9c358336

Run kubeadm join on the node to add it:

[root@k8s-node1 bin]# kubeadm join 192.168.1.86:6443 --token otjfah.zta4yo0bexibbj52     --discovery-token-ca-cert-hash sha256:60535ebe96b6a4cceab70d551f2b2b507a3641c3dc421469320b915e01377e5c
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

12. Remove a node


On the master:

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-master   Ready    master   3h9m   v1.16.2
k8s-node1    Ready    <none>   116s   v1.16.2
[root@k8s-master ~]# kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
node/k8s-node1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-gmq2b, kube-system/kube-proxy-q9ppx
node/k8s-node1 drained
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS                     ROLES    AGE     VERSION
k8s-master   Ready                      master   3h10m   v1.16.2
k8s-node1    Ready,SchedulingDisabled   <none>   2m43s   v1.16.2
[root@k8s-master ~]# kubectl delete node k8s-node1
node "k8s-node1" deleted
[root@k8s-master ~]# 

On the node being removed (k8s-node1):

[root@k8s-node1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y	## confirm with y
[preflight] Running pre-flight checks
W0109 13:39:15.848313   79539 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
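
As the reset output says, iptables/IPVS state and kubeconfig files are not cleaned up automatically; a sketch of the usual manual follow-up (ipvsadm must be installed, and --clear only applies if IPVS was in use):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear    # only if IPVS was used
rm -f $HOME/.kube/config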

Appendix: query commands


## List nodes (on the master)
kubectl get nodes

## Check cluster component status (on the master)
kubectl get cs

# If a node shows NotReady, inspect the pods (on the master)
kubectl get pod --all-namespaces -o wide

Reprinted from blog.csdn.net/JakeLinfly/article/details/103938063