Linux: Deploying a Single-Master Kubernetes Cluster

I have posted a few k8s articles before, but never one on how to actually deploy a Kubernetes cluster. I recently needed a single-master cluster for testing, so I took the opportunity to write the process up.
Kubernetes clusters are commonly deployed either from binaries or with kubeadm; below I use kubeadm. I will not go into the individual components here, as this post focuses on the hands-on steps.

Environment preparation
master 192.168.146.10
node1 192.168.146.11
node2 192.168.146.12
node3 192.168.146.13

Initialize all machines

 - Configure the yum repositories and network connectivity
 - Disable the firewall and SELinux
 - Set hostnames, configure mutual name resolution, and distribute SSH keys
 - Configure time synchronization
 - Disable the swap partition
 (These are all basic operations; I have shared a key-distribution script before. A sketch of these steps is shown right below.)
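
A minimal sketch of those initialization steps, assuming CentOS 7 and the hostnames/IPs listed above (run on every machine unless noted):

systemctl stop firewalld && systemctl disable firewalld               # firewall off
setenforce 0                                                          # SELinux off now...
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # ...and after reboots
hostnamectl set-hostname master                # node1/node2/node3 on the other machines
cat >> /etc/hosts <<EOF
192.168.146.10 master
192.168.146.11 node1
192.168.146.12 node2
192.168.146.13 node3
EOF
yum -y install chrony && systemctl enable chronyd && systemctl start chronyd   # time sync
ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''       # on the master only: generate a key...
for h in node1 node2 node3; do ssh-copy-id root@$h; done   # ...and push it to the nodes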

Disabling swap

[root@master ~]# swapoff -a
[root@master ~]# vim /etc/fstab
...
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=43f7c887-dc48-4323-b6dc-cc682e67e7e8 /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

(Comment out the swap entry in /etc/fstab so that it is not mounted again at boot.)
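
A quick check to confirm that swap is fully off:

free -h      # the Swap line should read 0B total / 0B used
swapon -s    # no output means no active swap device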

Configure kernel parameters

[root@master ~]# vim /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
[root@master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory
sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
(These errors are expected and harmless at this point: the two bridge parameters do not exist until the br_netfilter kernel module has been loaded.)
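
If you would rather have the bridge parameters take effect immediately, you can load br_netfilter first; a minimal sketch (module and file names as on a stock CentOS 7 install):

modprobe br_netfilter                                        # provides the bridge-nf-call-* sysctls
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    # load it automatically at boot
sysctl -p /etc/sysctl.d/kubernetes.conf                      # re-apply; the errors should be gone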

Load the IPVS modules

[root@master ~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
[root@master ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@master ~]# /etc/sysconfig/modules/ipvs.modules
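
Verify that the modules actually loaded:

lsmod | grep -E 'ip_vs|nf_conntrack_ipv4'   # expect ip_vs, ip_vs_rr, ip_vs_wrr, ip_vs_sh, nf_conntrack_ipv4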

Check whether the kernel version is supported

[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)
#If the release is older than 7.6, upgrade the kernel
[root@master ~]# yum -y update kernel   #the upgrade takes quite a while

With all of the above done, the initialization of every machine is complete; the actual deployment begins below.
Install and configure Docker (all nodes)

[root@master ~]# cd /etc/yum.repos.d/
[root@master yum.repos.d]# curl -O http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2640  100  2640    0     0  55824      0 --:--:-- --:--:-- --:--:-- 56170
[root@master yum.repos.d]# yum -y install docker-ce-18.06.0.ce
[root@master yum.repos.d]# mkdir /etc/docker
[root@master yum.repos.d]# cd
[root@master ~]# vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://pf5f57i3.mirror.aliyuncs.com"]
}
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
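
Since the kubelet will run with the systemd cgroup driver, it is worth confirming that Docker picked up the daemon.json setting:

docker info 2>/dev/null | grep -i 'cgroup driver'   # should print: Cgroup Driver: systemd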

Install the Kubernetes components (all nodes)

[root@master ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@master ~]# yum install -y kubeadm-1.18.0-0 kubelet-1.18.0-0 kubectl-1.18.0-0 ipvsadm                                                                              
[root@master ~]# systemctl enable kubelet
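
A quick sanity check that the pinned 1.18.0 packages landed:

kubeadm version -o short   # v1.18.0
kubelet --version          # Kubernetes v1.18.0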

Install a load balancer (run on the master)

[root@master ~]# vim haproxy.sh
#!/bin/bash
MasterIP1=192.168.146.10
MasterPort=6443
docker run -d --restart=always --name haproxy-k8s -p 6444:6444 \
           -e MasterIP1=$MasterIP1 \
           -e MasterPort=$MasterPort  wise2c/haproxy-k8s
[root@master ~]# bash haproxy.sh
Unable to find image 'wise2c/haproxy-k8s:latest' locally
latest: Pulling from wise2c/haproxy-k8s
f2aa67a397c4: Pull complete
fbe89b1fc408: Pull complete
3a697339b14f: Pull complete
31d658c6c91f: Pull complete
59a12f3595e1: Pull complete
911090d21624: Pull complete
Digest: sha256:68a5e494b996a63f79142fc5a383200e2c2c9bececf039771a8231b360ad2d4f
Status: Downloaded newer image for wise2c/haproxy-k8s:latest
732a7cfcc205599f90fec83fbbe046169a4e771a7b8c68366f7e349a8d4da5c4
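
With a single master this HAProxy container is optional (the init config below advertises 192.168.146.10:6443 directly); it becomes useful only if more control-plane nodes are later added behind port 6444. To check that it is running and listening:

docker ps --filter name=haproxy-k8s   # the container should be Up
ss -tnlp | grep 6444                  # docker-proxy listening on 6444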

Initialize the master

[root@master ~]# mkdir k8s
[root@master ~]# cd k8s/
[root@master k8s]# ls
[root@master k8s]# kubeadm config print init-defaults > init.yml
W1114 11:18:31.098315    2515 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[root@master k8s]# vim init.yml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.146.10    #masterIP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  #use a domestic (China) mirror registry
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16            #must match the flannel network
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
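
Optionally, the control-plane images can be pulled ahead of time so the init step itself is quicker (the init output below suggests the same):

kubeadm config images pull --config=init.yml   # pulls from registry.aliyuncs.com/google_containers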

[root@master k8s]# kubeadm init --config=init.yml --upload-certs |tee kubeadm-init.log
W1114 11:24:28.103338   14184 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.146.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [192.168.146.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [192.168.146.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1114 11:28:02.606869   14184 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1114 11:28:02.607542   14184 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.502092 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a8bb1650e307c902f81002680b591e5d23dede65d700187861abe77fa144bf26
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.146.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1430adb26148158e26126e982f53908a825527bcd4ae16d5053df0d4a0d0b76d
[root@master k8s]#  mkdir -p $HOME/.kube
[root@master k8s]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master k8s]# chown $(id -u):$(id -g) $HOME/.kube/config
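
kubectl can now reach the API server; note that the master will report NotReady until a network plugin is installed:

kubectl cluster-info   # API server at https://192.168.146.10:6443
kubectl get nodes      # master shows NotReady until flannel is deployed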

Deploy the network plugin (master)

[root@master k8s]# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  4813  100  4813    0     0    476      0  0:00:10  0:00:10 --:--:--  1505
[root@master k8s]# grep -i "flannel:" kube-flannel.yml
        image: quay.io/coreos/flannel:v0.13.0
        image: quay.io/coreos/flannel:v0.13.0
[root@master k8s]# sed -i 's#quay.io/coreos/flannel:v0.13.0#registry.cn-shenzhen.aliyuncs.com/leedon/flannel:v0.11.0-amd64#' kube-flannel.yml
[root@master k8s]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
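
The sed above swaps the quay.io image for a copy hosted on an Aliyun mirror registry (note it is an older v0.11.0 build, which still worked in this setup). Watch the flannel pods come up and the nodes turn Ready (the app=flannel label is the one used in the upstream manifest):

kubectl -n kube-system get pods -l app=flannel -o wide   # one kube-flannel-ds pod per node
kubectl get nodes                                        # master should become Ready shortly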

Join the nodes to the master (all node machines)

[root@node1 ~]# kubeadm join 192.168.146.10:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:1430adb26148158e26126e982f53908a825527bcd4ae16d5053df0d4a0d0b76d
W1114 11:35:29.798732   13580 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
(This join command was generated when the master was initialized. Do not copy the one shown here; look it up in your own kubeadm-init.log.)
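
If the 24-hour token has expired by the time a node joins, generate a fresh join command on the master:

kubeadm token create --print-join-command   # prints a new 'kubeadm join ...' line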

Check the cluster status

[root@master k8s]# kubectl get no
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   112m   v1.18.0
node1    Ready    <none>   105m   v1.18.0
node2    Ready    <none>   105m   v1.18.0
node3    Ready    <none>   104m   v1.18.0
[root@master k8s]# kubectl get po -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-9f8sh         1/1     Running   0          112m
coredns-7ff77c879f-nzs5j         1/1     Running   0          112m
etcd-master                      1/1     Running   0          112m
kube-apiserver-master            1/1     Running   0          112m
kube-controller-manager-master   1/1     Running   0          112m
kube-flannel-ds-65z4q            1/1     Running   0          107m
kube-flannel-ds-8x8mg            1/1     Running   0          105m
kube-flannel-ds-qmldk            1/1     Running   4          105m
kube-flannel-ds-r4hsd            1/1     Running   0          105m
kube-proxy-dcnh6                 1/1     Running   0          112m
kube-proxy-k49rg                 1/1     Running   0          105m
kube-proxy-lz2gj                 1/1     Running   0          105m
kube-proxy-sxwj2                 1/1     Running   0          105m
kube-scheduler-master            1/1     Running   0          112m

(A node that has just joined may take a few minutes to become Ready; wait three or four minutes and check again.)
Cluster deployment complete!
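
As a final smoke test, a throwaway workload can be scheduled (a minimal sketch; nginx is just an example image):

kubectl create deployment nginx --image=nginx                # runs a pod on one of the nodes
kubectl expose deployment nginx --port=80 --type=NodePort    # expose it on a node port
kubectl get pods,svc -o wide                                 # pod Running; curl any node on the NodePort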

Reposted from blog.csdn.net/rookie23rook/article/details/109689131