Prepare the Servers
Install Ubuntu 18.04 Server on ESXi 6.5, using three hosts with the planned hostnames kube01, kube02, kube03, each configured with 2 cores / 4 GB RAM / 160 GB disk. Kubernetes requires at least two CPU cores.
ESXi 6.5 has a bug that causes Ubuntu VMs to crash during remote SSH sessions. Following the resolution in https://kb.vmware.com/s/article/2151480, SSH into the ESXi host and edit the VM configuration. The files live under the /vmfs/volumes/584f7xxx-7xx749b4-3461-x0... / directory: shut the VM down, locate its directory, find the .vmx file inside, and append at the end:
vmxnet3.rev.30 = FALSE
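If you make the change over the SSH session on the ESXi host, the append can also be done in one command. A minimal sketch; the path is illustrative and must be replaced with your VM's actual datastore directory:

# On the ESXi host, with the VM powered off (placeholders, not real paths)
echo 'vmxnet3.rev.30 = FALSE' >> "/vmfs/volumes/<datastore>/<vm-name>/<vm-name>.vmx"

Power the VM back on afterwards.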
Update the Servers
Point Ubuntu's apt sources at a domestic (China) mirror:
kube02:~$ more /etc/apt/sources.list
deb https://mirrors.ustc.edu.cn/ubuntu bionic main
deb https://mirrors.ustc.edu.cn/ubuntu bionic-security main
deb https://mirrors.ustc.edu.cn/ubuntu bionic-updates main

sudo apt update
sudo apt upgrade
Change the Hostname
Edit cloud.cfg
sudo vi /etc/cloud/cloud.cfg
# change  preserve_hostname: false
# to      preserve_hostname: true
Otherwise the hostname set with hostnamectl set-hostname will be reverted after a reboot.
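The same edit can be made non-interactively; a small sketch using sed, assuming the stock cloud.cfg layout:

# Flip preserve_hostname in place, then verify the result
sudo sed -i 's/^preserve_hostname: false/preserve_hostname: true/' /etc/cloud/cloud.cfg
grep preserve_hostname /etc/cloud/cloud.cfg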
Set the hostname
sudo hostnamectl set-hostname kube01
Disable the Swap Partition
1. Turn swap off immediately
sudo swapoff -a
2. Disable swap in fstab
sudo vi /etc/fstab
Comment out the swap line with #
3. Mask the swap unit in systemd; if this step is skipped, the swap partition comes back after a reboot
# The swap partition may also be on sdb, sdc etc. depending on your disks; check which partition is swap. Assume it is /dev/sda2
sudo fdisk -lu /dev/sda
# Based on the result above, run:
sudo systemctl mask dev-sda2.swap
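To confirm that swap is really gone (and stays gone after a reboot), two quick checks:

# swapon prints nothing when no swap is active; free should report 0 for swap
swapon --show
free -h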
Install and Configure Docker
# Prerequisites
sudo apt install apt-transport-https ca-certificates curl software-properties-common
# Install the GPG key; note the sudo after the pipe
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Add the apt source for the current release
lsb_release -cs
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Install Docker
sudo apt install docker-ce
# Check the version; this install got 19.03.5
docker version
# Add the current user to the docker group; log in again for it to take effect, check with id
sudo usermod -aG docker milton
# Configure docker: add a registry mirror and other settings
sudo vi /etc/docker/daemon.json
The contents of daemon.json:
{ "registry-mirrors": ["https://registry.docker-cn.com"], "exec-opts": ["native.cgroupdriver=systemd"], "log-driver": "json-file", "log-opts": {"max-size": "100m"}, "storage-driver": "overlay2" }
Switch the cgroup driver to systemd
sudo vi /etc/containerd/config.toml
# Below the line disabled_plugins = ["cri"], add:
plugins.cri.systemd_cgroup = true
# Restart the docker service, then check that Cgroup Driver and Registry Mirrors are set correctly
sudo systemctl restart docker
docker info
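A quick way to check both values without scanning the whole docker info output is to filter it, e.g.:

# Expect "Cgroup Driver: systemd" and the configured mirror URL in the output
docker info 2>/dev/null | grep -E -A1 'Cgroup Driver|Registry Mirrors'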
Install Kubernetes
# Install the GPG key; note the sudo after the pipe
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Add the apt source; there is no bionic repo, so use xenial
cd /etc/apt/sources.list.d/
sudo vi kubernetes.list
The contents of kubernetes.list:
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
Update and install
sudo apt update
sudo apt install kubelet kubeadm kubectl
- kubeadm: the command to bootstrap the cluster.
- kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
- kubectl: the command line util to talk to your cluster.
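Optionally, the three packages can be held at the installed version so that a routine apt upgrade does not move the cluster components unexpectedly (a common practice, not part of the original steps):

# Pin the versions; reverse later with sudo apt-mark unhold
sudo apt-mark hold kubelet kubeadm kubectl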
Pull the flannel container image
# Check the current version in https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
# and find the value under containers.image; for this install it is quay.io/coreos/flannel:v0.11.0-amd64. Pull it directly:
docker pull quay.io/coreos/flannel:v0.11.0-amd64
If you are configuring a worker node, you can stop here; if you are configuring the master node, continue below.
Pull the k8s container images that cannot be downloaded directly
List the images that will be needed; the result is a set of names prefixed with k8s.gcr.io/:
kubeadm config images list
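For v1.17.0 the list looks like this (the same names and versions appear in the pull script below):

k8s.gcr.io/kube-apiserver:v1.17.0
k8s.gcr.io/kube-controller-manager:v1.17.0
k8s.gcr.io/kube-scheduler:v1.17.0
k8s.gcr.io/kube-proxy:v1.17.0
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5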
Write a script that pulls from registry.aliyuncs.com/google_containers/ instead, then re-tags the images back to k8s.gcr.io after pulling. Adjust the script below to match the list from the previous step, then run it.
#!/bin/bash
# The images below have the "k8s.gcr.io/" prefix stripped; replace the versions
# with the ones reported by kubeadm config images list
images=(
    kube-apiserver:v1.17.0
    kube-controller-manager:v1.17.0
    kube-scheduler:v1.17.0
    kube-proxy:v1.17.0
    pause:3.1
    etcd:3.4.3-0
    coredns:1.6.5
)
for imageName in ${images[@]} ; do
    docker pull registry.aliyuncs.com/google_containers/$imageName
    docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.aliyuncs.com/google_containers/$imageName
done
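An alternative sketch that derives the list from kubeadm itself instead of maintaining it by hand; it assumes every image lives directly under k8s.gcr.io/ (true for v1.17):

#!/bin/bash
# Pull each required image from the Aliyun mirror, re-tag it as k8s.gcr.io/..., drop the mirror tag
for img in $(kubeadm config images list 2>/dev/null); do
    name=${img#k8s.gcr.io/}    # strip the registry prefix
    docker pull registry.aliyuncs.com/google_containers/$name
    docker tag registry.aliyuncs.com/google_containers/$name $img
    docker rmi registry.aliyuncs.com/google_containers/$name
done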
Initialize the Master with kubeadm init
Once all of the preparation above is done, the master host can be initialized:
sudo kubeadm init --apiserver-advertise-address=0.0.0.0 --pod-network-cidr=172.16.0.0/16 --service-cidr=10.1.0.0/16
Parameter notes:
- --apiserver-advertise-address: which IP (network interface) the API server is advertised on; use the host's own IP, or 0.0.0.0 to leave it unspecified
- --pod-network-cidr: the IP range of the pod network; it must match the setting in kube-flannel.yml configured later
- --service-cidr: the IP range of the service network; these are virtual IPs that never show up in the routing table, so it only needs to be distinct from the ranges above
The output:
W1231 08:57:05.495224   11297 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1231 08:57:05.495416   11297 version.go:102] falling back to the local client version: v1.17.0
W1231 08:57:05.495703   11297 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1231 08:57:05.495735   11297 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.11.129]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube01 localhost] and IPs [192.168.11.129 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube01 localhost] and IPs [192.168.11.129 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1231 08:57:14.315543   11297 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1231 08:57:14.318419   11297 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 37.004860 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kube01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kube01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: f3jgn2.5w8152dpifacihnj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.11.129:6443 --token f3jgn2.5w8152dpifacihnj \
    --discovery-token-ca-cert-hash sha256:cc1ae32e0924dffa587b5d94b61005ae892db289f1a59f1ef71b45a7eda65ca3
Following the hint above, create the .kube directory, copy the config file, and change its owner.
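That is, as the regular user (the commands are taken verbatim from the init output):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config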
Verify
# List the pods
kubectl get pods -n kube-system

# Output
NAME                             READY   STATUS    RESTARTS   AGE
coredns-6955765f44-7dnqv         1/1     Running   0          71m
coredns-6955765f44-pvlcp         1/1     Running   0          71m
etcd-kube01                      1/1     Running   0          71m
kube-apiserver-kube01            1/1     Running   0          71m
kube-controller-manager-kube01   1/1     Running   0          71m
kube-proxy-7c8f5                 1/1     Running   0          71m
kube-scheduler-kube01            1/1     Running   0          71m
Install Flannel
# Download kube-flannel.yml
wget https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
# Edit the Network value in net-conf.json so it matches the --pod-network-cidr passed to kubeadm init; here it is 172.16.0.0/16
vi kube-flannel.yml
# Install
kubectl apply -f kube-flannel.yml

# Output
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
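For reference, after the edit the net-conf.json section inside kube-flannel.yml should look roughly like this (structure as in the v0.11.0 manifest; verify against the copy you downloaded):

  net-conf.json: |
    {
      "Network": "172.16.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }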
View the flannel network information
more /run/flannel/subnet.env
FLANNEL_NETWORK=172.16.0.0/16
FLANNEL_SUBNET=172.16.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
View the flannel network configuration
more /etc/cni/net.d/10-flannel.conflist
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
List the pods again; the newly added flannel pod shows up:
kube-flannel-ds-amd64-kkxlm 1/1 Running 0 3m5s
View a pod's logs
kubectl logs coredns-6955765f44-7dnqv -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
List the nodes; at this point only the master host is present:
kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
kube01   Ready    master   78m   v1.17.0
Join Node Hosts to the Cluster
Use the command produced by kubeadm init earlier; it needs sudo. Unlike tutorials found elsewhere online, there is no need to copy config files over from the master: in actual testing, running the command below was enough to join the cluster.
sudo kubeadm join 192.168.11.129:6443 --token f3jgn2.5w8152dpifacihnj --discovery-token-ca-cert-hash sha256:cc1ae32e0924dffa587b5d94b61005ae892db289f1a59f1ef71b45a7eda65ca3
Output
W1231 10:42:36.665020    6229 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check the newly joined node host on the master:
kubectl get nodes

# Shown as NotReady at first
NAME     STATUS     ROLES    AGE    VERSION
kube01   Ready      master   105m   v1.17.0
kube02   NotReady   <none>   10s    v1.17.0

# After a while it becomes Ready
NAME     STATUS   ROLES    AGE    VERSION
kube01   Ready    master   107m   v1.17.0
kube02   Ready    <none>   109s   v1.17.0
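Note that the bootstrap token from kubeadm init expires after 24 hours by default; to join another node later, generate a fresh join command on the master:

# Prints a complete kubeadm join command with a newly created token
sudo kubeadm token create --print-join-command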
References
https://kubernetes.io/docs/setup/production-environment/container-runtimes/
http://pwittrock.github.io/docs/admin/kubeadm/
https://github.com/coreos/flannel
https://www.latelee.org/kubernetes/k8s-deploy-1.17.0-detail.html
https://blog.csdn.net/liukuan73/article/details/83116271