Installing a Kubernetes Cluster with kubeadm

Copyright notice: this is the author's original article; do not repost without permission. https://blog.csdn.net/huaishu/article/details/88822030

Server list

Hostname                  IP address      Memory  Role
iz94m4komqtz              172.18.11.126   8G      Master
iZwz92up3fg0iz4ryf7v1uZ   172.18.11.128   4G      Slave1
iZwz9evsidoafzcicmva9nZ   172.18.20.14    2G      Slave2

Versions

  • OS: CentOS Linux release 7.4.1708 (Core)
  • Docker: 18.09.3
  • kubectl: 1.13.4
  • kubelet: 1.13.4
  • kubeadm: 1.13.4
  • Kubernetes: 1.13.4

Install kubeadm, kubelet, and kubectl

  • kubeadm: the command that bootstraps the cluster.

  • kubelet: the agent that runs on every node in the cluster and starts pods and containers.

  • kubectl: the command-line tool for communicating with the cluster.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux to permissive mode (effectively disabling its enforcement)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Disable swap (also comment out any swap entries in /etc/fstab so this persists across reboots)
swapoff -a

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable kubelet && systemctl start kubelet

kubelet will now restart every few seconds, crash-looping as it waits for instructions from kubeadm. This is expected until kubeadm init has been run.

Configure the cgroup driver required by kubelet on the Master node

Note that you only need to do this when your cgroup driver is not cgroupfs, since cgroupfs is already kubelet's default.

Make sure Docker's cgroup driver and kubelet's cgroup driver match:

docker info | grep -i cgroup

cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

# If the two values differ, run:
sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

systemctl daemon-reload

Configure kubectl:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile 
echo $KUBECONFIG

Creating the cluster

Initialize the Kubernetes cluster

Run this on the master node (the other nodes must run it as well, since every node needs these images):

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.4
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.4
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.4
docker pull mirrorgooglecontainers/kube-proxy:v1.13.4
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull coredns/coredns:1.2.6

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.4 k8s.gcr.io/kube-apiserver:v1.13.4
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.4 k8s.gcr.io/kube-controller-manager:v1.13.4
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.4 k8s.gcr.io/kube-scheduler:v1.13.4
docker tag mirrorgooglecontainers/kube-proxy:v1.13.4 k8s.gcr.io/kube-proxy:v1.13.4
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6

docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.4
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.4
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.4
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.4
docker rmi mirrorgooglecontainers/pause-amd64:3.1
docker rmi mirrorgooglecontainers/etcd-amd64:3.2.24
docker rmi coredns/coredns:1.2.6
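The repetitive pull/tag/rmi sequence above can be generated from a single list. A minimal sketch (the `mirror_image_cmds` helper is our own name, not part of any tool): it only prints the commands, so you can review them before piping the output to `sh`. CoreDNS is left out because it is pulled from its own `coredns/coredns` repository, as shown above.

```shell
# Generate the docker pull/tag/rmi commands from one list.
# Entries are "source-image target-name"; the target defaults to the source.
mirror_image_cmds() {
  while read -r src dst; do
    [ -z "$src" ] && continue
    dst=${dst:-$src}
    echo "docker pull mirrorgooglecontainers/$src"
    echo "docker tag mirrorgooglecontainers/$src k8s.gcr.io/$dst"
    echo "docker rmi mirrorgooglecontainers/$src"
  done <<'EOF'
kube-apiserver:v1.13.4
kube-controller-manager:v1.13.4
kube-scheduler:v1.13.4
kube-proxy:v1.13.4
pause-amd64:3.1 pause:3.1
etcd-amd64:3.2.24 etcd:3.2.24
EOF
}

# Print the commands for review:
mirror_image_cmds
```

After reviewing the output, `mirror_image_cmds | sh` executes the generated commands.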

Create the cluster with kubeadm

Run on the Master node:
kubeadm init --kubernetes-version=v1.13.4 --apiserver-advertise-address 172.18.11.126 --pod-network-cidr=172.18.0.0/16

  • --kubernetes-version: specifies the Kubernetes version.
  • --apiserver-advertise-address: specifies which of the Master's network interfaces to use for cluster communication; if omitted, kubeadm automatically picks the interface with the default gateway.
  • --pod-network-cidr: specifies the Pod network range. The value depends on the network add-on in use; this article uses the classic flannel add-on (note that flannel's stock manifest assumes 10.244.0.0/16, so if you pass a different range, as here, adjust the manifest to match).
[root@iZ94m4komqtZ opt]# kubeadm init --kubernetes-version=v1.13.4 --apiserver-advertise-address 172.18.11.126 --pod-network-cidr=172.18.0.0/16
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [iz94m4komqtz localhost] and IPs [172.18.11.126 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [iz94m4komqtz localhost] and IPs [172.18.11.126 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [iz94m4komqtz kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.18.11.126]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.503389 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "iz94m4komqtz" as an annotation
[mark-control-plane] Marking the node iz94m4komqtz as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node iz94m4komqtz as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: e1a7uv.i3b74b8bzeqb6chq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.18.11.126:6443 --token e1a7uv.i3b74b8bzeqb6chq --discovery-token-ca-cert-hash sha256:1eaaeb3be525cd4fe814b5f2f87893915322348207c0e222c0c002d338eaf086

Keep this output: the join command is needed later when adding nodes to the cluster.

kubeadm join 172.18.11.126:6443 --token e1a7uv.i3b74b8bzeqb6chq --discovery-token-ca-cert-hash sha256:1eaaeb3be525cd4fe814b5f2f87893915322348207c0e222c0c002d338eaf086
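The sha256 value in the join command is not secret state; it can be recomputed at any time from the cluster CA certificate (kubeadm's discovery hash is the SHA-256 of the CA's public key in DER form). A sketch, assuming `openssl` is installed; `ca_cert_hash` is our own helper name:

```shell
# Recompute the --discovery-token-ca-cert-hash value from a CA certificate.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On the master:
#   echo "sha256:$(ca_cert_hash /etc/kubernetes/pki/ca.crt)"
```

If the token itself has expired (the default TTL is 24 hours), running `kubeadm token create --print-join-command` on the master prints a fresh, complete join command.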

Then run the following:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@iZ94m4komqtZ opt]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE     IP              NODE           NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-2grzn               0/1     Pending   0          5m45s   <none>          <none>         <none>           <none>
kube-system   coredns-86c58d9df4-w2hf5               0/1     Pending   0          5m45s   <none>          <none>         <none>           <none>
kube-system   etcd-iz94m4komqtz                      1/1     Running   0          5m6s    172.18.11.126   iz94m4komqtz   <none>           <none>
kube-system   kube-apiserver-iz94m4komqtz            1/1     Running   0          4m48s   172.18.11.126   iz94m4komqtz   <none>           <none>
kube-system   kube-controller-manager-iz94m4komqtz   1/1     Running   0          5m7s    172.18.11.126   iz94m4komqtz   <none>           <none>
kube-system   kube-proxy-v7nrp                       1/1     Running   0          5m45s   172.18.11.126   iz94m4komqtz   <none>           <none>
kube-system   kube-scheduler-iz94m4komqtz            1/1     Running   0          5m1s    172.18.11.126   iz94m4komqtz   <none>           <none>

Create the Pod network

Kubernetes supports many network add-ons. This article uses the classic flannel add-on; Calico is another common option.

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system
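A quick way to confirm the file was written as intended before relying on it (a sketch; `check_sysctl_file` is our own helper name, and it parses the file rather than live kernel state, so it also works before the br_netfilter module is loaded):

```shell
# Verify that each required key is set to 1 in the given sysctl fragment.
check_sysctl_file() {
  for key in net.bridge.bridge-nf-call-iptables \
             net.bridge.bridge-nf-call-ip6tables \
             net.ipv4.ip_forward; do
    grep -q "^$key *= *1" "$1" || { echo "missing or not 1: $key"; return 1; }
  done
  echo "ok: $1"
}

# check_sysctl_file /etc/sysctl.d/k8s.conf
```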

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

Check Pod status

[root@iZ94m4komqtZ opt]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-2grzn               1/1     Running   0          22m   172.18.0.2      iz94m4komqtz   <none>           <none>
kube-system   coredns-86c58d9df4-w2hf5               1/1     Running   0          22m   172.18.0.3      iz94m4komqtz   <none>           <none>
kube-system   etcd-iz94m4komqtz                      1/1     Running   0          21m   172.18.11.126   iz94m4komqtz   <none>           <none>
kube-system   kube-apiserver-iz94m4komqtz            1/1     Running   0          21m   172.18.11.126   iz94m4komqtz   <none>           <none>
kube-system   kube-controller-manager-iz94m4komqtz   1/1     Running   0          21m   172.18.11.126   iz94m4komqtz   <none>           <none>
kube-system   kube-flannel-ds-amd64-7m8nc            1/1     Running   0          10m   172.18.11.126   iz94m4komqtz   <none>           <none>
kube-system   kube-proxy-v7nrp                       1/1     Running   0          22m   172.18.11.126   iz94m4komqtz   <none>           <none>
kube-system   kube-scheduler-iz94m4komqtz            1/1     Running   0          21m   172.18.11.126   iz94m4komqtz   <none>           <none>

Configuring the cluster

Allow the Master to run workloads

By default Kubernetes will not schedule Pods onto the Master, which leaves its resources idle. To let the Master (iz94m4komqtz here) also act as a worker node, run the following on it:
kubectl taint nodes --all node-role.kubernetes.io/master-

Join the other nodes to the cluster

Run the join command on each of the other nodes:

kubeadm join 172.18.11.126:6443 --token e1a7uv.i3b74b8bzeqb6chq --discovery-token-ca-cert-hash sha256:1eaaeb3be525cd4fe814b5f2f87893915322348207c0e222c0c002d338eaf086

[root@iZwz92up3fg0iz4ryf7v1uZ db]# kubeadm join 172.18.11.126:6443 --token e1a7uv.i3b74b8bzeqb6chq --discovery-token-ca-cert-hash sha256:1eaaeb3be525cd4fe814b5f2f87893915322348207c0e222c0c002d338eaf086
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[discovery] Trying to connect to API Server "172.18.11.126:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.18.11.126:6443"
[discovery] Requesting info from "https://172.18.11.126:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.18.11.126:6443"
[discovery] Successfully established connection with API Server "172.18.11.126:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "izwz92up3fg0iz4ryf7v1uz" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Verify the cluster

Run kubectl get nodes on the master node; the cluster is up once every node reports Ready.

[root@iZ94m4komqtZ ~]# kubectl get nodes
NAME                      STATUS     ROLES    AGE   VERSION
iz94m4komqtz              Ready      master   53m   v1.13.4
izwz92up3fg0iz4ryf7v1uz   NotReady   <none>   15m   v1.13.4
izwz9evsidoafzcicmva9nz   NotReady   <none>   15s   v1.13.4

The worker nodes also have to download their images via docker pull, so they will show NotReady for a while; just wait for the pulls to finish.

To check the status of all system pods, run kubectl get pods -n kube-system.
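Rather than re-running `kubectl get nodes` by hand while waiting, a small polling helper (our own sketch, not part of kubectl) can block until a condition holds:

```shell
# Retry a command once per second until it succeeds or attempts run out.
wait_for() {
  attempts=$1; shift
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge "$attempts" ] && return 1
    sleep 1
  done
}

# Block until every node reports Ready (checks once per second, up to 5 minutes):
#   wait_for 300 sh -c '! kubectl get nodes --no-headers | grep -qvw Ready'
```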

Tearing down the cluster

# First drain and delete the node (run on the master)
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

# Then reset the worker node itself
kubeadm reset
rm -rf /etc/kubernetes/
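With several workers, the drain/delete pair can likewise be generated per node and reviewed before execution (a sketch; `teardown_cmds` is our own name):

```shell
# Print the drain/delete command pair for each node name given.
teardown_cmds() {
  for node in "$@"; do
    echo "kubectl drain $node --delete-local-data --force --ignore-daemonsets"
    echo "kubectl delete node $node"
  done
}

# Review the output, then pipe to sh on the master:
#   teardown_cmds izwz92up3fg0iz4ryf7v1uz izwz9evsidoafzcicmva9nz | sh
```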

Troubleshooting commands

kubectl describe pod <POD NAME> -n kube-system   # show a Pod's deployment details; useful when debugging
kubectl logs <POD NAME> -n kube-system           # show a Pod's logs; useful when debugging

Installing the Dashboard

docker pull k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
