K8s Cluster (Kubernetes Cluster)

1. Prepare the environment

Three machines, physical or virtual.

Each machine needs Docker, Kubernetes, and etcd installed. It's best to set up one machine first and clone it for the other two; that guarantees the Docker, Kubernetes, and etcd versions stay identical.

For etcd installation, see 《CentOS8 安装/测试 etcd》.

For Docker installation, see 《CentOS 上 安装Docker》.

For Kubernetes installation details, see 《Kubernetes 安装(基础)》.

My setup here is three virtual machines:

master: 192.168.134.139

minion1: 192.168.134.138

minion2: 192.168.134.137

1.1 Configure hosts (required on master and minions):

vim /etc/hosts
192.168.134.139 master.oopxiajun.com  k8s-master
192.168.134.138 minion1.oopxiajun.com k8s-minion1
192.168.134.137 minion2.oopxiajun.com k8s-minion2

Once configured, test it:

ping k8s-master
PING k8s-master (192.168.134.139) 56(84) bytes of data.
64 bytes from k8s-master (192.168.134.139): icmp_seq=1 ttl=64 time=0.341 ms
64 bytes from k8s-master (192.168.134.139): icmp_seq=2 ttl=64 time=1.19 ms
64 bytes from k8s-master (192.168.134.139): icmp_seq=3 ttl=64 time=1.13 ms
64 bytes from k8s-master (192.168.134.139): icmp_seq=4 ttl=64 time=0.512 ms
^C
--- k8s-master ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 21ms
rtt min/avg/max/mdev = 0.341/0.793/1.189/0.372 ms

1.2 Disable the firewall (required on master and minions):

$ systemctl stop firewalld
$ systemctl disable firewalld
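
Turning the firewall off entirely is the quickest route for a lab cluster. If you would rather leave firewalld running, here is a sketch of opening just the ports Kubernetes needs on the master instead (workers would need 10250/tcp, 30000-32767/tcp for NodePort services, and 8472/udp for flannel):

firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd client/peer
firewall-cmd --permanent --add-port=10250/tcp       # kubelet API
firewall-cmd --permanent --add-port=10251/tcp       # kube-scheduler
firewall-cmd --permanent --add-port=10252/tcp       # kube-controller-manager
firewall-cmd --permanent --add-port=8472/udp        # flannel VXLAN overlay
firewall-cmd --reload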

1.3 Disable SELinux (required on master and minions):

vim /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted

Change SELINUX=enforcing to SELINUX=disabled.
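
The config file change only takes effect after a reboot. To switch SELinux off for the current session as well:

setenforce 0   # permissive mode until reboot; the config file handles subsequent boots
getenforce     # verify: prints Permissive now, Disabled after a reboot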

1.4 Disable swap (required on master and minions):

vim /etc/fstab

Comment out the swap partition entry:

#
# /etc/fstab
# Created by anaconda on Thu Apr  9 22:39:56 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=e4976f5b-c44e-4fba-b0c7-3b10bb939db2 /boot                   ext4    defaults        1 2
/dev/mapper/cl-home     /home                   xfs     defaults        0 0
#/dev/mapper/cl-swap     swap                    swap    defaults        0 0
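
The fstab edit only prevents swap from mounting on the next boot. To turn it off immediately as well:

swapoff -a   # disable all active swap now
free -h      # verify: the Swap line should read 0B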

1.5 Configure the Docker daemon (required on master and minions):

vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
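
This switches Docker to the systemd cgroup driver that kubeadm recommends; without it, kubeadm init prints an IsDockerSystemdCheck warning (visible in the output below). Restart Docker and verify the driver took effect:

systemctl daemon-reload
systemctl restart docker
docker info | grep -i cgroup   # should report: Cgroup Driver: systemd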

1.6 Synchronize host clocks

See 《CentOS 8时间网络同步设置》.
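
If you skip that article, a minimal sketch using chrony, the default time service on CentOS 8 (run on every node):

dnf install -y chrony
systemctl enable --now chronyd   # start now and on every boot
chronyc sources -v               # verify the node is syncing against an NTP source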

2. Initialization (run on the master)

2.1 Initialize the environment

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.1 --pod-network-cidr 10.244.0.0/16 --dry-run

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.1 --pod-network-cidr 10.244.0.0/16

The first command is a dry run: it walks through initialization without changing anything, so you can check whether anything errors out. Once it completes cleanly, run the second command, the same thing without --dry-run.
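
Optionally, you can also pre-pull the control-plane images first (the preflight output below suggests this as well), so the real init doesn't stall on downloads:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.1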

W0411 00:44:21.649138   15286 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.134.139]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.134.139 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.134.139 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0411 00:47:50.626620   15286 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0411 00:47:50.628479   15286 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 67.346768 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: pzgjng.br2fhu3df6u84l2g
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.134.139:6443 --token pzgjng.br2fhu3df6u84l2g \
    --discovery-token-ca-cert-hash sha256:74dfe4c325e794e10acb6a8db95045ac7dcb2b754e71957fa40c719267354f2e 

When you see the line

Your Kubernetes control-plane has initialized successfully!

initialization has succeeded. Make sure to record the join command and its token:

kubeadm join 192.168.134.139:6443 --token pzgjng.br2fhu3df6u84l2g \
    --discovery-token-ca-cert-hash sha256:74dfe4c325e794e10acb6a8db95045ac7dcb2b754e71957fa40c719267354f2e
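
Tokens are only valid for 24 hours by default. If this one expires or you lose the output, you can print a fresh join command on the master at any time:

kubeadm token create --print-join-command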

2.2 The directory structure after initialization

tree /etc/kubernetes/
/etc/kubernetes/
├── admin.conf
├── controller-manager.conf
├── kubelet.conf
├── manifests
│   ├── etcd.yaml
│   ├── kube-apiserver.yaml
│   ├── kube-controller-manager.yaml
│   └── kube-scheduler.yaml
├── pki
│   ├── apiserver.crt
│   ├── apiserver-etcd-client.crt
│   ├── apiserver-etcd-client.key
│   ├── apiserver.key
│   ├── apiserver-kubelet-client.crt
│   ├── apiserver-kubelet-client.key
│   ├── ca.crt
│   ├── ca.key
│   ├── etcd
│   │   ├── ca.crt
│   │   ├── ca.key
│   │   ├── healthcheck-client.crt
│   │   ├── healthcheck-client.key
│   │   ├── peer.crt
│   │   ├── peer.key
│   │   ├── server.crt
│   │   └── server.key
│   ├── front-proxy-ca.crt
│   ├── front-proxy-ca.key
│   ├── front-proxy-client.crt
│   ├── front-proxy-client.key
│   ├── sa.key
│   └── sa.pub
└── scheduler.conf

2.3 Set up kubectl permissions (required):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
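
Alternatively, if you work as root, pointing kubectl straight at the admin kubeconfig also works:

export KUBECONFIG=/etc/kubernetes/admin.conf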

2.4 Check the nodes

kubectl get nodes
NAME                    STATUS     ROLES    AGE   VERSION
k8s-master              NotReady   master   69m   v1.18.1

2.5 Install the network add-on

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

We can now deploy the add-on from its YAML manifest.

Go to https://github.com/coreos/flannel and find the kubectl apply command in the README.

Run it:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Now check the pods in the kube-system namespace:

kubectl get pods -n kube-system
NAME                                 READY   STATUS              RESTARTS   AGE
coredns-7ff77c879f-bxtpj             0/1     ContainerCreating   0          62s
coredns-7ff77c879f-l9hsp             0/1     ContainerCreating   0          62s
etcd-k8s-master                      1/1     Running             0          76s
kube-apiserver-k8s-master            1/1     Running             0          76s
kube-controller-manager-k8s-master   1/1     Running             0          76s
kube-flannel-ds-amd64-jxgbh          1/1     Running             0          6s
kube-proxy-c7gdq                     1/1     Running             0          62s
kube-scheduler-k8s-master            1/1     Running             0          76s

Once the containers finish creating and everything is Running, the control plane is ready:

kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-bxtpj             1/1     Running   0          78s
coredns-7ff77c879f-l9hsp             1/1     Running   0          78s
etcd-k8s-master                      1/1     Running   0          92s
kube-apiserver-k8s-master            1/1     Running   0          92s
kube-controller-manager-k8s-master   1/1     Running   0          92s
kube-flannel-ds-amd64-jxgbh          1/1     Running   0          22s
kube-proxy-c7gdq                     1/1     Running   0          78s
kube-scheduler-k8s-master            1/1     Running   0          92s
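
Rather than re-running the command until everything settles, you can watch status changes stream in (Ctrl+C to stop):

kubectl get pods -n kube-system -w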

3. Join the worker nodes

On each minion (minion1 at 192.168.134.138 and minion2 at 192.168.134.137), join the master:

kubeadm join 192.168.134.139:6443 --token pzgjng.br2fhu3df6u84l2g \
    --discovery-token-ca-cert-hash sha256:74dfe4c325e794e10acb6a8db95045ac7dcb2b754e71957fa40c719267354f2e 

Watch the nodes come up on the master:

[root@k8s-master xiajun]# kubectl get nodes
NAME                    STATUS   ROLES    AGE   VERSION
k8s-master              Ready    master   87m   v1.18.1
minion1.oopxiajun.com   Ready    <none>   8s    v1.18.1
[root@k8s-master xiajun]# kubectl get nodes
NAME                    STATUS     ROLES    AGE   VERSION
k8s-master              Ready      master   88m   v1.18.1
minion1.oopxiajun.com   Ready      <none>   40s   v1.18.1
minion2.oopxiajun.com   NotReady   <none>   8s    v1.18.1
[root@k8s-master xiajun]# kubectl get nodes
NAME                    STATUS   ROLES    AGE   VERSION
k8s-master              Ready    master   88m   v1.18.1
minion1.oopxiajun.com   Ready    <none>   44s   v1.18.1
minion2.oopxiajun.com   Ready    <none>   12s   v1.18.1
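
To double-check which IP and runtime each node registered with, add -o wide:

kubectl get nodes -o wide   # adds INTERNAL-IP, OS-IMAGE, CONTAINER-RUNTIME columns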

4. Testing

4.1 Create the YAML

vim first-pod.yml
apiVersion: v1 # API version
kind: Pod
metadata:
  name: my-first-pod
  labels:
    app: bash
    tir: backend
spec:
  containers:
  - name: bash-container
    image: docker.io/busybox
    command: ['sh','-c','echo Hello myFirstPod! && sleep 3600']

4.2 Create the pod

kubectl create -f first-pod.yml 
pod/my-first-pod created

Check the pod:

kubectl get pod
NAME           READY   STATUS              RESTARTS   AGE
my-first-pod   0/1     ContainerCreating   0          31s

kubectl get pod
NAME           READY   STATUS    RESTARTS   AGE
my-first-pod   1/1     Running   0          4m7s
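
To see which minion the pod was scheduled onto, along with its pod IP, add -o wide here as well:

kubectl get pod my-first-pod -o wide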

Query detailed information

#Show the Pod's full information in YAML
kubectl get pod my-first-pod -o yaml

#Show the Pod's full information in JSON
kubectl get pod my-first-pod -o json

Status and lifecycle

kubectl describe pod my-first-pod
Name:         my-first-pod
Namespace:    default
Priority:     0
Node:         minion1.oopxiajun.com/192.168.134.138
Start Time:   Sat, 11 Apr 2020 23:00:12 +0800
Labels:       app=bash
              tir=backend
Annotations:  <none>
Status:       Running
IP:           10.244.1.3
IPs:
  IP:  10.244.1.3
Containers:
  bash-container:
    Container ID:  docker://39721475cec1c2060b78be6601a3991b1d3a008ce442a3aaff1762adc9a48903
    Image:         docker.io/busybox
    Image ID:      docker-pullable://busybox@sha256:b26cd013274a657b86e706210ddd5cc1f82f50155791199d29b9e86e935ce135
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      echo Hello myFirstPod! && sleep 3600
    State:          Running
      Started:      Sat, 11 Apr 2020 23:02:36 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-wkm7m (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-wkm7m:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-wkm7m
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                  From                            Message
  ----     ------     ----                 ----                            -------
  Normal   Scheduled  <unknown>            default-scheduler               Successfully assigned default/my-first-pod to minion1.oopxiajun.com
  Warning  Failed     10m                  kubelet, minion1.oopxiajun.com  Failed to pull image "docker.io/busybox": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: TLS handshake timeout
  Warning  Failed     10m                  kubelet, minion1.oopxiajun.com  Failed to pull image "docker.io/busybox": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/library/busybox/manifests/latest: Get https://auth.docker.io/token?scope=repository%3Alibrary%2Fbusybox%3Apull&service=registry.docker.io: net/http: TLS handshake timeout
  Warning  Failed     9m42s                kubelet, minion1.oopxiajun.com  Failed to pull image "docker.io/busybox": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     9m42s (x3 over 10m)  kubelet, minion1.oopxiajun.com  Error: ErrImagePull
  Normal   BackOff    9m3s (x5 over 10m)   kubelet, minion1.oopxiajun.com  Back-off pulling image "docker.io/busybox"
  Warning  Failed     9m3s (x5 over 10m)   kubelet, minion1.oopxiajun.com  Error: ImagePullBackOff
  Normal   Pulling    8m52s (x4 over 10m)  kubelet, minion1.oopxiajun.com  Pulling image "docker.io/busybox"
  Normal   Pulled     8m30s                kubelet, minion1.oopxiajun.com  Successfully pulled image "docker.io/busybox"
  Normal   Created    8m30s                kubelet, minion1.oopxiajun.com  Created container bash-container
  Normal   Started    8m29s                kubelet, minion1.oopxiajun.com  Started container bash-container
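
The Warning events above show the image pull from docker.io timing out several times before finally succeeding. If pulls fail persistently in your environment, one option is adding a registry mirror to the Docker daemon config on each node; the mirror URL below is a placeholder, substitute one reachable from your network, then restart Docker:

vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://<your-mirror>"]
}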

View the logs:

kubectl logs my-first-pod
Hello myFirstPod!
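
Since the container just sleeps, you can also open a shell inside it, and delete the pod when you're done testing:

kubectl exec -it my-first-pod -- sh   # interactive shell in the busybox container
kubectl delete pod my-first-pod       # clean up the test pod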