Linux Operations Guide — Deploying a Kubernetes Cluster with kubeadm


References: the official Kubernetes documentation and the Kubernetes中文社区 (Kubernetes Chinese community).

I. k8s Architecture

1. Architecture diagram

(architecture diagram image not reproduced in this text version)

2. Cluster components

Component — Role
kubectl — the Kubernetes command-line client, used to send operation commands to the cluster
APIs — i.e. the API server, the front-end interface of the k8s cluster; client tools and the other k8s components manage all cluster resources through it. It exposes an HTTP/HTTPS RESTful API, the K8S API
authentication, authorization — authentication and authorization
scheduling actuator (Scheduler) — decides which Node each Pod runs on; scheduling takes into account the cluster topology, the current load on each node, and requirements for high availability, performance, and data affinity
REST — the resource objects exposed through the API. ReplicaSets are used mainly by Deployments to create, delete, and update Pods, ensuring the specified number of replicas is running; Services map to virtual IPs that are valid inside the cluster, through which the service is reached; Pods are the smallest unit for deploying an application or service in a k8s cluster and may hold multiple containers, though in most cases a Pod contains a single container
Controller Manager — maintains cluster state (failure detection, auto-scaling, rolling updates, and so on), keeping resources in their desired state. It consists of multiple controllers, including the Replication Controller, Endpoints Controller, Namespace Controller, and ServiceAccounts Controller
Etcd — stores the cluster's configuration and the state of every resource; when data changes, etcd quickly notifies the relevant k8s components. It is a third-party component with drop-in alternatives such as Consul and ZooKeeper
Kubelet — the agent on each Node. Once the Scheduler places a Pod on a Node, the Pod's configuration is sent to that node's kubelet, which creates and runs the containers, manages the Pod's lifecycle, and reports status back to the Master
kube-proxy — forwards TCP/UDP traffic addressed to a Service to the backend containers, providing cluster-internal service discovery and load balancing for Services
cAdvisor — monitors the resources and containers on a Node in real time, collecting performance data such as CPU usage, memory usage, network throughput, and filesystem usage. cAdvisor is built into the kubelet and starts with it automatically, so each cAdvisor instance monitors exactly one Node
container — the containers themselves
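Once the cluster is up (see section III), several of these components can be checked from the command line; the commands below are a quick sketch for v1.15, where `kubectl get componentstatuses` is still available:

kubectl get componentstatuses      # health of scheduler, controller-manager, etcd (short form: kubectl get cs)
kubectl get pods -n kube-system    # the control-plane components running as static pods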

II. Global Setup

These steps prepare all three machines (master, node01, and node02). The prompts below show master; files such as /etc/hosts and the Kubernetes repo are then copied to the nodes with scp.

master   192.168.1.10
node01   192.168.1.20
node02   192.168.1.30

1. Base environment

One or more machines running a deb/rpm-compatible Linux OS, e.g. Ubuntu or CentOS.
At least 2 GB of RAM per machine; with less, applications will be constrained.
At least 2 CPUs on the machine used as the control-plane node.
Full network connectivity between all machines in the cluster; either a public or a private network works.

2. Stop the firewall, flush iptables, and disable SELinux

[root@master ~]# systemctl stop firewalld
[root@master ~]# systemctl disable firewalld
[root@master ~]# iptables -F
[root@master ~]# iptables-save
[root@master ~]# vim /etc/selinux/config
SELINUX=disabled
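Editing /etc/selinux/config only takes effect after a reboot; to turn SELinux off for the current session as well:

[root@master ~]# setenforce 0
[root@master ~]# getenforce
Permissive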

3. Disable swap

[root@master ~]# swapoff -a
[root@master ~]# vim /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
[root@master ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:           1.8G        802M         66M         24M        950M        831M
Swap:            0B          0B          0B
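If you prefer not to edit /etc/fstab by hand, a sed one-liner can comment out the swap entry instead (this sketch assumes "swap" only appears on the swap mount line, as in the default CentOS fstab):

[root@master ~]# sed -ri 's/^([^#].*\bswap\b.*)$/#\1/' /etc/fstab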

4. Configure hostname resolution

[root@master ~]# vim /etc/hosts
192.168.1.10 master
192.168.1.20 node01
192.168.1.30 node02
[root@master ~]# scp /etc/hosts node01:/etc/hosts
[root@master ~]# scp /etc/hosts node02:/etc/hosts

5. Enable passwordless SSH from the master

[root@master ~]# ssh-keygen -t rsa
[root@master ~]# ssh-copy-id root@node01
[root@master ~]# ssh-copy-id root@node02
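Verify that key-based login works; the command should print the hostname without asking for a password:

[root@master ~]# ssh node01 hostname
node01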

6. Install Docker and configure a registry mirror

[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# yum repolist
[root@master ~]# yum makecache
[root@master ~]# yum -y install docker-ce-18.09.0-3.el7 docker-ce-cli-18.09.0-3.el7
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker
[root@master ~]# sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://12azv802.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
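To confirm the mirror was picked up:

[root@master ~]# docker info | grep -A 1 "Registry Mirrors"    # should list the aliyuncs mirror URL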

7. Enable iptables bridge filtering

[root@master ~]# vim /etc/sysctl.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
[root@master ~]# sysctl -p
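If sysctl -p complains that the net.bridge.* keys are unknown, the br_netfilter kernel module has not been loaded yet:

[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl -p
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1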

8. Add the Kubernetes yum repository

[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node01:/etc/yum.repos.d/
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node02:/etc/yum.repos.d/
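A quick check that the repository works and the 1.15.0 packages are visible:

[root@master ~]# yum list kubelet kubeadm kubectl --showduplicates | grep 1.15.0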

III. Deploying the Master Node

1. Install the required components

[root@master ~]# yum -y install kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0 
[root@master ~]# systemctl enable kubelet

2. Pull the control-plane images

kubeadm expects these images under k8s.gcr.io, which is often unreachable from mainland China, so pull them from the Aliyun mirror and re-tag them:

docker pull registry.aliyuncs.com/google_containers/coredns:1.3.1
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0
docker pull registry.aliyuncs.com/google_containers/etcd:3.3.10
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.0
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.0
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0
docker pull registry.aliyuncs.com/google_containers/pause:3.1
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.0 k8s.gcr.io/kube-apiserver:v1.15.0
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.0 k8s.gcr.io/kube-controller-manager:v1.15.0
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.0 k8s.gcr.io/kube-scheduler:v1.15.0
docker tag registry.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag registry.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
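Since the image names and tags are identical on both registries, the pulls and tags above can also be collapsed into a loop; a minimal equivalent sketch:

for img in kube-apiserver:v1.15.0 kube-controller-manager:v1.15.0 \
           kube-scheduler:v1.15.0 kube-proxy:v1.15.0 \
           etcd:3.3.10 coredns:1.3.1 pause:3.1; do
    docker pull registry.aliyuncs.com/google_containers/$img              # pull from the Aliyun mirror
    docker tag  registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img    # re-tag for kubeadm
done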

3. Initialize the cluster

[root@master ~]# kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.10:6443 --token 24bj0y.67cd6dsp5bao7ypu \
    --discovery-token-ca-cert-hash sha256:668f9ee00d17a77b81d47e792f71aa32dc9750a604875793a4eea97b55b0f50e 

4. Allow a regular (non-root) user to run kubectl

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
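A quick sanity check that kubectl can now reach the API server:

[root@master ~]# kubectl cluster-info    # should report the control plane at https://192.168.1.10:6443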

5. Deploy the network add-on (flannel)

[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If network access is poor, save the manifest below locally and apply it instead (the apply command follows the manifest):

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
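
Assuming the manifest above was saved locally as kube-flannel.yml, apply it with:

[root@master ~]# kubectl apply -f kube-flannel.yml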

If the flannel image itself cannot be pulled, download the release tarball and load it manually:

[root@master ~]# wget https://github.com/coreos/flannel/releases/download/v0.13.0/flanneld-v0.13.0-amd64.docker
[root@master ~]# docker load < flanneld-v0.13.0-amd64.docker

6. Check status

1. Check cluster node status

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   12m   v1.15.0

2. Check pod status (make sure all are Running)

[root@master ~]# kubectl get pod --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-mvb6q         1/1     Running   0          14m
kube-system   coredns-5c98db65d4-zs5mr         1/1     Running   0          14m
kube-system   etcd-master                      1/1     Running   0          13m
kube-system   kube-apiserver-master            1/1     Running   0          13m
kube-system   kube-controller-manager-master   1/1     Running   0          13m
kube-system   kube-flannel-ds-v8xqk            1/1     Running   0          3m43s
kube-system   kube-proxy-jp5h8                 1/1     Running   0          14m
kube-system   kube-scheduler-master            1/1     Running   0          13m

IV. Adding Worker Nodes (same steps on node01 and node02)

1. Install the components

[root@node01 ~]# yum -y install kubelet-1.15.0 kubeadm-1.15.0
[root@node01 ~]# systemctl enable kubelet

2. Import the images

Export the flannel image from the master, then pull and re-tag the remaining images on each node:

[root@master ~]# docker save quay.io/coreos/flannel > flannel.tar
[root@master ~]# scp flannel.tar node01:
[root@master ~]# scp flannel.tar node02:
[root@node01 ~]# docker pull registry.aliyuncs.com/google_containers/pause:3.1
[root@node01 ~]# docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0
[root@node01 ~]# docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
[root@node01 ~]# docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0
[root@node01 ~]# docker load < flannel.tar

3. Join the cluster

[root@node01 ~]# kubeadm join 192.168.1.10:6443 --token 24bj0y.67cd6dsp5bao7ypu \
>     --discovery-token-ca-cert-hash sha256:668f9ee00d17a77b81d47e792f71aa32dc9750a604875793a4eea97b55b0f50e

4. Check node status from the master

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
master   Ready      master   18m   v1.15.0
node01   NotReady   <none>   41s   v1.15.0
node02   NotReady   <none>   49s   v1.15.0
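The new nodes stay NotReady until their flannel and kube-proxy pods come up; watch them start and then re-check:

[root@master ~]# kubectl get pod --all-namespaces | grep flannel
[root@master ~]# kubectl get nodes    # all nodes should report Ready after a minute or so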

5. Adding a new node later

The bootstrap token printed by kubeadm init expires after 24 hours by default; to join another node (node03 here) afterwards, generate a fresh join command on the master:

[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.1.10:6443 --token sib4y9.tddwkdxptqedihzx     --discovery-token-ca-cert-hash sha256:64167ce991c5e34fca1be1c93d333c2fe3c4f08d634eeafd33174af0c66981bc 
[root@node03 ~]# kubeadm join 192.168.1.10:6443 --token sib4y9.tddwkdxptqedihzx     --discovery-token-ca-cert-hash sha256:64167ce991c5e34fca1be1c93d333c2fe3c4f08d634eeafd33174af0c66981bc

V. Convenience Settings

1. Enable kubectl command-line auto-completion

[root@master ~]# yum install -y bash-completion
[root@master ~]# source /usr/share/bash-completion/bash_completion
[root@master ~]# source <(kubectl completion bash)
[root@master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc

2. Set the tab width in vim (handy when editing YAML manifests)

[root@master ~]# vim .vimrc
set tabstop=2
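YAML forbids literal tab characters, so it can also help to have vim expand tabs into spaces; one extra line in the same .vimrc:

set expandtab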

VI. Deploying Kubernetes 1.18

The procedure mirrors the 1.15 deployment above; the main differences are an upgraded kernel from ELRepo and the newer 1.18 component images.

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
yum --disablerepo=\* --enablerepo=elrepo-kernel repolist
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml.x86_64
yum -y remove kernel-tools-libs.x86_64 kernel-tools.x86_64
yum --disablerepo=\* --enablerepo=elrepo-kernel install -y kernel-ml-tools.x86_64
yum remove -y kernel-3.10.0-957.el7.x86_64    # remove the old kernel only after rebooting into the new kernel-ml
yum remove docker
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y install docker-ce
yum -y install kubeadm-1.18.0 kubelet-1.18.0 kubectl-1.18.0    # on the master
yum -y install kubeadm-1.18.0 kubelet-1.18.0                   # on the worker nodes
vim images.txt
registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0
registry.aliyuncs.com/google_containers/etcd:3.4.3-0
registry.aliyuncs.com/google_containers/coredns:1.6.7
registry.aliyuncs.com/google_containers/pause:3.2
[root@master images]# for i in `cat images.txt` ; do docker pull $i;done
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag registry.aliyuncs.com/google_containers/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
kubeadm init --kubernetes-version=v1.18.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
[root@master images]# mkdir -p $HOME/.kube
[root@master images]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master images]# chown $(id -u):$(id -g) $HOME/.kube/config
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
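As with the 1.15 deployment, finish by confirming that the node is Ready and all system pods reach Running:

kubectl get nodes
kubectl get pod --all-namespaces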

Reposted from blog.csdn.net/g950904/article/details/108808765