====== Build k8s cluster through kubeadm ======

1. Unified versions
  Docker 18.09.0
  ---
  kubeadm-1.14.0-0 
  kubelet-1.14.0-0 
  kubectl-1.14.0-0
  ---
  k8s.gcr.io/kube-apiserver:v1.14.0
  k8s.gcr.io/kube-controller-manager:v1.14.0
  k8s.gcr.io/kube-scheduler:v1.14.0
  k8s.gcr.io/kube-proxy:v1.14.0
  k8s.gcr.io/pause:3.1
  k8s.gcr.io/etcd:3.3.10
  k8s.gcr.io/coredns:1.3.1
  ---
  calico:v3.9
  
2. Prepare 3 CentOS machines and make sure they can ping each other, i.e., they are on the same network. The virtual machine configuration requirements are described above.

3. Update the system and install dependencies; run on all 3 machines
  [yum -y update]
  [yum install -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp]
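  
  Optional: ipvsadm/ipset above are only needed if kube-proxy later runs in ipvs mode. A hedged sketch for pre-loading the ipvs kernel modules (standard CentOS 7 module names; nf_conntrack_ipv4 applies to 3.x kernels):
  
    # load the kernel modules kube-proxy's ipvs mode relies on
    modprobe ip_vs
    modprobe ip_vs_rr
    modprobe ip_vs_wrr
    modprobe ip_vs_sh
    modprobe nf_conntrack_ipv4
    # verify they are loaded
    lsmod | grep -e ip_vs -e nf_conntrack_ipv4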
  
4. Install Docker on all three machines; see the docker installation notes for the steps

5. Set the hostnames and modify the hosts file

  1), master node: [sudo hostnamectl set-hostname m]
  
  2), the two workers respectively
  [sudo hostnamectl set-hostname w1]
  [sudo hostnamectl set-hostname w2]
  
  3), add all three nodes to the hosts file: [vi /etc/hosts]
  192.168.8.51 m
  192.168.8.61 w1
  192.168.8.62 w2
  
  4), test with ping: [ping w1]

6. Basic system prerequisite configuration
  # (1) Turn off the firewall
  systemctl stop firewalld && systemctl disable firewalld
  
  # (2) Turn off selinux
  setenforce 0
  sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
  
  # (3) Close swap
  swapoff -a
  sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
  
  # (4) Configure iptables ACCEPT rules
  iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT
  
  # (5) Set system parameters
  cat <<EOF > /etc/sysctl.d/k8s.conf
  net.bridge.bridge-nf-call-ip6tables = 1
  net.bridge.bridge-nf-call-iptables = 1
  EOF
  
  sysctl --system
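  
  If sysctl --system does not show the two bridge settings, the br_netfilter kernel module is probably not loaded yet. A quick check, assuming the stock CentOS 7 kernel:
  
    # load the bridge netfilter module, then confirm both settings are 1
    modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables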
  
7. Install kubeadm (bootstraps the k8s cluster), kubelet (runs the pods), and kubectl (the client tool for talking to the cluster)

  1), configure the yum source on the three nodes and execute
  cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
  enabled=1
  gpgcheck=0
  repo_gpgcheck=0
  gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
         http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF
  
  2), install kubeadm & kubelet & kubectl: [yum install -y kubeadm-1.14.0-0 kubelet-1.14.0-0 kubectl-1.14.0-0]

  3), make docker and kubelet use the same cgroup driver
    (1), [vi /etc/docker/daemon.json], add to the file: ["exec-opts": ["native.cgroupdriver=systemd"],], then restart docker (a sketch of the full daemon.json follows this list)
     [systemctl restart docker]
    
    (2), # kubelet: if sed reports that the file or directory does not exist, that also means there is no problem; you can continue
     [sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf]
     
    (3), once the installation succeeds, enable kubelet at boot: [systemctl enable kubelet && systemctl start kubelet]
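
  Two quick sanity checks for this step. First, a minimal complete /etc/docker/daemon.json, assuming nothing else was configured in it before (if the file already has other keys, merge "exec-opts" in rather than overwriting):
  
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
  
  Then confirm the installed versions match the unified versions from step 1:
  
    # all three should report v1.14.0
    kubeadm version -o short
    kubelet --version
    kubectl version --client --short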

8. Pull the kube images (proxy/pause/scheduler, etc.) from a domestic mirror

  1), check the images kubeadm uses: [kubeadm config images list], the output is as follows
    k8s.gcr.io/kube-apiserver:v1.14.0
    k8s.gcr.io/kube-controller-manager:v1.14.0
    k8s.gcr.io/kube-scheduler:v1.14.0
    k8s.gcr.io/kube-proxy:v1.14.0
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.10
    k8s.gcr.io/coredns:1.3.1
    
  2), work around the inaccessible foreign registry
    (1), on each node, create a kubeadm.sh script that pulls each image from the Aliyun mirror, re-tags it as k8s.gcr.io, and removes the Aliyun tag: [vi kubeadm.sh], with the following content
        #!/bin/bash
        
        set -e
        
        KUBE_VERSION=v1.14.0
        KUBE_PAUSE_VERSION=3.1
        ETCD_VERSION=3.3.10
        CORE_DNS_VERSION=1.3.1
        
        GCR_URL=k8s.gcr.io
        ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/google_containers
        
        images=(kube-proxy:${KUBE_VERSION}
        kube-scheduler:${KUBE_VERSION}
        kube-controller-manager:${KUBE_VERSION}
        kube-apiserver:${KUBE_VERSION}
        pause:${KUBE_PAUSE_VERSION}
        etcd:${ETCD_VERSION}
        coredns:${CORE_DNS_VERSION})
        
        for imageName in ${images[@]} ; do
          docker pull $ALIYUN_URL/$imageName
          docker tag  $ALIYUN_URL/$imageName $GCR_URL/$imageName
          docker rmi $ALIYUN_URL/$imageName
        done
        
    (2), run the script and check the images: [sh ./kubeadm.sh], [docker images]

  3), push these images to your own Alibaba Cloud registry (optional, depending on your situation): [docker login --username=xxx registry.cn-hangzhou.aliyuncs.com]. The push script is as follows
    
    #!/bin/bash
    
    set -e
    
    KUBE_VERSION=v1.14.0
    KUBE_PAUSE_VERSION=3.1
    ETCD_VERSION=3.3.10
    CORE_DNS_VERSION=1.3.1
    GCR_URL=k8s.gcr.io
    ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/tiger2019
    images=(kube-proxy:${KUBE_VERSION}
    kube-scheduler:${KUBE_VERSION}
    kube-controller-manager:${KUBE_VERSION}
    kube-apiserver:${KUBE_VERSION}
    pause:${KUBE_PAUSE_VERSION}
    etcd:${ETCD_VERSION}
    coredns:${CORE_DNS_VERSION})
    for imageName in ${images[@]}; do
        docker tag $GCR_URL/$imageName $ALIYUN_URL/$imageName
        docker push $ALIYUN_URL/$imageName
        docker rmi $ALIYUN_URL/$imageName
    done

    Save it as kubeadm-push-aliyun.sh and run it: [sh ./kubeadm-push-aliyun.sh]

9. kubeadm init initializes the master

  1), the kubeadm init process (a preview sketch follows this list)
  
    01 - run a series of preflight checks to determine whether this machine can run kubernetes
    
    02 - generate the various certificates kubernetes needs to serve clients, under the directory
    /etc/kubernetes/pki/*
    
    03 - generate the configuration files the other components need to access the kube-apiserver
        ls /etc/kubernetes/
        admin.conf controller-manager.conf kubelet.conf scheduler.conf
        
    04 - generate the Pod configuration files for the master components.
        ls /etc/kubernetes/manifests/*.yaml
        kube-apiserver.yaml 
        kube-controller-manager.yaml
        kube-scheduler.yaml
        
    05 - generate the etcd Pod YAML file.
        ls /etc/kubernetes/manifests/*.yaml
        kube-apiserver.yaml 
        kube-controller-manager.yaml
        kube-scheduler.yaml
        etcd.yaml
        
    06 - once these YAML files appear in the /etc/kubernetes/manifests/ directory watched by kubelet, kubelet automatically creates the pods they define, i.e., the master component containers. After the master containers start, kubeadm polls the master health-check URL localhost:6443/healthz and waits until the master components are fully up.
    
    07 - generate a bootstrap token for the cluster.
    
    08 - store important master-node information such as ca.crt in the cluster as a ConfigMap, for use later when worker nodes join.
    
    09 - finally, install the default add-ons; kube-proxy and DNS are the two add-ons kubernetes installs by default.
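
    Before running init in the next step, it can help to preview what kubeadm will do. A hedged sketch using kubeadm's config subcommands (present in v1.14; output details vary by version):
    
      # print the default init configuration kubeadm would use
      kubeadm config print init-defaults
      # list the images init needs (the same list as in step 8)
      kubeadm config images list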

  2), [kubeadm init --kubernetes-version=1.14.0 --apiserver-advertise-address=192.168.3.51 --pod-network-cidr=10.244.0.0/16]
    1. On success, the output looks like this:
  Your Kubernetes control-plane has initialized successfully!
  
  To start using your cluster, you need to run the following as a regular user:
  
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  
  You should now deploy a pod network to the cluster.
  Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/
  
  Then you can join any number of worker nodes by running the following on each as root:
  
  kubeadm join 192.168.3.51:6443 --token fvxofu.fxdgrb9f6l5mvgmd \
      --discovery-token-ca-cert-hash sha256:db32c745032468fada5f50c218246951e098e3b09ef46696099a732d93cb92d5 

    
    2. Execute according to the log prompt:
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
      
    3. Check the pods; wait a while and you will find that components like etcd, controller-manager, and scheduler have been installed as pods.
    
    4. Check the pods in kube-system: [kubectl get pods -n kube-system -w]. Note: coredns will not start until a network plug-in is installed.
      View all pods: [kubectl get pods --all-namespaces]

    5. Health check: [curl -k https://localhost:6443/healthz]
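
    Besides the raw healthz endpoint, component health can also be checked through kubectl; componentstatuses still exists in v1.14:
    
      # summarize scheduler / controller-manager / etcd health
      kubectl get componentstatuses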

10. Deploy and install the calico network plug-in [https://kubernetes.io/docs/concepts/cluster-administration/addons/]

  1), calico network plug-in: [https://docs.projectcalico.org/v3.9/getting-started/kubernetes/]; run this on the master node as well
  
  2), [kubectl apply -f https://docs.projectcalico.org/v3.9/manifests/calico.yaml]
  
        Optional steps (see the sketch after this list):
      [wget https://docs.projectcalico.org/v3.9/manifests/calico.yaml]
      View the images used: [cat calico.yaml | grep image]
      

  3), confirm calico installed successfully: [kubectl get pods --all-namespaces], view all pods
  
  4), query the nodes from the master node: [kubectl get node]
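
  If the calico images are also slow to pull, a common trick after the optional wget above is to point the manifest at a mirror registry before applying it. A hedged sketch; registry.example.com is a placeholder, and the sed pattern assumes the manifest references images as calico/... (confirm with grep first):
  
    # check which images the manifest references
    grep 'image:' calico.yaml
    # rewrite them to a mirror registry (placeholder), then apply the local file
    sed -i 's#image: calico/#image: registry.example.com/calico/#g' calico.yaml
    kubectl apply -f calico.yaml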

11. kubeadm join: on worker01 and worker02, execute the join command saved earlier, as follows
  [kubeadm join 192.168.3.51:6443 --token 9s9lxo.zeijg5g35x1aex7m \
    --discovery-token-ca-cert-hash sha256:e7916a19e5c055acdf8f26aff67fb953dedf10d7258]
  
  If joining a node fails, or you want to join it again, first run [kubeadm reset] on that worker node.
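
  If the original join command was lost, or the token has expired (bootstrap tokens are valid for 24 hours by default), generate a fresh one on the master:
  
    # prints a complete, ready-to-run kubeadm join command with a new token
    kubeadm token create --print-join-command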
  
12. Check the master node again: [kubectl get nodes]; you should now see the additional node information

13. Try out a Pod: define a pod YAML file, e.g. pod_nginx_rs.yaml
cat > pod_nginx_rs.yaml <<EOF
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx
  labels:
    tier: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      name: nginx
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF

14. Create the pods from the pod_nginx_rs.yaml file: [kubectl apply -f pod_nginx_rs.yaml]

15. View the pods:
  [kubectl get pods]
  [kubectl get pods -o wide], you can go to the corresponding node to check and verify: [docker ps | grep nginx]
  [kubectl describe pod nginx]
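
  To exercise the ReplicaSet a bit further, it can be scaled and the nginx pods curled directly; the pod IP below is just an example, use one shown by kubectl get pods -o wide:
  
    # scale from 3 to 5 replicas and watch the new pods appear
    kubectl scale rs nginx --replicas=5
    kubectl get pods -o wide
    # from any cluster node, hit one of the pod IPs listed above
    curl 10.244.1.2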


  
  
 
