Deploying k8s with kubeadm (Part 2: k8s cluster deployment)

1. Prerequisite: enable ipvs mode for kube-proxy
By default, a cluster deployed with kubeadm runs kube-proxy in iptables mode. kube-proxy mainly handles the scheduling relationship between svc (service) and pod, and switching it to ipvs (lvs) scheduling greatly improves access efficiency, so in practice this step is essential. Note that on kernels newer than 4.19 the nf_conntrack_ipv4 module was removed; Kubernetes officially recommends using nf_conntrack in its place, otherwise you will get an error that the nf_conntrack_ipv4 module cannot be found.

modprobe br_netfilter   # load the netfilter module
yum install -y ipset ipvsadm   # install the ipvs tools
# Write a boot script; it loads the module dependencies of lvs. Note that these are kernel module dependencies, not the dependencies contained in the rpm packages.
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4   # use lsmod to check whether these modules are loaded

[root@k8smaster yum]# modprobe br_netfilter
[root@k8smaster yum]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack
> EOF

[root@k8smaster yum]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@k8smaster yum]# bash /etc/sysconfig/modules/ipvs.modules 
[root@k8smaster yum]# lsmod |grep -e ip_vs -e nf_conntrack_ipv4
ip_vs_sh               16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs                 147456  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack_ipv4      20480  0 
nf_defrag_ipv4         16384  1 nf_conntrack_ipv4
nf_conntrack          114688  3 ip_vs,xt_conntrack,nf_conntrack_ipv4
libcrc32c              16384  2 xfs,ip_vs
[root@k8smaster yum]#
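
For reference, a minimal sketch of making the module script tolerate both old and new kernels, assuming the 4.19 behaviour described above: try nf_conntrack_ipv4 first and fall back to nf_conntrack if the kernel no longer provides it.

# sketch (assumption): load whichever conntrack module this kernel still ships
modprobe -- nf_conntrack_ipv4 2>/dev/null || modprobe -- nf_conntrack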

2. Install Docker
# install the dependencies
yum install yum-utils device-mapper-persistent-data lvm2 -y
# (to remove these dependencies later, if needed)
yum remove yum-utils lvm2 -y
yum remove device-mapper-persistent-data -y
yum remove lvm2 -y

# Add the Aliyun docker-ce yum repository
wget -P /etc/yum.repos.d/ http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install -y docker-ce   # install docker
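
If you need a specific Docker release rather than the latest (the node-join log in section 7 warns that 19.03.5 is newer than the last validated version, 18.09), one option, shown only as a sketch, is to list the versions available in the repository and install one by its full package name; the version string below is a placeholder and depends on what the repo offers:

yum list docker-ce --showduplicates | sort -r        # list the docker-ce versions available in the repo
yum install -y docker-ce-<VERSION_STRING>            # <VERSION_STRING> is a placeholder taken from the list above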

# Create the /etc/docker directory
[ ! -d /etc/docker ] && mkdir /etc/docker

# Configure the daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF

exec-opts sets the default cgroup driver to systemd. CentOS ships with two cgroup managers, cgroupfs and systemd; for unified management we hand everything over to systemd.
log-driver stores container logs as JSON files.
log-opts caps each log file at 100MB. This way we can later find the log of a given container under /var/log/containers/ and search for the corresponding information with EFK.

# Restart the docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
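
To confirm that the daemon.json above took effect, a quick verification sketch is to check the drivers reported by docker info (field names may vary slightly between Docker versions):

docker info | grep -i -E 'cgroup driver|logging driver'   # expect "Cgroup Driver: systemd" and "Logging Driver: json-file"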

3. Install kubeadm (master and node)
Install kubelet, kubeadm, and kubectl. kubelet runs on every node of the cluster and is responsible for starting pods and containers. kubeadm is used to initialize the cluster. kubectl is the Kubernetes command-line tool; with it you can deploy and manage applications, view all kinds of resources, and create, delete, and update the various components.

# Add the Aliyun kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# by default yum would install the latest version; here we pin 1.15.1
yum install -y kubeadm-1.15.1 kubelet-1.15.1 kubectl-1.15.1
systemctl enable kubelet && systemctl start kubelet
Because kubelet has to talk to the container runtime interface to start our containers, and after kubeadm finishes installing, all k8s components exist as pods, that is, they run as containers underneath, kubelet must be enabled to start on boot; otherwise the k8s cluster will not come up after a reboot.
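
As a quick sanity check, a sketch of verifying that the pinned versions were installed and that kubelet is enabled on boot (kubelet restarting in a loop at this stage is normal, since the cluster has not been initialized yet):

rpm -qa | grep -E 'kubelet|kubeadm|kubectl'   # all three packages should show version 1.15.1
systemctl is-enabled kubelet                  # should print "enabled"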

 

4. Enable kubectl command auto-completion
# install and configure bash-completion
yum install -y bash-completion
echo 'source /usr/share/bash-completion/bash_completion' >> /etc/profile
source /etc/profile
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc

5. Initialize the master
kubeadm config print init-defaults prints the default configuration used for cluster initialization.
Create the default kubeadm-config.yaml file with the following command: # kubernetes-version must match the kubelet and kubectl versions installed earlier
kubeadm config print init-defaults > kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.23.100  # the master's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8smaster
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: 192.168.23.100:5000  # local private registry
kind: ClusterConfiguration
kubernetesVersion: v1.15.1  # k8s version to install
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # declare the pod subnet [note: this must be added]. By default we will install the flannel network plugin to provide the overlay network, and this is flannel's default pod subnet; if the subnets are inconsistent we would have to go into the pods later and modify it
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---  # switch the default kube-proxy scheduling mode to ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
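
After the cluster has been initialized (later in this section), whether kube-proxy really runs in ipvs mode can be verified, for example, by listing the ipvs virtual servers or grepping the kube-proxy logs. This is only a verification sketch; the pod name is a placeholder that must be taken from your own cluster:

ipvsadm -Ln                                                        # the cluster/service IPs should show up as ipvs virtual servers
kubectl -n kube-system logs <kube-proxy-pod-name> | grep -i ipvs   # <kube-proxy-pod-name> is a placeholder; look for a message like "Using ipvs Proxier"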

 

Structure of kubeadm-config.yaml:
InitConfiguration: defines the initialization configuration, such as the bootstrap token and the apiserver address
ClusterConfiguration: defines the configuration of the master components such as apiserver, etcd, network, scheduler, and controller-manager
KubeletConfiguration: defines the configuration of the kubelet component
KubeProxyConfiguration: defines the configuration of the kube-proxy component
As you can see, the default kubeadm-config.yaml file only contains the InitConfiguration and ClusterConfiguration parts. Example files for the other two parts can be generated with the following commands:

# generate an example KubeletConfiguration file
kubeadm config print init-defaults --component-configs KubeletConfiguration

# generate an example KubeProxyConfiguration file
kubeadm config print init-defaults --component-configs KubeProxyConfiguration
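
Before running kubeadm init, the required images can also be pulled in advance against the same config file (the init log below suggests the same thing); a sketch:

kubeadm config images list --config kubeadm-config.yaml   # show which images will be used
kubeadm config images pull --config kubeadm-config.yaml   # pre-pull them from the imageRepository configured above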
 

# Initialize with the specified yaml file and automatically distribute the certificates (supported since 1.13); all output is written to kubeadm-init.log
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
--experimental-upload-certs has been deprecated; the official recommendation is to use --upload-certs instead. Official announcement: https://v1-15.docs.kubernetes.io/docs/setup/release/notes/

[init] Using Kubernetes version: v1.15.1    # the beginning of the install log tells us the kubernetes version
[preflight] Running pre-flight checks   # check the current runtime environment
[preflight] Pulling images required for setting up a Kubernetes cluster  # download the images needed by the k8s cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'  # start pulling the images
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"  # the kubelet environment variables are stored in /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"  # the kubelet configuration file is stored in /var/lib/kubelet/config.yaml
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"  # all certificates used by k8s are kept under /etc/kubernetes/pki; k8s is a C/S architecture developed over the HTTP protocol, and for security all components communicate over https with two-way (mutual) authentication, so k8s needs a large number of certificates and private keys
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.23.100 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8smaster localhost] and IPs [192.168.23.100 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.23.100]  # configure DNS and the current default domain names [the default names of the svc (service)]
[certs] Generating "apiserver-kubelet-client" certificate and key  # generate the keys of the k8s components
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"  # the kubeconfig files of the k8s components are generated under /etc/kubernetes
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.006263 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
7daa5684ae8ff1835af697c3b3bd017c471adcf8bb7b28eee7e521b03694ef8c
[mark-control-plane] Marking the node k8smaster as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8smaster as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles  # RBAC authorization
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!  # initialization succeeded

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.23.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:dda2582a61959ec547f8156288a32fe2b1d04febeca03476ebd1b3a754244588 


Alternatively, initialize directly from the command line: kubeadm init --kubernetes-version=1.15.1  --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.23.100 --ignore-preflight-errors=NumCPU --image-repository=192.168.23.100:5000
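
The token in kubeadm-init.log has a 24h TTL (see the ttl field in kubeadm-config.yaml above), so if a node is joined later the join command can be regenerated on the master; a sketch:

kubeadm token create --print-join-command   # prints a fresh "kubeadm join ..." line with a new token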

 

Get the health status of the components
[root@k8smaster ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@k8smaster ~]#

View node information
[root@k8smaster ~]# kubectl get node
NAME        STATUS     ROLES    AGE   VERSION
k8smaster   NotReady   master   10h   v1.15.1
[root@k8smaster ~]#
The status here is NotReady because no network plugin has been installed yet, such as flannel (https://github.com/coreos/flannel). You can look at the flannel project on github and run the commands below to install flannel.


Run the following command to get the status of all pods currently running on the system; specify the kube-system namespace, which holds the system-level pods:
[root@k8smaster ~]# kubectl get pods -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-75b6d67b6d-9zznq            0/1     Pending   0          11h
coredns-75b6d67b6d-r2hkz            0/1     Pending   0          11h
etcd-k8smaster                      1/1     Running   0          10h
kube-apiserver-k8smaster            1/1     Running   0          10h
kube-controller-manager-k8smaster   1/1     Running   0          10h
kube-proxy-5nrvf                    1/1     Running   0          11h
kube-scheduler-k8smaster            1/1     Running   0          10h
[root@k8smaster ~]#

Run the following command to get the namespaces of the current system:
[root@k8smaster ~]# kubectl get ns
NAME              STATUS   AGE
default           Active   11h
kube-node-lease   Active   11h
kube-public       Active   11h
kube-system       Active   11h
[root@k8smaster ~]#

6. Install the flannel network plugin

(1) Download the flannel yaml file
[root@k8smaster ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
After downloading, modify the images to local ones (download from the private registry: change quay.io to 192.168.23.100:5000).
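
One way to do the image replacement in bulk, assuming the private registry mirrors the flannel image under the same path, is a sed over the downloaded file (a sketch; check the result before applying it):

sed -i 's#quay.io#192.168.23.100:5000#g' kube-flannel.yml   # point the image references at the private registry
grep image: kube-flannel.yml                                # verify the replacement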

(2) Create flannel
kubectl create -f kube-flannel.yml
Check that flannel has been deployed successfully; it lives in the default system-components namespace [kube-system]. You can also see the flannel interface with ip addr.
[root@k8smaster test]# kubectl get pod -n kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-75b6d67b6d-9hmmd            1/1     Running   0          131m
coredns-75b6d67b6d-rf2q5            1/1     Running   0          131m
etcd-k8smaster                      1/1     Running   0          130m
kube-apiserver-k8smaster            1/1     Running   0          130m
kube-controller-manager-k8smaster   1/1     Running   0          130m
kube-flannel-ds-amd64-kvffl         1/1     Running   0          101m
kube-flannel-ds-amd64-trjfx         1/1     Running   0          105m
kube-proxy-5zkhj                    1/1     Running   0          131m
kube-proxy-h2r8g                    1/1     Running   0          101m
kube-scheduler-k8smaster            1/1     Running   0          130m
[root@k8smaster test]#

[root@k8smaster test]# ip addr
8: flannel.1: <BROADCAST,MULTICAST> mtu 1450 qdisc noqueue state DOWN group default 
    link/ether 06:18:40:93:ec:ff brd ff:ff:ff:ff:ff:ff


[root@k8smaster ~]# kubectl get node  # node status is Ready
NAME        STATUS   ROLES    AGE    VERSION
k8smaster   Ready    master   134m   v1.15.1

7. Join the worker nodes to the k8s master
Run the following command on the node machines (it can be looked up in kubeadm-init.log):
kubeadm join 192.168.23.100:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:dda2582a61959ec547f8156288a32fe2b1d04febeca03476ebd1b3a754244588

[root@k8snode01 log]# kubeadm join 192.168.23.100:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:dda2582a61959ec547f8156288a32fe2b1d04febeca03476ebd1b3a754244588
[preflight] Running pre-flight checks
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.5. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8snode01 log]#
[root@k8snode01 log]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
192.168.23.100:5000/kube-proxy       v1.15.1             89a062da739d        6 months ago        82.4MB
192.168.23.100:5000/coreos/flannel   v0.11.0-amd64       ff281650a721        12 months ago       52.5MB
192.168.23.100:5000/pause            3.1                 da86e6ba6ca1        2 years ago         742kB
[root@k8snode01 log]# docker ps
CONTAINER ID        IMAGE                            COMMAND                  CREATED              STATUS              PORTS               NAMES
ba9d285b313f        ff281650a721                     "/opt/bin/flanneld -…"   About a minute ago   Up About a minute                       k8s_kube-flannel_kube-flannel-ds-amd64-kvffl_kube-system_f7f3aa12-fd16-41fa-a577-559156d545d0_0
677fe835f591        192.168.23.100:5000/kube-proxy   "/usr/local/bin/kube…"   About a minute ago   Up About a minute                       k8s_kube-proxy_kube-proxy-h2r8g_kube-system_a13b5efa-3e14-40e2-b109-7f067ba6ad82_0
357321f007c9        192.168.23.100:5000/pause:3.1    "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-proxy-h2r8g_kube-system_a13b5efa-3e14-40e2-b109-7f067ba6ad82_0
01ab31239bfd        192.168.23.100:5000/pause:3.1    "/pause"                 About a minute ago   Up About a minute                       k8s_POD_kube-flannel-ds-amd64-kvffl_kube-system_f7f3aa12-fd16-41fa-a577-559156d545d0_0
[root@k8snode01 log]#


[root@k8smaster ~]# kubectl get node  # the node has joined the cluster
NAME        STATUS   ROLES    AGE    VERSION
k8smaster   Ready    master   134m   v1.15.1
k8snode01   Ready    <none>   103m   v1.15.1
[root@k8smaster ~]# 

8. Installation problems
(1) Image pull problems
The images are pulled from quay.io and gcr.io, which are blocked when accessed from domestic networks. They can be replaced with quay-mirror.qiniu.com and registry.aliyuncs.com, and then re-tagged with docker tag to restore the original names.
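
As an illustration of the re-tagging approach, a sketch using the kube-proxy image as an example (the mirror path is an assumption based on Aliyun's google_containers mirror; the image names and tags must match what your cluster actually needs):

docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1   # pull from a reachable mirror
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1   # re-tag to the name kubeadm expects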

(2) A /var/lib/kubelet/config.yaml not found error can be ignored; the file is generated automatically when kubeadm init runs.
The following error appears in the /var/log/messages log:
Feb 11 05:17:44 k8smaster kubelet: F0211 05:17:44.750462 1547 server.go:198] failed to load kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory

(3) Installation failure caused by giving the virtual machine too little memory and cpu
The following error appears in the /var/log/messages log:
Feb 11 05:24:44 k8smaster kubelet: E0211 05:24:44.762078 2876 eviction_manager.go:247] eviction manager: failed to get summary stats: failed to get node info: node "k8smaster" not found
Not enough memory; at least 2G of memory is recommended.
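
A quick way to check whether the VM meets the minimum before running kubeadm init (kubeadm's preflight checks expect at least 2 CPUs and roughly 2GB of RAM on the master); a sketch:

nproc      # number of CPUs; the master needs at least 2 (or use --ignore-preflight-errors=NumCPU as shown above)
free -h    # total memory; at least 2G is recommended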
 
