Deploying Kubernetes with a Flannel network on Ubuntu 18.04 Server

Prepare the servers

Install Ubuntu 18.04 Server on ESXi 6.5. Three hosts are used, with the hostnames kube01, kube02 and kube03, each configured with 2 cores / 4 GB RAM / 160 GB disk. Kubernetes requires a dual-core CPU or better.

Because ESXi 6.5 has a bug that makes Ubuntu virtual machines hang when accessed over remote SSH, apply the workaround from https://kb.vmware.com/s/article/2151480: SSH into the ESXi host, shut down the virtual machine, locate the virtual machine's directory under /vmfs/volumes/584f7xxx-7xx749b4-3461-x0.../, find the .vmx file in it, and append the following line at the end

vmxnet3.rev.30 = FALSE
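If you prefer to make this change from the ESXi shell, the outline below is one way to do it (a sketch only; the datastore, VM name and VM ID are placeholders, and the virtual machine must be powered off first):

# On the ESXi host, over SSH, with the VM powered off
vim-cmd vmsvc/getallvms                                   # list VM IDs and their .vmx paths
echo 'vmxnet3.rev.30 = FALSE' >> /vmfs/volumes/<datastore>/<vm-name>/<vm-name>.vmx
vim-cmd vmsvc/reload <vmid>                               # make ESXi re-read the .vmx file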

 

Update the servers

Switch Ubuntu's apt to a domestic (China) mirror

kube02:~$ more /etc/apt/sources.list
deb https://mirrors.ustc.edu.cn/ubuntu bionic main
deb https://mirrors.ustc.edu.cn/ubuntu bionic-security main
deb https://mirrors.ustc.edu.cn/ubuntu bionic-updates main
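If you prefer not to edit sources.list by hand, the same entries can be written in one step (a sketch; back up the original file first):

sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
sudo tee /etc/apt/sources.list > /dev/null <<'EOF'
deb https://mirrors.ustc.edu.cn/ubuntu bionic main
deb https://mirrors.ustc.edu.cn/ubuntu bionic-security main
deb https://mirrors.ustc.edu.cn/ubuntu bionic-updates main
EOF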

sudo apt update
sudo apt upgrade

  

Modify the hostname

Modify cloud.cfg

sudo vi /etc/cloud/cloud.cfg
# change
preserve_hostname: false
# to
preserve_hostname: true

Otherwise the hostname set with hostnamectl set-hostname is reverted after a reboot.
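The same edit can be made non-interactively with sed (a minimal sketch; check the file afterwards):

sudo sed -i 's/^preserve_hostname: false/preserve_hostname: true/' /etc/cloud/cloud.cfg
grep preserve_hostname /etc/cloud/cloud.cfg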

Set the hostname

sudo hostnamectl set-hostname kube01

 

Disable the swap partition

1. Turn off swap immediately

sudo swapoff -a 

2. Disable swap in fstab (a sed one-liner is sketched after step 3)

sudo vi /etc/fstab

# Comment out the line containing swap

3. Disable the swap unit in systemd; if this step is skipped, the swap partition reappears after a reboot

# The swap may also be on sdb, sdc, etc. depending on your machine's disks; check which partition is the swap one. Here it is assumed to be /dev/sda2
sudo fdisk -lu /dev/sda
# Based on the result of the previous step, run the following command
sudo systemctl mask dev-sda2.swap
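For step 2, a one-line alternative to editing fstab by hand is a sed substitution (a sketch; it comments out any line containing a swap entry, so back up the file first and verify the result):

sudo cp /etc/fstab /etc/fstab.bak
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab
# After the next reboot, "free -h" should report 0B of swap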

 

Install and configure Docker

# Prerequisite packages
sudo apt install apt-transport-https ca-certificates curl software-properties-common
# Install the GPG key; note the sudo after the pipe
curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
# Add the apt source for the current release
lsb_release -cs
sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
# Install Docker
sudo apt install docker-ce
# Check the version; the version installed here is 19.03.5
docker version
# Add the current user to the docker group; takes effect after logging in again, check with the id command
sudo usermod -aG docker milton
# Configure docker: add a registry mirror and other settings
sudo vi /etc/docker/daemon.json

The contents of daemon.json:

{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2"
}

Change the cgroup driver to systemd

sudo vi /etc/containerd/config.toml
# Below disabled_plugins = ["cri"], add the following line
plugins.cri.systemd_cgroup = true
# Restart the docker service, then check that Cgroup Driver and Registry Mirrors are correct
sudo systemctl restart docker 
docker info
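A quick way to confirm both settings took effect is to filter the docker info output (a sketch; the field names match Docker 19.03):

docker info | grep -A1 -E 'Cgroup Driver|Registry Mirrors'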

 

Install Kubernetes

# Install the GPG key; note the sudo after the pipe
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Add the apt source; note that xenial is used, not bionic
cd /etc/apt/sources.list.d/ 
sudo vi kubernetes.list

The contents of kubernetes.list:

deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
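Equivalently, the file can be written with a single command (a sketch):

echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list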

Update and install

sudo apt update
sudo apt install kubelet kubeadm kubectl

kubeadm: the command to bootstrap the cluster.
kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
kubectl: the command line util to talk to your cluster.
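A quick sanity check of the installed versions (a sketch; the flags shown are valid for the 1.17 tools):

kubeadm version -o short
kubectl version --client --short
kubelet --version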

 

Pull the flannel container image

# Check the version at
https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
# Look at the containers.image value in that file; for this installation it is quay.io/coreos/flannel:v0.11.0-amd64, pull it directly
docker pull quay.io/coreos/flannel:v0.11.0-amd64

If you are configuring a worker node, you can stop here; if you are configuring the master node, continue with the steps below.

 

Pull the k8s container images that cannot be downloaded directly

List the required images; you get a set of results starting with k8s.gcr.io/

kubeadm config images list

Write a script that pulls each image from registry.aliyuncs.com/google_containers/ and re-tags it to the original k8s.gcr.io name. The script is shown below; adjust the image versions according to the list obtained in the previous step, then run it.

#!/bin/bash
# The image names below are the output of "kubeadm config images list" with the "k8s.gcr.io/" prefix removed
images=(
    kube-apiserver:v1.17.0
    kube-controller-manager:v1.17.0
    kube-scheduler:v1.17.0
    kube-proxy:v1.17.0
    pause:3.1
    etcd:3.4.3-0
    coredns:1.6.5
)

for imageName in ${images[@]}; do
    docker pull registry.aliyuncs.com/google_containers/$imageName
    docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.aliyuncs.com/google_containers/$imageName
done
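Save the script under any name, for example pull-images.sh (a hypothetical name), make it executable, run it and verify the re-tagged images:

chmod +x pull-images.sh
./pull-images.sh
# The images should now be listed under their k8s.gcr.io names
docker images | grep k8s.gcr.io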

 

Initialize the master host with kubeadm init

After the above preparation is done, you can initialize the master host.

sudo kubeadm init --apiserver-advertise-address=0.0.0.0 --pod-network-cidr=172.16.0.0/16 --service-cidr=10.1.0.0/16

Parameter descriptions

  • --apiserver-advertise-address: the IP (network interface) on which the API server is advertised; use the current host's IP, or 0.0.0.0 to leave it unspecified
  • --pod-network-cidr: the IP range of the Pod network; kube-flannel.yml must later be configured with the same value
  • --service-cidr: the IP range of the Service network; these are virtual IPs that do not appear in the routing table, and the range must not overlap with the Pod network range above

Output

W1231 08:57:05.495224   11297 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W1231 08:57:05.495416   11297 version.go:102] falling back to the local client version: v1.17.0
W1231 08:57:05.495703   11297 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1231 08:57:05.495735   11297 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kube01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.11.129]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kube01 localhost] and IPs [192.168.11.129 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kube01 localhost] and IPs [192.168.11.129 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1231 08:57:14.315543   11297 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1231 08:57:14.318419   11297 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 37.004860 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kube01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kube01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: f3jgn2.5w8152dpifacihnj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.11.129:6443 --token f3jgn2.5w8152dpifacihnj \
    --discovery-token-ca-cert-hash sha256:cc1ae32e0924dffa587b5d94b61005ae892db289f1a59f1ef71b45a7eda65ca3 

Following the hints above, create the .kube directory, copy the config file, and change its owner, as shown below.
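That is, on the master host (the first three commands are taken from the init output above; kubectl cluster-info is just a quick connectivity check):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Verify kubectl can reach the API server
kubectl cluster-info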

Check

# View pods
kubectl get pods -n kube-system
# Output
NAME                             READY   STATUS    RESTARTS   AGE
coredns-6955765f44-7dnqv         1/1     Running   0          71m
coredns-6955765f44-pvlcp         1/1     Running   0          71m
etcd-kube01                      1/1     Running   0          71m
kube-apiserver-kube01            1/1     Running   0          71m
kube-controller-manager-kube01   1/1     Running   0          71m
kube-proxy-7c8f5                 1/1     Running   0          71m
kube-scheduler-kube01            1/1     Running   0          71m


Install Flannel

# Download kube-flannel.yml
wget https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
# Modify the Network parameter in net-conf.json so that it matches the --pod-network-cidr given to kubeadm init, here 172.16.0.0/16
vi kube-flannel.yml
# Install
kubectl apply -f kube-flannel.yml 
# Output
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
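For reference, after editing, the net-conf.json section of kube-flannel.yml should look roughly like this (a sketch; the indentation and Backend block follow the upstream file, only the Network value is changed):

  net-conf.json: |
    {
      "Network": "172.16.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }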

View flannel network information

more /run/flannel/subnet.env 
FLANNEL_NETWORK=172.16.0.0/16
FLANNEL_SUBNET=172.16.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
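The flannel VXLAN interface and the pod-network routes can also be checked on the host (a sketch; flannel.1 is the device created by the default vxlan backend):

ip addr show flannel.1
ip route | grep 172.16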

View flannel network configuration

more /etc/cni/net.d/10-flannel.conflist 
{
  "name": "cbr0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

View the pods again; the newly added flannel pod appears

kube-flannel-ds-amd64-kkxlm      1/1     Running   0          3m5s

View a pod's log

kubectl logs coredns-6955765f44-7dnqv -n kube-system
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2

View the nodes; at this point there is only the master host

kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
kube01   Ready    master   78m   v1.17.0

 

Join node hosts to the cluster

Use the join command generated earlier by kubeadm init; it needs sudo. Unlike some tutorials found online, there is no need to copy configuration files from the master host; in the actual test, running the following command directly joined the node to the cluster.

sudo kubeadm join 192.168.11.129:6443 --token f3jgn2.5w8152dpifacihnj --discovery-token-ca-cert-hash sha256:cc1ae32e0924dffa587b5d94b61005ae892db289f1a59f1ef71b45a7eda65ca3

Output

W1231 10:42:36.665020    6229 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the newly added node on the master host

kubectl get nodes
# NotReady at first
NAME     STATUS     ROLES    AGE    VERSION
kube01   Ready      master   105m   v1.17.0
kube02   NotReady   <none>   10s    v1.17.0

# Ready after a while
NAME     STATUS   ROLES    AGE    VERSION
kube01   Ready    master   107m   v1.17.0
kube02   Ready    <none>   109s   v1.17.0
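If a node is joined later and the original token has expired (bootstrap tokens are valid for 24 hours by default), a fresh join command can be printed on the master:

kubeadm token create --print-join-command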

 

References

https://kubernetes.io/docs/setup/production-environment/container-runtimes/
http://pwittrock.github.io/docs/admin/kubeadm/
https://github.com/coreos/flannel
https://www.latelee.org/kubernetes/k8s-deploy-1.17.0-detail.html
https://blog.csdn.net/liukuan73/article/details/83116271

 
