2. A Detailed Introduction to K8S: Deployment Methods

Kubernetes (K8S) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. In modern cloud environments, K8S has become the standard tool for managing and deploying containerized applications. This article introduces the deployment methods of K8S in detail: single-node deployment, multi-node deployment, and high-availability deployment.

1. Single-node deployment

Single-node deployment refers to installing and configuring K8S on one server, usually for testing or development purposes. Here are the steps for a single node deployment:

Install Docker
This guide uses Docker as the container runtime (kubeadm also supports other runtimes), so Docker needs to be installed first. Docker can be installed with the following commands:

apt-get update
apt-get install docker.io
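One common pitfall worth noting: kubeadm expects the kubelet and the container runtime to agree on a cgroup driver, and with Docker the systemd driver is generally recommended. As a sketch, this can be set in /etc/docker/daemon.json (restart Docker afterwards with `systemctl restart docker`):

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```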

Install K8S components
K8S consists of multiple components, including kubelet, kube-proxy, kube-apiserver, kube-scheduler, and kube-controller-manager. The kubeadm tool bootstraps the control-plane components, so only kubelet, kubeadm, and kubectl need to be installed directly. The K8S components can be installed with the following commands:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Initialize K8S
In a single-node deployment, the same machine acts as both the Master (control plane) and the Node. K8S can be initialized with the following command:

kubeadm init --pod-network-cidr=10.244.0.0/16

Here, the --pod-network-cidr parameter specifies the IP address range assigned to Pods in the K8S cluster.
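A common mistake is choosing a Pod CIDR that overlaps the host network. The relationship can be sanity-checked offline with Python's standard ipaddress module (the 192.168.1.0/24 host subnet below is a hypothetical example):

```python
# Sanity-check the Pod network CIDR passed to kubeadm.
# The Pod CIDR must not overlap the node/host network, or routing breaks.
import ipaddress

pod_cidr = ipaddress.ip_network("10.244.0.0/16")   # value given to --pod-network-cidr
node_net = ipaddress.ip_network("192.168.1.0/24")  # assumed host subnet (example)

# A Pod IP allocated by the network plugin should fall inside the Pod CIDR.
print(ipaddress.ip_address("10.244.1.5") in pod_cidr)  # True

# The two ranges must be disjoint.
print(pod_cidr.overlaps(node_net))  # False
```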

Configure K8S
After initialization, K8S generates an admin kubeconfig file. kubectl can be configured to use it with the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install the network plugin
K8S relies on a network (CNI) plugin to manage network communication between Pods. Plugins such as Calico, Flannel, and Weave Net can be used. Here Flannel is used:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
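For reference, that manifest contains a ConfigMap whose net-conf.json defines Flannel's Pod network; its Network field must match the --pod-network-cidr value passed to kubeadm init. An abridged excerpt (labels and other fields omitted for brevity):

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
data:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
```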

Deploy an application
In K8S, applications run in the form of Pods. On a single-node cluster, the Master carries a NoSchedule taint by default, so first allow Pods to be scheduled on it:

kubectl taint nodes --all node-role.kubernetes.io/master-

A Pod can then be created with the following command:

kubectl run nginx --image=nginx --port=80
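Equivalently, the Pod can be declared in a manifest and created with `kubectl apply -f pod.yaml`. A sketch of the manifest, reproducing the run=nginx label that kubectl run applies:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```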

Expose the service
A Pod is reachable only through its Cluster IP from inside the cluster. To access the Pod from outside, it must be exposed as a Service. The Service can be created with the following command:

kubectl expose pod nginx --type=NodePort --port=80
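The equivalent declarative Service manifest would look roughly like this (it selects the Pod by its run=nginx label; the external NodePort is allocated automatically by K8S unless specified):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
```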

2. Multi-node deployment

Multi-node deployment refers to installing and configuring K8S on multiple servers, which can achieve higher availability and scalability. Here are the steps for a multi-node deployment:

Install Docker
Same as in the single-node deployment.

Install K8S components
Same as in the single-node deployment.

Initialize K8S
In a multi-node deployment, one server acts as the Master node and the other servers act as Node (worker) nodes. K8S can be initialized on the Master node with the following command:

kubeadm init --pod-network-cidr=10.244.0.0/16

After initialization, the following information will be displayed:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join <ip_address>:<port> --token <token> --discovery-token-ca-cert-hash <hash>

Here, <ip_address> and <port> are the IP address and port of the Master node's API server, and <token> and <hash> are the credentials used to join a Node to the K8S cluster.

Configuring K8S
is the same as single-node deployment.

Join the Node nodes
Install Docker and the K8S components on the other servers, then join each of them to the K8S cluster as a Node with the following command:

kubeadm join <ip_address>:<port> --token <token> --discovery-token-ca-cert-hash <hash>

Here, <ip_address> and <port> are the IP address and port of the Master node's API server, and <token> and <hash> are the credentials printed by kubeadm init. If the token has expired, a fresh join command can be generated on the Master with kubeadm token create --print-join-command.

Install the network plugin
Same as in the single-node deployment.

Deploy an application
Same as in the single-node deployment.

Expose the service
Same as in the single-node deployment.

3. High availability deployment

High-availability deployment refers to running multiple Master nodes on multiple servers to improve the availability and fault tolerance of the K8S control plane. The following are the steps for a high-availability deployment:

Install Docker
Same as in the single-node deployment.

Install K8S components
Same as in the single-node deployment.

Initialize K8S
In a high-availability deployment, multiple Master nodes are installed behind a load balancer. K8S can be initialized on the first server with the following command:

kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs --pod-network-cidr=10.244.0.0/16

Here, LOAD_BALANCER_DNS and LOAD_BALANCER_PORT are the DNS name and port of the load balancer in front of the API servers, the --upload-certs parameter uploads the control-plane certificates to a Secret in the cluster so that additional Master nodes can download them, and the --pod-network-cidr parameter specifies the IP address range of Pods in the K8S cluster.
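The load balancer itself can be anything that forwards TCP traffic to port 6443 on the Master nodes. Purely as an illustration, a minimal HAProxy configuration might look like this (the server names and 192.168.1.x addresses are hypothetical):

```haproxy
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver-backends

backend kube-apiserver-backends
    mode tcp
    balance roundrobin
    option tcp-check
    server master1 192.168.1.11:6443 check
    server master2 192.168.1.12:6443 check
    server master3 192.168.1.13:6443 check
```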

After initialization, the following information will be displayed:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join <ip_address>:<port> --token <token> --discovery-token-ca-cert-hash <hash>

Here, <ip_address> and <port> are the IP address and port of the first Master node (or of the load balancer), and <token> and <hash> are the credentials used to join nodes to the K8S cluster. Because --upload-certs was used, the output also prints a separate join command, carrying the --control-plane and --certificate-key flags, for adding further Master nodes.

Configuring K8S
is the same as single-node deployment.

Join the other Master nodes
Install Docker and the K8S components on the other servers, then join each of them to the K8S cluster as an additional Master node with the following command:

kubeadm join LOAD_BALANCER_DNS:LOAD_BALANCER_PORT --token <token> --discovery-token-ca-cert-hash <hash> --control-plane --certificate-key <certificate_key>

Here, LOAD_BALANCER_DNS and LOAD_BALANCER_PORT are the DNS name and port of the load balancer, <token> and <hash> are the credentials used to join the K8S cluster, and <certificate_key> is the key generated when the certificates were uploaded with --upload-certs.

Install the network plugin
Same as in the single-node deployment.

Deploy an application
Same as in the single-node deployment.

Expose the service
Same as in the single-node deployment.

Summary

K8S is a powerful container orchestration platform that can automatically deploy, scale, and manage containerized applications. This article introduced the deployment methods of K8S in detail: single-node deployment, multi-node deployment, and high-availability deployment. Whether in a test or production environment, choosing the appropriate deployment method is important.


Origin blog.csdn.net/qq_27016363/article/details/130011816