[Cloud Native] [k8s] Day 83 of Learning Operations and Maintenance from Novice to Master: Managing Kubernetes Applications with Helm

Stage 4

Time: August 18, 2023

Participants: All members of the class

Contents:

Manage Kubernetes applications based on Helm

Table of contents

1. Kubernetes deployment method

(1) minikube

(2) Binary package

(3) Kubeadm

2. Deploy K8S cluster based on kubeadm

(1) Environmental preparation

(2) Deploy kubernetes cluster

(3) Install Dashboard UI

(4) Metrics-server service deployment

(5) Introduction to Helm application package manager

(6) Helm application package manager deployment


1. Kubernetes deployment method

Kubernetes officially provides three deployment methods:

(1) minikube

        Minikube is a tool that quickly runs a single-node Kubernetes cluster locally, aimed at users trying out Kubernetes or doing day-to-day development. It is not intended for production environments.

Official documentation: Install Tools | Kubernetes

(2) Binary package

        Download the official release binary packages and manually deploy each component to form a Kubernetes cluster. This method is currently the one mainly used in enterprise production environments.

Download link:

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md#v1113

(3) Kubeadm

        Kubeadm is an official Kubernetes tool for rapidly deploying Kubernetes clusters. During cluster deployment, you initialize the master node with kubeadm init, then add the other nodes to the cluster with kubeadm join.

        1. Kubeadm can bring up a minimal viable cluster with very little configuration. Its design focus is on fast installation and bootstrapping rather than on preparing each node's environment step by step. Likewise, the various add-ons used in a Kubernetes cluster, such as the web Dashboard or Prometheus monitoring, are not kubeadm's concern. kubeadm's goal is to serve as the basis on which all deployments are built, making it easier to stand up Kubernetes clusters.

2. Kubeadm's simple, fast deployment suits the following three scenarios:

·New users can quickly build and understand Kubernetes starting from kubeadm.

·Users familiar with Kubernetes can use kubeadm to quickly build clusters and test their applications.

·Large-scale projects can use kubeadm together with other installation tools to form a more complex system.

Official documents:

Kubeadm | Kubernetes
Installing kubeadm | Kubernetes

2. Deploy K8S cluster based on kubeadm

(1) Environmental preparation

IP address         Hostname      Components
192.168.100.131    k8s-master    kubeadm, kubelet, kubectl, docker-ce
192.168.100.132    k8s-node01    kubeadm, kubelet, kubectl, docker-ce
192.168.100.133    k8s-node02    kubeadm, kubelet, kubectl, docker-ce

Note: all hosts should have at least 2 CPU cores and 2 GB of memory.

1. Host initialization configuration

Disable the firewall and SELinux on all hosts:

[root@localhost ~]# setenforce 0

[root@localhost ~]# iptables -F

[root@localhost ~]# systemctl stop firewalld

[root@localhost ~]# systemctl disable firewalld

[root@localhost ~]# systemctl stop NetworkManager

[root@localhost ~]# systemctl disable NetworkManager

[root@localhost ~]# sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config

2. Configure the hostname and bind it in /etc/hosts. Each host gets its own hostname.

[root@localhost ~]# hostname k8s-master

[root@localhost ~]# bash

[root@k8s-master ~]# cat << EOF >> /etc/hosts

192.168.100.131 k8s-master

192.168.100.132 k8s-node01

192.168.100.133 k8s-node02

EOF

[root@localhost ~]# hostname k8s-node01

[root@k8s-node01 ~]# cat /etc/hosts

[root@localhost ~]# hostname k8s-node02

[root@k8s-node02 ~]# cat /etc/hosts

3. Host configuration initialization

[root@k8s-master ~]# yum -y install vim wget net-tools lrzsz

[root@k8s-master ~]# swapoff -a

[root@k8s-master ~]# sed -i '/swap/s/^/#/' /etc/fstab

[root@k8s-master ~]# cat << EOF >> /etc/sysctl.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

EOF

[root@k8s-master ~]# modprobe br_netfilter

[root@k8s-master ~]# sysctl -p

4. Deploy docker environment

1) Deploy the Docker environment on all three hosts, since Kubernetes' container orchestration requires Docker support.

[root@k8s-master ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

2) When using YUM to install Docker, it is recommended to use Alibaba's YUM source.

[root@k8s-master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3) Clear cache

[root@k8s-master ~]# yum clean all && yum makecache fast

4) Install and start Docker

[root@k8s-master ~]# yum -y install docker-ce

[root@k8s-master ~]# systemctl start docker

[root@k8s-master ~]# systemctl enable docker

5) Image accelerator (all host configurations)

[root@k8s-master ~]# cat << END > /etc/docker/daemon.json
{
    "registry-mirrors": ["https://nyakyfun.mirror.aliyuncs.com"]
}
END

6) Restart docker

[root@k8s-master ~]# systemctl daemon-reload

[root@k8s-master ~]# systemctl restart docker

(2) Deploy kubernetes cluster

1. Component introduction

All three nodes need to install the following three components:

kubeadm: the tool that bootstraps the cluster; the components it deploys run as containers

kubectl: the command-line client for the Kubernetes API

kubelet: the agent that runs on every node and is responsible for starting containers

2. Configure Alibaba Cloud yum source

When using YUM to install Kubernetes, it is recommended to use Alibaba's YUM source.

[root@k8s-master ~]# ls /etc/yum.repos.d/

[root@k8s-master ~]# cat > /etc/yum.repos.d/kubernetes.repo
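
The repo file contents are elided above. A commonly used Aliyun definition (an assumption; the original post does not show the file) is:

[root@k8s-master ~]# cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF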

3. Install kubelet kubeadm kubectl

All host configurations

[root@k8s-master ~]# yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0

[root@k8s-master ~]# systemctl enable kubelet

[root@k8s-master ~]# kubectl version

        A freshly installed kubelet cannot be started with systemctl start kubelet; it only starts successfully after the node has been initialized as a master or joined to a cluster.

4. Configure init-config.yaml

        Kubeadm provides many configuration options. Kubeadm configuration is stored in a ConfigMap in the Kubernetes cluster, and it can also be written to a configuration file, which makes managing complex configuration items easier. The kubeadm config command writes the kubeadm configuration into a configuration file.

Work on the master node (192.168.100.131). Generate the default init-config.yaml file with the following command:

[root@k8s-master ~]# kubeadm config print init-defaults > init-config.yaml

init-config.yaml configuration

[root@k8s-master ~]# cat init-config.yaml
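
init-config.yaml is not reproduced in the original. As a sketch, the fields typically edited in the generated defaults for this topology (the pod subnet is an assumption, required later by flannel) are:

localAPIEndpoint:
  advertiseAddress: 192.168.100.131        # the master's IP instead of the 1.2.3.4 placeholder
nodeRegistration:
  name: k8s-master
---
imageRepository: registry.aliyuncs.com/google_containers   # domestic mirror instead of k8s.gcr.io
kubernetesVersion: v1.20.0
networking:
  podSubnet: 10.244.0.0/16                 # matches flannel's default network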

5. Install the master node

1) Pull the required images

[root@k8s-master ~]# kubeadm config images list --config init-config.yaml

[root@k8s-master ~]# kubeadm config images pull --config init-config.yaml

2) Install the master node

[root@k8s-master ~]# kubeadm init --config=init-config.yaml  # initialize the K8S master

3) Follow the prompts

By default, kubectl looks for its config file in the .kube directory under the home directory of the user running it. Here we copy the admin.conf generated during the [kubeconfig] step of initialization to $HOME/.kube/config:

[root@k8s-master ~]# mkdir -p $HOME/.kube

[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

        A kubeadm installation does not include a network plug-in, which means the cluster has no networking right after initialization: the node information viewed on k8s-master shows a "NotReady" status, the CoreDNS Pods cannot provide service, and so on.

6. Install the worker nodes

1) Run the join command printed at the end of the master installation

[root@k8s-node01 ~]# kubeadm join 192.168.100.131:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:78bdd0f01660f4e84355b70aa8807cf1d0d6325b0b28502b29c241563e93b4ae

[root@k8s-master ~]# kubectl get nodes

[root@k8s-node02 ~]# kubeadm join 192.168.100.131:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:78bdd0f01660f4e84355b70aa8807cf1d0d6325b0b28502b29c241563e93b4ae

Master operation:

[root@k8s-master ~]# kubectl get nodes

        As mentioned earlier, no networking was configured when k8s-master was initialized, so it cannot yet communicate with the worker nodes, which is why their status is "NotReady". But the nodes added through kubeadm join are already visible on k8s-master.

7. Install flannel

        The master node is NotReady because no network plug-in has been installed yet, so communication between the nodes and the master is not working. The most popular Kubernetes network plug-ins currently include Flannel, Calico, Canal, and Weave; here we choose Flannel.

All hosts:

Upload kube-flannel.yml to the master, and upload flannel_v0.12.0-amd64.tar and cni-plugins-linux-amd64-v0.8.6.tgz to all hosts.

[root@k8s-master ~]# docker load < flannel_v0.12.0-amd64.tar

Upload plugin:

[root@k8s-master ~]# tar xf cni-plugins-linux-amd64-v0.8.6.tgz

[root@k8s-master ~]# cp flannel /opt/cni/bin/


master host configuration:

[root@k8s-master ~]# kubectl apply -f kube-flannel.yml

[root@k8s-master ~]# kubectl get nodes

[root@k8s-master ~]# kubectl get pods -n kube-system

The nodes are now in the Ready state.

(3) Install Dashboard UI

1. Deploy Dashboard

Dashboard's GitHub repository: https://github.com/kubernetes/dashboard

The repository contains deployment files that serve as installation examples; we can fetch them and deploy directly.

[root@k8s-master ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml

        By default, this deployment file creates a separate namespace named kubernetes-dashboard and deploys the dashboard into it. The dashboard image comes from the official Docker Hub, so there is no need to modify the image address; it can be pulled directly from the official source.

2. Open port settings

        By default, the dashboard does not expose any port to the outside world. To keep things simple here, we expose its port directly with a NodePort by modifying the Service part of the definition:

Download images for all hosts

[root@k8s-master ~]# docker pull kubernetesui/dashboard:v2.0.0

[root@k8s-master ~]# docker pull kubernetesui/metrics-scraper:v1.0.4

[root@k8s-master ~]# vim recommended.yaml
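
The edit itself is not shown in the original. Given that the dashboard is accessed later at https://192.168.100.131:32443, the Service section of recommended.yaml presumably ends up like this sketch:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort                  # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32443             # added; matches the URL used in step 4
  selector:
    k8s-app: kubernetes-dashboard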

3. Permission configuration

Configure cluster-administrator privileges for the dashboard

[root@k8s-master ~]# vim recommended.yaml
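
The diff is again not shown. A common way to grant the dashboard full access is to point its existing ClusterRoleBinding at the built-in cluster-admin role (a sketch; suitable for a lab, not for production):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin             # changed from kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard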

[root@k8s-master ~]# kubectl apply -f recommended.yaml

[root@k8s-master ~]# kubectl get pods -n  kubernetes-dashboard

[root@k8s-master ~]# kubectl get pods -A  -o wide

4. Access Token configuration

Use Google Chrome to access https://192.168.100.131:32443 for testing

        The screen shown above appears, and we need to provide either a kubeconfig file or a token. In fact, when the dashboard was installed, a ServiceAccount was created by default and a token was generated for kubernetes-dashboard.

We can obtain the ServiceAccount token with the following command:

[root@k8s-master ~]# kubectl describe secret -n kubernetes-dashboard $(kubectl get secret -n kubernetes-dashboard |grep kubernetes-dashboard-token | awk '{print $1}') |grep token | awk '{print $2}'

Enter the obtained token

View the cluster overview:

View cluster roles:

View cluster namespaces:

View cluster nodes:

View cluster pods:

(4) Metrics-server service deployment

1. Download the image on the Node node

Heapster has been replaced by metrics-server, the resource metrics collection component in Kubernetes.

On all worker nodes:

[root@k8s-node01 ~]# docker pull bluersw/metrics-server-amd64:v0.3.6

[root@k8s-node01 ~]# docker tag bluersw/metrics-server-amd64:v0.3.6 k8s.gcr.io/metrics-server-amd64:v0.3.6

2. Modify Kubernetes apiserver startup parameters

Add the following configuration option to the kube-apiserver manifest. Because it is a static Pod, the apiserver restarts automatically after the file is modified.

[root@k8s-master ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
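
The exact option is not shown in the original. The flag commonly added to this manifest so that the aggregated metrics API can be routed is --enable-aggregator-routing=true (an assumption, not the author's confirmed diff):

spec:
  containers:
    - command:
        - kube-apiserver
        # ... existing flags left unchanged ...
        - --enable-aggregator-routing=true   # added for metrics-server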

3. Deploy on Master

[root@k8s-master ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml

Modify the installation script:

[root@k8s-master ~]# vim components.yaml
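
The concrete edits are not reproduced. For metrics-server v0.3.6 in a lab cluster like this one, the deployment section is usually changed along these lines (a sketch; the TLS flags are assumptions, not the author's exact diff):

      containers:
        - name: metrics-server
          image: k8s.gcr.io/metrics-server-amd64:v0.3.6    # the tag loaded on the nodes earlier
          args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-insecure-tls                       # skip kubelet cert verification (lab only)
            - --kubelet-preferred-address-types=InternalIP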

[root@k8s-master ~]# kubectl create -f components.yaml

Wait 1-2 minutes to see the results

[root@k8s-master ~]# kubectl top nodes

Go back to the dashboard interface and you can see the CPU and memory usage.

(5) Introduction to Helm application package manager

1. Why do you need Helm?

        An application deployed on Kubernetes is made up of specific resource descriptions, such as a Deployment and a Service. Each resource object is saved in its own file, or they are all written into one configuration file, and then deployed with the kubectl apply -f demo.yaml command.

        If the business system consists of only one or a few such services, this way of deploying and managing them is sufficient.

        For a complex business system, however, there will be many such resource files; a microservice-architecture application may be composed of a dozen or even dozens of services. When the application needs to be updated or rolled back, a large number of resource object files may have to be modified and maintained, and this way of organizing and managing applications becomes inadequate.

        Moreover, because there is no management and control over released application versions, maintaining and updating applications on Kubernetes faces several challenges, mainly the following:

How to manage these services as a whole

How to reuse these resource files efficiently

How to manage application-level versions, which is not supported natively

2. Introduction to Helm

        Helm is a package management tool for Kubernetes. Just like package managers on Linux, such as yum or apt-get, Helm makes it easy to deploy previously packaged YAML files to Kubernetes.

Helm has 3 important concepts:

helm: a command-line client tool, mainly used for creating, packaging, publishing, and managing Kubernetes application charts.

Chart: a directory or compressed package that describes an application; it consists of a series of files describing k8s resource objects.

Release: a deployment entity based on a chart. When a chart is run by Helm, a corresponding release is generated, and the actual running resource objects are created in k8s.

Helm features

        A package manager developed for Kubernetes. Each package is called a Chart, and a Chart is a directory (generally the directory is packaged and compressed into a single name-version.tar.gz file for easy transmission and storage).

        For application publishers, Helm can be used to package applications, manage application dependencies, manage application versions, and publish applications to software repositories.

        For users, with Helm there is no longer any need to understand Kubernetes YAML syntax or write application deployment files; the required applications can be downloaded and installed on Kubernetes directly through Helm.

        Helm provides powerful functionality for deploying, deleting, upgrading, and rolling back applications on Kubernetes.

3. Helm V3 changes

On November 13, 2019, the Helm team released the first stable version of Helm v3. The main changes in this version are as follows:

1) Architecture changes

The most obvious change is the removal of Tiller

2) Release names can be reused in different namespaces

3) Support for pushing charts to Docker image registries such as Harbor

4) Use JSONSchema to verify chart values

5) Others

Several Helm CLI commands were renamed to align better with other package managers:

helm delete was renamed to helm uninstall

helm inspect was renamed to helm show

helm fetch was renamed to helm pull

The old command names still work, however.

The helm serve command used to temporarily build Chart Repository locally has been removed.

Automatically create namespaces

        Helm 2 created the namespace automatically when a release was installed into a namespace that did not exist. Helm 3 follows the behavior of other Kubernetes objects and returns an error if the namespace does not exist.

requirements.yaml is no longer needed; dependencies are defined directly in Chart.yaml.
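
For illustration, a dependency declared directly in Chart.yaml looks like this (the chart name and version here are hypothetical):

dependencies:
  - name: mysql
    version: "1.6.9"
    repository: "https://charts.helm.sh/stable"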

(6) Helm application package manager deployment

1. Deploy Helm client tool

Helm client download address: Releases · helm/helm · GitHub

Extract the release package and move the helm binary to the /usr/bin/ directory.

[root@k8s-master ~]# wget https://get.helm.sh/helm-v3.5.2-linux-amd64.tar.gz

[root@k8s-master ~]# tar xf helm-v3.5.2-linux-amd64.tar.gz

[root@k8s-master ~]# cd linux-amd64/

[root@k8s-master linux-amd64]# ls

[root@k8s-master linux-amd64]# mv helm /usr/bin/

[root@k8s-master ~]# helm  # verify that the helm command is available

2. Helm commonly used commands

Command       Description
create        Create a chart with the given name
dependency    Manage a chart's dependencies
get           Download a release; subcommands: all, hooks, manifest, notes, values
history       Fetch release history
install       Install a chart
list          List releases
package       Package a chart directory into a chart archive
pull          Download a chart from a remote repository and optionally extract it locally, e.g. helm pull stable/mysql --untar
repo          Add, list, remove, update, and index chart repositories; subcommands: add, index, list, remove, update
rollback      Roll back a release to a previous revision
search        Search for charts by keyword; subcommands: hub, repo
show          Show chart details; subcommands: all, chart, readme, values
status        Show the status of a named release
template      Render templates locally
uninstall     Uninstall a release
upgrade       Upgrade a release
version       Print the helm client version

3. Configure a domestic chart repository

The Microsoft mirror (http://mirror.azure.cn/kubernetes/charts) is highly recommended; basically all the charts from the official site are available there.

Alibaba Cloud repository ( https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts )

Official repository ( Kubeapps | Home ): the official chart repository is somewhat difficult to access from China.

Add chart repository

[root@k8s-master ~]# helm repo add stable http://mirror.azure.cn/kubernetes/charts

[root@k8s-master ~]# helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

[root@k8s-master ~]# helm repo update

View the configured chart repository

[root@k8s-master ~]# helm repo list

Delete the repository:

[root@k8s-master ~]# helm repo remove aliyun

[root@k8s-master ~]# helm repo list

4. Use a chart to deploy an Nginx application

1) Create chart

[root@k8s-master ~]# helm create nginx

[root@k8s-master ~]# tree nginx/

Detailed explanation:

nginx/
├── charts                      # charts this chart depends on
├── Chart.yaml                  # chart description file: name, version information, etc.
├── templates                   # directory holding the k8s template files
│   ├── deployment.yaml         # yaml template for a k8s Deployment resource
│   ├── _helpers.tpl            # files starting with an underscore can be referenced by other templates
│   ├── hpa.yaml                # CPU/memory autoscaling configuration for the service
│   ├── ingress.yaml            # Ingress configuration for accessing the service via a domain name
│   ├── NOTES.txt               # notes displayed to the user after helm install
│   ├── serviceaccount.yaml
│   ├── service.yaml            # yaml template for a kubernetes Service
│   └── tests
│       └── test-connection.yaml
└── values.yaml                 # variables used by the template files

2) Modify the service type in values.yaml to NodePort

[root@k8s-master ~]# cd nginx/

[root@k8s-master nginx]# vim values.yaml
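
In the scaffold generated by helm create, the service settings live in the service block of values.yaml; the edit looks like this (based on the default scaffold, not the author's file):

service:
  type: NodePort    # changed from ClusterIP
  port: 80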

3) Install the chart as a release named nginx (note the trailing dot in the command, which refers to the chart in the current directory)

[root@k8s-master nginx]# helm install nginx -f values.yaml .

4) View release

[root@k8s-master nginx]# helm ls #or helm list

5) Delete release

[root@k8s-master nginx]# helm delete nginx

6) Check pod status

[root@k8s-master nginx]# kubectl get pod

[root@k8s-master nginx]# kubectl get pod -o wide

7) Check svc status

[root@k8s-master nginx]# kubectl get svc

Visit 192.168.100.132:30281

5. Use a chart to deploy a Tomcat application

[root@k8s-master ~]# helm create tomcat

Creating tomcat

[root@k8s-master ~]# cd tomcat/

Modify deployment.yaml and service.yaml files

[root@k8s-master tomcat]# vim templates/deployment.yaml

[root@k8s-master tomcat]# vim templates/service.yaml
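
The diffs are not shown in the original. Judging by the two pods and the NodePort seen below, the edits presumably amount to something like this sketch (the tomcat:8 image tag is an assumption):

templates/deployment.yaml (excerpt):

spec:
  replicas: 2
  template:
    spec:
      containers:
        - name: tomcat
          image: tomcat:8
          ports:
            - containerPort: 8080

templates/service.yaml (excerpt):

spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080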

Create release

[root@k8s-master tomcat]# helm install tomcat .

View release

[root@k8s-master tomcat]# helm ls

View pods and svc

[root@k8s-master tomcat]# kubectl get pod

[root@k8s-master tomcat]# kubectl get pod -o wide

[root@k8s-master tomcat]# kubectl get svc

Prepare test pages

[root@k8s-master tomcat]# kubectl exec -it tomcat-67df6cd4d6-s7qxl /bin/bash

root@tomcat-67df6cd4d6-s7qxl:/usr/local/tomcat# mkdir webapps/ROOT

root@tomcat-67df6cd4d6-s7qxl:/usr/local/tomcat# echo "helm test1" > webapps/ROOT/index.jsp

[root@k8s-master tomcat]# kubectl exec -it tomcat-67df6cd4d6-tkp95 /bin/bash

root@tomcat-67df6cd4d6-tkp95:/usr/local/tomcat# mkdir webapps/ROOT

root@tomcat-67df6cd4d6-tkp95:/usr/local/tomcat# echo "helm test2" > webapps/ROOT/index.jsp

Access test:

Visit 192.168.100.132:32092

Visit 192.168.100.133:32092

delete

[root@k8s-master tomcat]# helm delete tomcat

[root@k8s-master tomcat]# helm ls

Upgrade (reapply after changing the yaml file)

[root@k8s-master tomcat]# helm install tomcat .

[root@k8s-master tomcat]# helm ls

[root@k8s-master tomcat]# kubectl get pod

[root@k8s-master tomcat]# vim templates/deployment.yaml

[root@k8s-master tomcat]# helm upgrade tomcat .

[root@k8s-master tomcat]# kubectl get pod

[root@k8s-master tomcat]# helm ls

rollback

[root@k8s-master tomcat]# helm rollback tomcat 1

[root@k8s-master tomcat]# helm ls

[root@k8s-master tomcat]# kubectl get pod

6. Render templates with variables

Test whether the templates render correctly

[root@k8s-master tomcat]# helm install --dry-run tomcat .

Define variables in the values.yaml file

[root@k8s-master tomcat]# cat values.yaml

[root@k8s-master tomcat]# cat templates/deployment.yaml

[root@k8s-master tomcat]# cat templates/service.yaml

        The variables referenced in the deployment.yaml and service.yaml files are pre-defined in values.yaml.

Release.Name is the release name given at helm install time.
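
The file contents are elided above. A minimal sketch of the pattern, with hypothetical variable names (the -dp suffix is inferred from the pod name shown at the end of this section):

values.yaml:

replicas: 2
image: tomcat
imageTag: "8"

templates/deployment.yaml (excerpt):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-dp
spec:
  replicas: {{ .Values.replicas }}
  template:
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Values.image }}:{{ .Values.imageTag }}"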

[root@k8s-master tomcat]# helm delete tomcat

Delete all redundant files in the templates directory, leaving only the two files used in this test (deployment.yaml and service.yaml).

[root@k8s-master tomcat]# ls templates/

[root@k8s-master tomcat]# helm install tomcat -f values.yaml .

[root@k8s-master tomcat]# helm ls

 

View the release status

[root@k8s-master tomcat]# helm status tomcat

[root@k8s-master tomcat]# kubectl get pod

View pod details

[root@k8s-master tomcat]# kubectl describe pod tomcat-dp-67df6cd4d6-78pxc


Origin blog.csdn.net/2302_77582029/article/details/132348710