Table of contents
One: Helm overview
1.1 What is helm?
1.2 Helm components and related terms
1.3 Architecture
Two: Deploy Helm
2.1 Install helm client
2.2 Install the Tiller server (an authorized user must be created)
2.3 Configure the helm repository
2.4 Test whether helm can be used normally
Three: Basic operations of the helm repository
3.1 How to view the configured repositories
3.2 Use helm to quickly deploy an application
3.2.1 Use the search command to search for applications
3.2.2 Choose to install according to the search content
3.2.3 View the status after installation
Four: Custom chart
4.1 Create a chart using commands
4.2 Create two yaml files in the templates folder
4.3 Start to install mychart
4.4 Update application
4.5 Delete application
4.6 Use of chart template
4.6.1 Define variables and values in values.yaml
4.6.2 Obtain the defined variable value in the specific yaml
Preface: A package manager like yum mainly solves dependency problems between packages, whereas helm solves the problem of installing services on Kubernetes. As the package management tool for k8s, helm can download packages (charts) from repositories; if we want to adjust the YAML files they contain, we can modify the corresponding properties so that the service is installed with the configuration we want.
One: Helm overview
1.1 What is helm?
Helm is a package management tool in the Kubernetes ecosystem, similar to Ubuntu's apt, CentOS's yum, or Python's pip, and is responsible for managing Kubernetes application resources. With helm you can uniformly package, distribute, install, upgrade, and roll back applications.
Helm is a client tool that simplifies installing and deploying containerized applications on Kubernetes. It helps developers define, install, and upgrade applications in Kubernetes, and applications can also be shared through helm. Common applications such as Redis, MySQL, and Jenkins are provided in Kubeapps Hub and can be deployed into your own Kubernetes cluster with a single helm command.
Helm is a tool for managing Kubernetes packages. Helm can provide the following capabilities:
- Create new charts
- Package the charts into a tgz file
- Interact with the chart repository
- Install and uninstall Kubernetes applications
- Manage the lifecycle of charts installed with Helm
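These capabilities map onto helm subcommands; here is a quick sketch (chart and release names are placeholders, and note that section 2 below installs Helm v2, where uninstalling is `helm delete` rather than v3's `helm uninstall`):

```shell
helm create mychart       # scaffold a new chart
helm package mychart      # package the chart into a .tgz archive

# Interact with chart repositories
helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
helm repo update

helm install stable/mysql # install an application (Helm v2 syntax)
helm list                 # list installed releases
helm delete <release-name>    # uninstall (Helm v2; "helm uninstall" in v3)
helm upgrade <release-name> mychart   # lifecycle: upgrade a release
helm rollback <release-name> 1        # lifecycle: roll back to revision 1
```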
1.2 Helm components and related terms
There are two main components in Helm: the Helm client and the Tiller server.
1.2.1 Helm
Helm is a command-line client tool. It is mainly used for creating, packaging, and releasing Kubernetes application Charts, as well as for creating and managing local and remote Chart repositories.
The client is responsible for the following tasks:
- Local chart development
- Manage repositories
- Interact with the Tiller server (send charts to be installed, request information about releases, request updates or uninstall installed releases)
1.2.2 Tiller
Tiller is Helm's server component, deployed inside the Kubernetes cluster. Tiller receives the Helm client's requests, generates the Kubernetes deployment files (called a Release in Helm) according to the Chart, and then submits them to Kubernetes to create the application. Tiller also provides Release upgrade, deletion, rollback, and other functions.
The Tiller server is responsible for the following tasks:
- Listen for requests from Helm clients
- Combine charts and configurations to build a release
- Install in Kubernetes and track subsequent releases
- Upgrade or uninstall charts by interacting with Kubernetes
1.2.3 Chart
Helm's package format, distributed as a tgz archive. Similar to APT's deb packages or YUM's rpm packages, a chart contains a set of YAML files that define Kubernetes resources.
1.2.4 Repository
Helm's software repository. A Repository is essentially a web server that stores a collection of Chart packages for users to download and provides an index file of the Repository's Charts for querying. Helm can manage multiple different Repositories at the same time.
1.2.5 Release
A Chart deployed in a Kubernetes cluster using the helm install command is called a Release.
Note: the Release in Helm is not a "version" in the usual sense. A Release here should be understood as an application instance that Helm deployed from a Chart package.
1.3 Architecture
The Helm architecture consists of the Helm client, the Tiller server, and the Chart repository. Tiller is deployed inside Kubernetes; the Helm client obtains the Chart installation package from the Chart repository and installs and deploys it into the Kubernetes cluster.
Chart Install process:
- Helm parses the Chart structure information from the specified directory or tgz file
- Helm passes the specified Chart structure and Values information to Tiller through gRPC
- Tiller generates a Release based on Chart and Values
- Tiller sends the Release to Kubernetes for execution.
Chart Update process:
- Helm parses the Chart structure information from the specified directory or tgz file
- Helm passes the name of the Release to be updated, the Chart structure, and the Values information to Tiller
- Tiller generates a Release and updates the History of the Release with the specified name
- Tiller sends Release to Kubernetes to run
Chart Rollback process:
- Helm passes the name of the Release to be rolled back to Tiller
- Tiller looks up History based on the name of the Release
- Tiller gets the last Release from History
- Tiller sends the previous Release to Kubernetes to replace the current Release
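On the client side, the rollback flow above is driven by two commands (the release name web1 is illustrative):

```shell
helm history web1      # list the revision History that Tiller keeps for the release
helm rollback web1 1   # roll web1 back to revision 1
```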
Two: Deploy Helm
There are many ways to install Helm; here it is installed from a binary release. For other installation methods, please refer to Helm's official documentation.
2.1 Install helm client
#wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz
[root@master ~]# wget http://101.34.22.188/k8s/helm-v2.14.3-linux-amd64.tar.gz
[root@master ~]# tar zxvf helm-v2.14.3-linux-amd64.tar.gz
[root@master ~]# mv linux-amd64/helm /usr/local/bin/
[root@master ~]# chmod +x /usr/local/bin/helm
[root@master ~]# echo 'source <(helm completion bash)' >> /etc/profile
[root@master ~]# . /etc/profile
Add common repositories:
- Add repository: helm repo add aliyun https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
- Update the repository: helm repo update
- Check out the repository: helm repo list
- Delete the repository: helm repo remove aliyun
2.2 Install the Tiller server (an authorized user must be created)
# Create an authorized user (service account) for Tiller
[root@master ~]# vim tiller-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
[root@master ~]# kubectl apply -f tiller-rbac.yaml
[root@master ~]# helm init --service-account=tiller
[root@master ~]# kubectl get pod -n kube-system | grep tiller
[root@master ~]# kubectl edit pod tiller-deploy-8557598fbc-tvfsj -n kube-system
// Edit the pod's yaml file and change the image it uses to an Alibaba Cloud mirror in China; the default Google image cannot be pulled
// Change the image specified in the spec field, from:
image: gcr.io/kubernetes-helm/tiller:v2.14.3
// to:
image: registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3
// After the change, save and exit; the new image will be pulled automatically (if it is not, work around it, e.g. manually pull the image on the node where the tiller container runs and restart that node's kubelet, or restart the container)
[root@master ~]# kubectl get pod -n kube-system | grep tiller
// As long as the tiller pod is running normally, we are done
tiller-deploy-8557598fbc-m986t 1/1 Running 0 7m54s
2.3 Configure the helm warehouse
[root@master ~]# helm repo list //view the configured repositories
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
//As shown above, the default repository is Google's, hosted overseas, and is very slow
local http://127.0.0.1:8879/charts
//Run the following command to switch to the Alibaba Cloud repository in China
[root@master ~]# helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
[root@master ~]# helm repo list //view again; the change has taken effect
NAME URL
stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
local http://127.0.0.1:8879/charts
[root@master ~]# helm repo update //update the helm repositories
[root@master ~]# helm version //view the helm version info; both the client and server versions must be shown before helm can be used normally
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
2.4 Test whether helm can be used normally
[root@master ~]# helm search mysql //search for MySQL
//What is listed are chart packages; the version shown is the version of the helm Chart package
[root@master ~]# helm inspect stable/mysql //view its details
[root@master ~]# helm fetch stable/mysql //download the found package locally
[root@master templates]# helm install stable/mysql //install MySQL online
Three: Basic operations of the helm repository
3.1 How to view the configured repository
helm repo list
helm search repo aliyun
Delete a repository:
helm repo remove aliyun
3.2 Use helm to quickly deploy an application
3.2.1 Use the search command to search for applications
helm search repo <app-name>
[root@master1 k8s]# helm search repo weave
NAME CHART VERSION APP VERSION DESCRIPTION
aliyun/weave-cloud 0.1.2 Weave Cloud is a add-on to Kubernetes which pro...
aliyun/weave-scope 0.9.2 1.6.5 A Helm chart for the Weave Scope cluster visual...
3.2.2 Choose to install according to the search content
After searching for an application, install it by passing the chart name found by the search to helm install, together with a name for the release.
3.2.3 View the status after installation
helm list
helm status <release-name>
[root@master1 k8s]# helm list #helm ls also works
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
ui-test default 1 2022-08-10 19:52:48.561399142 +0800 CST deployed weave-scope-0.9.2 1.6.5
[root@master1 k8s]#
Of course, we can also use the kubectl command to check whether the relevant pod is created successfully
Four: Custom chart
Customization is needed because not all charts run successfully with the default configuration; some require environment dependencies, such as a PV. So we need to customize the chart's configuration options. There are two ways to pass configuration data during installation:
- --values (or -f): Specify a YAML file with overrides. This can be specified multiple times, the rightmost file takes precedence
- --set: Specify an override on the command line. If both are used, --set takes precedence
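For example (the file name and keys here are illustrative, matching the values defined later in 4.6):

```shell
# Override defaults from a file; with several -f flags the rightmost file wins
helm install web1 mychart/ -f custom-values.yaml

# Override single values on the command line; --set wins over -f
helm install web1 mychart/ --set replicas=3 --set tag=1.17
```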
4.1 Create a chart using commands
helm create <chart-name>
[root@master1 k8s]# helm create mychart
Creating mychart
[root@master1 k8s]#
[root@master1 k8s]# ls mychart/
charts Chart.yaml templates values.yaml
[root@master1 k8s]# cd mychart
[root@master1 mychart]# ls -al
total 12
drwxr-xr-x 4 root root   93 Oct 23 20:01 .
drwxr-xr-x 6 root root  233 Oct 23 20:01 ..
drwxr-xr-x 2 root root    6 Oct 23 20:01 charts
-rw-r--r-- 1 root root  905 Oct 23 20:01 Chart.yaml
-rw-r--r-- 1 root root  342 Oct 23 20:01 .helmignore
drwxr-xr-x 3 root root  146 Oct 23 20:01 templates
-rw-r--r-- 1 root root 1490 Oct 23 20:01 values.yaml
[root@master1 mychart]#
Analyze the meaning of the relevant directories:
charts: an ordinary folder, created empty
Chart.yaml: configures the attribute information of the current chart, which can be provided to the template files as global variables
templates: the template directory, containing many yaml template files. When we use helm to create an application, helm essentially executes these yaml files for us.
[root@master1 templates]# ls -al
total 24
drwxr-xr-x 3 root root  146 Oct 23 20:01 .
drwxr-xr-x 4 root root   93 Oct 23 20:05 ..
-rw-r--r-- 1 root root 1626 Oct 23 20:01 deployment.yaml
-rw-r--r-- 1 root root 1847 Oct 23 20:01 _helpers.tpl
-rw-r--r-- 1 root root 1030 Oct 23 20:01 ingress.yaml
-rw-r--r-- 1 root root 1581 Oct 23 20:01 NOTES.txt
-rw-r--r-- 1 root root  207 Oct 23 20:01 serviceaccount.yaml
-rw-r--r-- 1 root root  361 Oct 23 20:01 service.yaml
drwxr-xr-x 2 root root   34 Oct 23 20:01 tests
[root@master1 templates]#
Because we want to customize the chart ourselves, we can either modify these yaml files or delete the default generated ones and rewrite them from scratch.
[root@master1 templates]# rm -rf *
[root@master1 templates]# ls
[root@master1 templates]# ls -al
total 0
drwxr-xr-x 2 root root  6 Oct 23 20:07 .
drwxr-xr-x 4 root root 93 Oct 23 20:05 ..
[root@master1 templates]#
values.yaml: a global variable file, supplying values to the yaml files in templates
4.2 Create two yaml files in the templates folder
To obtain a service.yaml conveniently, we create a web1 service in advance, export its yaml, and then delete the service once we have the service.yaml file.
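As a minimal sketch, the two files might look like this before templating (the nginx image, labels, and ports are assumptions; section 4.6 replaces the hard-coded values with variables):

```yaml
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.16
        ports:
        - containerPort: 80
---
# templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: web1
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
```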
4.3 Start to install mychart
[root@master1 k8s]# helm install web1 mychart/
NAME: web1
LAST DEPLOYED: Sat Oct 23 20:25:23 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
[root@master1 k8s]#
[root@master1 k8s]# kubectl get svc,pod
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 108m
service/ui-test-weave-scope ClusterIP 10.1.107.92 <none> 80/TCP 33m
service/web1 NodePort 10.1.25.42 <none> 80:32142/TCP 25s
NAME READY STATUS RESTARTS AGE
pod/weave-scope-agent-ui-test-gb42z 1/1 Running 0 33m
pod/weave-scope-frontend-ui-test-77f49fbcd5-j6mrs 1/1 Running 0 33m
pod/web1-74b5695598-t65gj 1/1 Running 0 25s
[root@master1 k8s]#
4.4 Update application
helm upgrade <release-name> <chart-name>
To update an application, we generally change the macro-defined variables (the values).
[root@master1 k8s]# helm upgrade web1 mychart/
Release "web1" has been upgraded. Happy Helming!
NAME: web1
LAST DEPLOYED: Sat Oct 23 20:29:06 2022
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
[root@master1 k8s]#
4.5 Delete application
[root@master1 ~]# helm uninstall ui-test
release "ui-test" uninstalled
[root@master1 ~]#
4.6 Use of chart template
Helm manages our yaml files as a whole and also lets them be reused efficiently. Let's try templating the yaml files so they can be reused: templates are rendered dynamically, with parameters passed in at install time. The values come from values.yaml.
In general, the image, tag, label, port, and replicas in a yaml file differ between deployments, so we treat them as macro-defined variables.
4.6.1 Define variables and values in values.yaml
[root@master1 mychart]# cat values.yaml
# Default values for mychart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicas: 1
image: nginx
tag: 1.16
label: nginx
port: 80
podSecurityContext: {}
[root@master1 mychart]#
4.6.2 Obtain the defined variable value in the specific yaml
Use a global variable with an expression of the form:
{{ .Values.variableName }}
{{ .Release.Name }} obtains the name of the current release, ensuring that the name differs for each deployment. This is a built-in object attribute of helm.
Built-in objects commonly used by helm:
- Release.Name: the release name
- Release.Namespace: the release namespace
- Release.Service: the name of the service rendering the release
- Release.Revision: the release revision number, incremented from 1
The first is values.yaml
[root@master1 mychart]# cat values.yaml
# Default values for mychart.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
replicas: 1
image: nginx
tag: 1.16
label: nginx
port: 80
podSecurityContext: {}
[root@master1 mychart]#
Then the content in templates.
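As a sketch, a templated templates/deployment.yaml that consumes the variables from values.yaml above might look like this (the `-deploy` name suffix is illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-deploy
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.label }}
  template:
    metadata:
      labels:
        app: {{ .Values.label }}
    spec:
      containers:
      - name: {{ .Values.image }}
        image: {{ .Values.image }}:{{ .Values.tag }}
        ports:
        - containerPort: {{ .Values.port }}
```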
After the relevant files are edited, verify that the chart renders without errors, which indicates success, and then do the real deployment.
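One way to verify is to render the chart without actually creating anything in the cluster (the release name web2 is illustrative):

```shell
# Render the templates locally and print the resulting yaml
helm template mychart/

# Or do a dry run that also talks to the cluster, with verbose output
helm install web2 mychart/ --dry-run --debug
```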