Helm on k8s: deployment, installation, upgrade and rollback (chart, helm, tiller, StorageClass)

I. Introduction to Helm

Helm is the package manager for Kubernetes. It is to Kubernetes what yum is to CentOS, pip is to Python, and npm is to JavaScript.

What problems does introducing Helm solve for cluster management?

  • Deploying infrastructure more easily, such as gitlab, postgres, prometheus, grafana, etc.
  • Deploying your own applications more easily: configure a Chart for an in-house project once, combine Helm with CI, and deploying the application to k8s becomes as simple as a single command line

1. What Helm is used for

Helm packages Kubernetes resources (such as Deployments, Services, and Ingresses) into a chart, and charts are saved to chart repositories, which store and share them. Helm supports release management of application configurations, simplifying version control of Kubernetes applications and operations such as packaging, publishing, deleting, and upgrading.

As a package manager for Kubernetes, Helm manages charts — preconfigured installation packages of resources, somewhat like APT on Ubuntu or yum on CentOS.

Helm can:

  • Create new charts
  • Package charts into tgz archives
  • Upload charts to, and download charts from, chart repositories
  • Install and uninstall charts in a Kubernetes cluster
  • Manage the release cycle of charts installed with Helm

With Helm you can do the following:

  • Manage Kubernetes manifest files
  • Manage Helm installation packages (charts)
  • Distribute Kubernetes applications as charts

2. Helm components and terminology

A common problem when first coming into contact with Helm is that some of its concepts and terms are confusing; I ran into this myself when I started learning Helm.

So let's first look at these Helm concepts and terms.

Package management tooling:

  • Helm: the packaging tool for Kubernetes applications, and also the name of the command-line tool.

  • Helm CLI: the Helm client, run locally from the command line.

  • Tiller: Helm's server side, deployed inside the Kubernetes cluster, which processes the commands sent by the Helm client.

    The role of helm: like the yum command on CentOS 7, it manages packages — except the packages managed here are the various applications installed as containers on k8s.

    The role of tiller: by this analogy, like the package repositories on CentOS 7, e.g. the xxx.repo files under the /etc/yum.repos.d directory.

  • Repository: a Helm chart repository. A repository is essentially a web server that holds chart packages for download, plus a manifest file listing the charts available for query. Helm can work with several different repositories at the same time.

  • Chart: a Helm package, containing the images, dependencies, and resource definitions needed to run an application on Kubernetes.

  • Release: an instance of an application obtained by running a chart.

    Note that a Release here is different from the usual notion of a version release: a Release is one deployment of an application packaged as a Chart, created by Helm.

    In fact, "Deployment" would be a more fitting name for a Helm Release, but since the term Deployment was already taken by Kubernetes, Helm adopted Release instead.

Commands (public charts can be browsed at http://hub.kubeapps.com/):

completion  # generate an autocompletion script for the given shell (bash or zsh)
create      # create a new chart with the given name
delete      # delete the release with the given name from Kubernetes
dependency  # manage a chart's dependencies
fetch       # download a chart from a repository and (optionally) unpack it into a local directory
get         # download a named release
help        # list all help topics
history     # fetch release history
home        # display the location of HELM_HOME
init        # initialize Helm on both the client and the server
inspect     # inspect chart details
install     # install a chart archive
lint        # run a syntax check on a chart
list        # list releases
package     # package a chart directory into a chart archive
plugin      # add, list, or remove helm plugins
repo        # add, list, remove, update, and index chart repositories
reset       # uninstall Tiller from the cluster
rollback    # roll back a release to a previous revision
search      # search chart repositories for a keyword
serve       # start a local http web server
status      # display the status of the named release
template    # render templates locally
test        # test a release
upgrade     # upgrade a release
version     # print the client/server version information
dep         # analyze a chart and download its dependencies

3. Component architecture


The Helm client is the user-facing command-line tool, responsible for:

  • Developing charts locally
  • Managing repositories
  • Interacting with the Tiller server
  • Sending charts to be installed
  • Querying release information
  • Requesting upgrades or uninstalls of existing releases

The Tiller server is deployed inside the Kubernetes cluster; the Helm client interacts with it, and it in turn interacts with the Kubernetes API server. The Tiller server is responsible for:

  • Listening for requests from the Helm client
  • Combining a chart and its configuration to build a release
  • Installing charts into the Kubernetes cluster and tracking the resulting releases
  • Upgrading or uninstalling charts by interacting with Kubernetes

Simply put: the client manages charts, and the server manages releases.

helm client

The helm client is a command-line tool responsible for managing charts, repositories, and releases. It sends requests to tiller over a gRPC API (using kubectl port-forward to map tiller to a local port, then communicating with tiller through the mapped port), and tiller manages the corresponding Kubernetes resources.

tiller server

tiller receives requests from the helm client, translates them into operations on the relevant Kubernetes resources, and is responsible for managing (installing, querying, upgrading, deleting, etc.) and tracking those resources. For ease of management, tiller stores release information in ConfigMaps inside Kubernetes.
tiller exposes a gRPC API for the helm client to call.

4. How it works

Chart install process:

  • helm parses the chart and configuration information from the specified directory or tgz file
  • helm passes the chart and the specified configuration information (Values) to tiller over gRPC
  • tiller generates a release from the chart and Values
  • tiller sends the release to Kubernetes to run

Chart upgrade process:

  • helm parses the chart and configuration information from the specified directory or tgz file
  • helm passes the name of the release to upgrade, plus the chart and Values, to tiller
  • tiller generates the new release and updates the history of the release with that name
  • tiller sends the release to Kubernetes to run

Chart rollback process:

  • helm passes the name of the release to roll back to tiller
  • tiller looks up the history of that release by name
  • tiller retrieves the previous release from the history
  • tiller sends that previous release to Kubernetes to replace the current release
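The rollback flow can be sketched without a cluster. The toy model below uses one file per revision for a hypothetical release named myrelease, mirroring how Tiller keeps one history record per revision; note that a rollback appends a new revision copied from the target one rather than rewriting history:

```shell
# fresh scratch directory for the demo
rm -rf /tmp/helm-history-demo
mkdir -p /tmp/helm-history-demo && cd /tmp/helm-history-demo

# revision 1: the original install; revision 2: an upgrade
echo "image: mysql:5.7.14" > myrelease.v1
echo "image: mysql:5.7.15" > myrelease.v2

# "rollback to revision 1": compute the next revision number and
# copy revision 1's content into it -- old revisions are kept
next=$(( $(ls myrelease.v* | wc -l) + 1 ))
cp "myrelease.v1" "myrelease.v$next"

ls myrelease.v*            # myrelease.v1  myrelease.v2  myrelease.v3
cat "myrelease.v$next"     # image: mysql:5.7.14
```

This is why `helm history` still shows the superseded revisions after a rollback.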

Chart dependency handling:

When processing a chart, tiller merges the chart and all of its direct dependencies into a single release, which is passed to Kubernetes in one go. tiller is therefore not responsible for managing the startup order between dependencies; the applications inside the chart must handle these dependencies themselves.
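In Helm v2, those dependencies are declared in the chart's requirements.yaml and pulled into the chart's charts/ directory with `helm dependency update`. A minimal hypothetical example (the repository URL is the Aliyun mirror used later in this article):

```yaml
# requirements.yaml (Helm v2) -- `helm dependency update` downloads
# the listed charts into the parent chart's charts/ directory
dependencies:
  - name: mysql
    version: 0.3.5
    repository: https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
```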

II. Deploying the Helm installation tool (client)


Prerequisites

  • Kubernetes 1.5 or above
  • The cluster can reach an image registry
  • The host that runs helm commands can access the Kubernetes cluster

(1) Download the helm package

[root@master ~]# docker pull gcr.io/kubernetes-helm/tiller:v2.14.3
[root@master ~]# wget https://get.helm.sh/helm-v2.14.3-linux-amd64.tar.gz

(2) Copy the helm binary into place

[root@master helm]# mv linux-amd64/helm /usr/local/bin/
//move the binary into /usr/local/bin/
[root@master helm]# chmod +x /usr/local/bin/helm 
//grant execute permission
[root@master helm]# helm help
//verify the installation succeeded

(3) Set up command auto-completion

[root@master helm]#  echo 'source <(helm completion bash)' >> /etc/profile
[root@master helm]# . /etc/profile
//reload the profile

2. Install the Tiller server (the server side; an authorized service account must be created)

[root@master ~]# vim tiller-rbac.yaml   #create the authorized service account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

Apply it:

[root@master ~]# kubectl apply -f tiller-rbac.yaml  

(1) Initialize the Tiller server environment

[root@master helm]# helm init  --service-account=tiller
//Tiller is helm's server side (the image is hosted abroad, so the command may need several attempts)

Check:

[root@master helm]# kubectl get deployment. -n kube-system 


You will find the deployment is not ready yet: the default image is pulled from Google's registry and could not be downloaded.

(2) Point the stable repo at an Aliyun mirror

[root@master helm]# helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

Check:

[root@master helm]# helm version


3. Deploy an instance: helm install <chart> -n <release name>

After an install, helm prints:

1. a description of the release;

2. a description of the release's resources;

3. instructions for using the release.

(1) Deploy and install a MySQL service with Helm.

[root@master ~]# helm search mysql
//search for charts related to mysql


[root@master ~]# helm install stable/mysql -n mysql 
//install a release named mysql from the stable/mysql chart

Check:

[root@master ~]# helm list


(2) The chart directory after unpacking:

[root@master ~]# cd .helm/cache/archive
//look inside helm's chart cache
[root@master archive]# ls


[root@master mysql]# helm fetch stable/mysql
//download the stable/mysql chart package directly
[root@master archive]# tar -zxf mysql-0.3.5.tgz 
//unpack the mysql chart
[root@master archive]# tree -C mysql 
//view the unpacked mysql directory as a tree; -C enables color


Chart.yaml: summary information for the chart (name and version are the two mandatory fields; the rest are optional).

README.md: usage documentation for the chart.

templates: the templates for the resource objects packaged in the chart.

deployment.yaml: Go template file for the Deployment controller.

_helpers.tpl: files whose names start with an underscore are not deployed to k8s; they can hold reusable template snippets.

NOTES.txt: information printed after the chart is deployed to the cluster.

service.yaml: Go template file for the Service.

values.yaml: the chart's default values, which the YAML templates under templates/ can reference.
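A minimal chart with the structure described above can also be assembled by hand (`helm create` scaffolds a richer one). A sketch using only the shell, for a hypothetical chart named demo:

```shell
rm -rf /tmp/demo
mkdir -p /tmp/demo/templates

# Chart.yaml: name and version are the two mandatory fields
cat > /tmp/demo/Chart.yaml <<'EOF'
name: demo
version: 0.1.0
description: a minimal hand-assembled chart
EOF

# values.yaml: defaults the templates can reference as .Values.*
cat > /tmp/demo/values.yaml <<'EOF'
replicaCount: 1
EOF

# NOTES.txt: printed by helm after a successful install
printf 'demo was deployed\n' > /tmp/demo/templates/NOTES.txt

ls -R /tmp/demo
```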

(3) Deploy and install a MySQL service with Helm.

[root@master ~]# docker pull mysql:5.7.14
[root@master ~]# docker pull mysql:5.7.15
[root@master ~]# docker pull busybox:1.25.0
//pull the required mysql images
[root@master ~]# helm delete mysql --purge 
//delete the previous mysql release and purge its record

(4) Set up the shared directory

[root@master ~]# yum -y install rpcbind nfs-utils
//install nfs
[root@master ~]# mkdir /data
//create the shared directory
[root@master ~]# vim /etc/exports
/data *(rw,sync,no_root_squash)
//set the export permissions of the shared directory
[root@master ~]# systemctl restart rpcbind
[root@master ~]# systemctl restart nfs-server
//restart the nfs services

Test it:
[root@master ~]# showmount -e


(5) Create a PV

[root@master xgp]# vim nfs-pv1.yml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysqlpv
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /data/mysqlpv
    server: 192.168.1.21
[root@master xgp]# mkdir /data/mysqlpv
//create the required directory

Apply it:

[root@master xgp]# kubectl apply -f nfs-pv1.yml

Check:

[root@master xgp]# kubectl get pv


(6) Create a MySQL service

[root@master xgp]# helm install stable/mysql -n bdqn-mysql --set mysqlRootPassword=123.com

Check:

[root@master xgp]# kubectl get pod


(7) Enter the pod and take a look

[root@master xgp]# kubectl exec -it bdqn-mysql-mysql-7b89c7b99-8ff2r -- mysql -u root -p123.com
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.01 sec)

4. Upgrading and rolling back the MySQL service

(1) Upgrade the MySQL service

[root@master mysql]# helm upgrade --set imageTag=5.7.15 bdqn-mysql stable/mysql -f values.yaml 

Check:

[root@master mysql]# kubectl get deployments. -o wide


(2) Roll back the MySQL service

[root@master mysql]#  helm history bdqn-mysql
//view the revision history


Roll back to revision 1:

[root@master mysql]# helm rollback bdqn-mysql 1  

Check:

[root@master mysql]# kubectl get deployments. -o wide


III. Mini lab

When deploying MySQL, how do you enable a StorageClass, change the Service resource object's type to NodePort, and make use of them?

Then perform an upgrade and rollback on the instance deployed above: during the upgrade, change the image to mysql:5.7.15; afterwards, roll back to the original version.

1. Create the NFS service.

Install the packages NFS needs:

[root@node02 ~]# yum -y install nfs-utils  rpcbind

Create the shared directory:

[root@master ~]# mkdir -p /xgp/wsd

Set the export permissions of the shared directory:

[root@master ~]# vim /etc/exports
/xgp *(rw,sync,no_root_squash)

Start nfs and rpcbind (on all three nodes):

[root@master ~]# systemctl start nfs-server.service 
[root@master ~]# systemctl start rpcbind

Test it:

[root@master ~]# showmount -e


2. Create the StorageClass resource object.

(1) Create the RBAC permissions.

[root@master yaml]# vim rbac.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
  namespace: default
rules:
   -  apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
   -  apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
   -  apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
   -  apiGroups: [""]
      resources: ["events"]
      verbs: ["watch", "create", "update", "patch"]
   -  apiGroups: [""]
      resources: ["services", "endpoints"]
      verbs: ["get","create","list", "watch","update"]
   -  apiGroups: ["extensions"]
      resources: ["podsecuritypolicies"]
      resourceNames: ["nfs-provisioner"]
      verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default        #required field
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

Apply it:

[root@master yaml]# kubectl apply -f rbac.yaml 

(2) Create a Deployment resource object, using a Pod in place of a real NFS server.

[root@master yaml]# vim nfs-deployment.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          volumeMounts:
            - name: nfs-client-root
              mountPath:  /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: xgp
            - name: NFS_SERVER
              value: 192.168.1.21
            - name: NFS_PATH
              value: /xgp/wsd
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.21
            path: /xgp/wsd

Apply it:

[root@master yaml]# kubectl apply -f nfs-deployment.yaml 

Check:

[root@master yaml]# kubectl get pod


(3) Create the StorageClass YAML file

[root@master yaml]# vim xgp-storageclass.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: xgp-nfs
provisioner: xgp  #tied to the Deployment above via the provisioner field
reclaimPolicy: Retain

Apply it:

[root@master yaml]# kubectl apply -f xgp-storageclass.yaml

Check:

[root@master yaml]# kubectl get sc
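Once the StorageClass exists, any PersistentVolumeClaim that names it gets a PV provisioned on demand by the nfs-client-provisioner. A hypothetical claim for illustration (the name test-claim is made up):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim             # hypothetical name
spec:
  storageClassName: xgp-nfs    # matches the StorageClass created above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```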


3. Create a MySQL service

[root@master ~]# docker pull mysql:5.7.14
[root@master ~]# docker pull mysql:5.7.15
[root@master ~]# docker pull busybox:1.25.0
//pull the required images
[root@master yaml]# helm fetch stable/mysql
//download the stable/mysql chart package directly
[root@master yaml]# tar -zxf mysql-0.3.5.tgz 
//unpack the mysql package
[root@master yaml]# cd mysql/
[root@master mysql]# vim values.yaml 
//edit values.yaml: add the storageClass setting and change the svc type to NodePort
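The two edits amount to a values.yaml fragment roughly like the one below. The key names are assumptions based on the stable/mysql chart of that era — check your chart version's own values.yaml, and if it has no service type key, edit templates/service.yaml directly:

```yaml
persistence:
  enabled: true
  storageClass: "xgp-nfs"    # the StorageClass created in step 2
  accessMode: ReadWriteOnce
  size: 8Gi
service:
  type: NodePort             # assumed key; exposes mysql outside the cluster
```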


[root@master mysql]# helm install stable/mysql -n xgp-mysql --set mysqlRootPassword=123.com -f values.yaml 
//start a mysql pod with root password 123.com, based on values.yaml and stable/mysql

Check:

[root@master mysql]# kubectl get svc


[root@master mysql]# kubectl get pod -o wide


4. Enter the pod and take a look

[root@master mysql]#  kubectl exec -it xgp-mysql-mysql-67c6fb5f9-dn7s2 -- mysql -u root -p123.com
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.01 sec)

5. Upgrading and rolling back the MySQL service

(1) Upgrade the MySQL service

[root@master mysql]# helm upgrade --set imageTag=5.7.15 xgp-mysql stable/mysql -f values.yaml 

Check:

[root@master mysql]# kubectl get deployments. -o wide


(2) Roll back the service

[root@master mysql]#  helm history xgp-mysql
//view the revision history


Roll back to revision 1:

[root@master mysql]# helm rollback xgp-mysql 1  

Check:

[root@master mysql]# kubectl get deployments. -o wide


6. Enter the pod and check again

[root@master mysql]#  kubectl exec -it xgp-mysql-mysql-67c6fb5f9-dn7s2 -- mysql -u root -p123.com
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.01 sec)

IV. Summary

As a package manager and deployment tool for Kubernetes applications, Helm provides packaging, publishing, version management, deployment, upgrade, and rollback. By packaging applications as charts, Helm simplifies Kubernetes application management and makes it friendlier to users.

Usage notes

The helm client's functionality is quite simple; the official documentation covers it directly.

A few notes from experience:

  • All of Helm's features revolve around charts, releases, and repositories;
  • To initialize only the client-side configuration and a local repository, run helm init --client-only --skip-refresh;
  • Charts are located through the repositories directory under HELM_HOME (~/.helm by default); the important files and directories are cache and repositories/cache;
  • To change the url in a chart index.yaml, run helm serve --url http://demo.com to reindex;
  • Dependency management: declare dependencies in requirements and set values for subcharts;
  • Managing releases through separate install and upgrade commands is inconvenient, since you must track the chart's version relationships yourself; it would be better to merge install and upgrade into something like the apply command in k8s;
  • The package command's -u flag updates dependencies; it is advisable to package before pushing to a repository, otherwise incomplete dependency checks may cause errors later;
  • Release information is stored in k8s ConfigMaps, named in the form release_name.v1; the rollback features work from the information stored in those ConfigMaps;
  • The Helm client talks to the TillerServer in k8s via k8s port-forward, and port-forward requires socat to be deployed on the target node;
  • TillerServer does not have to be deployed inside k8s; in that case the Helm client uses the HELM_HOST environment variable to specify TillerServer's address and port;
  • Deploying TillerServer inside k8s is still recommended — Helm is a CNCF project, so go cloud-native all the way;
  • When writing charts, refer to the official best practices: The Chart Best Practices Guide.

Shortcomings

Although Helm provides install and upgrade commands for installing or updating a release, it puts the burden of tracking release state on the user. For example, if a release has not been installed yet, an upgrade fails; conversely, if the release already exists, an install fails. In most cases I don't need to know the release's state: whether it exists or not, I want the command to carry out my intent and leave the release in the state I asked for. The apply command in k8s does this well — it does not require the user to track resource state.


Origin: blog.51cto.com/14320361/2475004