K8S Application Management with Helm and Operator

This article is compiled from the technical sharing given by Li Pinghui, an R&D engineer at Rancher Labs, on the evening of March 7th. Li Pinghui is familiar with the design and implementation of application containerization solutions and with continuous integration solutions, follows and participates in the development of the K8S ecosystem, and is responsible for the research and development of continuous integration services at Rancher China. Search for the WeChat account RancherLabsChina and add the Rancher assistant as a friend to join the official technical exchange group and take part in the next sharing in real time~

Hello everyone, what we are sharing today is K8S application management based on Helm and Operator. We know that Kubernetes provides multiple resource types described at the granularity of individual services. Describing an application system, especially one with a microservice architecture, requires combining a large number of Kubernetes resources. For stateful applications, complex operation and maintenance work and a lot of domain knowledge are often required on top of that.

Tonight's sharing will introduce how to use Helm, the community-led package management solution for Kubernetes applications, to simplify application deployment and management; how to create application templates and build a Kubernetes version of an application store; and how to use Operators to automate application operation and maintenance.

We know that the K8S community is organized by domain into interest groups, called SIGs (Special Interest Groups). Tonight's topic belongs to the Apps domain; these projects were born to solve problems in K8S application management.

1. Helm

Let's start from scratch. Suppose we have already deployed a K8S cluster. Whether it is GKE or EKS, this is no longer difficult, because deploying K8S is not as troublesome as it used to be. Then we containerize the application. Next, we try to deploy our application to K8S.

In fact, K8S has many kinds of resource objects:

For a microservice architecture, different services run on the cluster, and you may have to manage Deployments, Services, StatefulSets, permission controls, and so on. You will find there are many other related things to consider once the application is deployed. For example, different teams manage the same application from development to testing to production, and in different environments the same set of resources may require different configurations: during development you may not need a PV, just some temporary storage, but in production you must have persistent storage; and configurations may be shared between teams and then archived.

In addition, you not only have to deploy the application's resources, you also have to manage its life cycle, including upgrades, replacements, and eventual deletion. We know that Deployments in K8S are version-managed, but looking at the whole application or an application module, there may be ConfigMaps and other resources associated with it besides the Deployment. At this point we start to wonder: is there a tool that can manage these applications at a higher level? This is where the community's package management tool comes in: Helm.

We know that Kubernetes means the helmsman, the one who steers the ship, and Helm is the wheel that steers it. In Helm, an application package is called a Chart, which literally means a nautical chart. So what is a Chart?

It is a definition and description of an application. It includes some metadata about the application, as well as templates and configuration for the application's K8S resource definitions. Charts can also include documentation, and they can be stored in a chart repository.

How do we use Helm? Helm is a binary tool: as long as you download it and have a kubeconfig configured, you can deploy and manage applications in K8S. What can Helm do? Helm is actually divided into two parts, a client and a server. After helm init, it deploys a server called Tiller into K8S, and this server manages the complete life cycle of Helm Chart application packages.

Installing a Chart produces a Release:

Now let's talk about the Helm Chart. It is essentially an application package; you can think of it as something like a dpkg or rpm package, except that it is the concept of an application package in the K8S domain. You can deploy the same chart package multiple times, and each installation generates a Release. A Release is an installed instance of a chart.

Now that Tiller is deployed, we can manage our applications:

$ helm install <chart>
# (stable/mariadb, ./nginx-1.2.3.tgz, ./nginx, https://example.com/charts/nginx-1.2.3.tgz)
$ helm upgrade <release>
$ helm delete <release>

For common operations: to install an application package you use helm install, which supports different chart references, such as a local chart package or a path in a remote repository. For application updates, use helm upgrade. To remove a release, use helm delete.

Each Helm Release generates a corresponding ConfigMap that stores the Release's information in K8S. This effectively ties an application's life-cycle history directly to K8S: even if Tiller goes down, as long as that configuration information is still there, the application's release and iteration history is not lost. You can, for example, roll back to a previous version or inspect its upgrade history.
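
For example, the release history and rollback can be inspected with the standard Helm commands (a sketch, assuming a release named demoapp already exists):

$ helm history demoapp        # list the revisions of the release
$ helm rollback demoapp 1     # roll back to revision 1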

Next we look at the structure of a chart.

$ helm create demoapp


helm create scaffolds a general framework for building your own application package. For example, this application is called demoapp, and it will contain the following content:
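
The generated layout looks roughly like this (the exact files vary slightly across Helm versions):

demoapp/
  Chart.yaml          # chart metadata
  values.yaml         # default configuration values
  charts/             # chart dependencies
  templates/          # templated K8S manifests
    deployment.yaml
    service.yaml
    _helpers.tpl
    NOTES.txt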

The core of these is templates, that is, templated K8S manifest files, which contain resource definitions such as Deployments and Services. What we have just created is a default application: a Deployment that runs nginx.

The templates are essentially Go templates, to which Helm adds quite a lot: custom metadata, extra function libraries, and programming-like constructs such as conditionals and pipelines. These make the templates very expressive.
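
A simplified excerpt of such a templated deployment.yaml might look like the following (shortened from what helm create generates; the names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-demoapp
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}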

With templates in place, how do we feed our configuration into them? That is what the values file is for. These two parts together are the core of a chart.

The deployment above is a Go template that references preset configuration variables, and those variables are read from the values file. In this way we have an application package template that can be deployed in different environments with different configurations. Different values can also be supplied during helm install/upgrade.
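
A matching values.yaml might look something like this (default values; the keys simply mirror what the templates reference):

replicaCount: 1
image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 80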

Configuration options:

$ helm install --set image.tag=latest ./demoapp
$ helm install -f stagingvalues.yaml ./demoapp

For example, you can set a single variable with --set, or supply an entire values file for a deployment, and your configuration will override the default configuration. Therefore, different teams can use different configuration files while managing applications from the same application package. Chart.yaml holds the chart's metadata and describes the chart package itself.

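A minimal Chart.yaml looks roughly like this (as generated by helm create; the values here are placeholders):

apiVersion: v1
name: demoapp
version: 0.1.0
description: A Helm chart for Kubernetes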

In addition, there are documentation files such as NOTES.txt, which is normally placed under templates. Its content is printed automatically when you install the chart or view the deployment details (helm status). It usually carries a short description of the deployed application and how to access it.

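A simple NOTES.txt, which is itself a template, might contain something like the following (illustrative only):

Thank you for installing {{ .Chart.Name }}.

Your release is named {{ .Release.Name }}.

To learn more about the release, try:

  $ helm status {{ .Release.Name }}
  $ helm get {{ .Release.Name }}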

Besides templating, another role of Helm charts is managing dependencies.

Say you deploy WordPress; it may depend on a database service. You can put the database service's chart into the chart's dependency directory, or declare it as a dependency, and dependency management between applications becomes very convenient, as the sketch below shows.
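
In Helm 2, dependencies are declared in a requirements.yaml at the chart root and fetched with helm dependency update (the version and repository below are placeholders):

dependencies:
  - name: mariadb
    version: "4.3.1"
    repository: "https://example.com/charts"

$ helm dependency update ./wordpress   # downloads the dependency charts into charts/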

Now, if we have created our own application package and want a repository to manage it and share it between teams, what do we do? A chart repository is actually just an HTTP server: as long as you put your charts and an index file on it, helm install can fetch them through the repository path.

The Helm tool itself also provides a simple command, helm serve, that spins up a repository for development and debugging.
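
Packaging and publishing a chart therefore only takes a few commands (a sketch; https://example.com/charts is the same placeholder repository used in this article):

$ helm package ./demoapp                              # produces demoapp-0.1.0.tgz
$ helm repo index . --url https://example.com/charts  # (re)generates index.yaml
$ helm serve                                          # local test repository for debugging
$ helm repo add myrepo https://example.com/charts
$ helm install myrepo/demoapp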

For example, the repository directory structure of https://example.com/charts:

As for Helm itself, the community already maintains a lot of application packages, generally hosted under Kubernetes projects. For example, when Helm is installed it has a stable repository configured by default, containing all kinds of application packages. Stable and incubator chart repositories: https://github.com/kubernetes/charts

In addition, the community provides a UI for managing charts, similar in concept to the Rancher Catalog app store. It is called Monocular. These projects are developed very actively and keep iterating along with K8S.

Monocular, the UI management project for charts: https://github.com/kubernetes-helm/monocular

So how do we deploy this K8S version of an application store? It is actually very simple: with Helm in place, you only need to add Monocular's repository and then helm install it, and the application store is deployed into your K8S cluster. It is deployed through Helm Tiller as well. On it we can search for charts and manage repositories, such as the official stable repository or projects in the incubator.
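
For example (the repository URL below is the one Monocular documented at the time and may have changed since):

$ helm repo add monocular https://kubernetes-helm.github.io/monocular
$ helm install monocular/monocular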


You can also manage deployed applications on it: search for an application, click Deploy, and it gets deployed. However, there are still many things to improve. For example, the deployment here cannot be configured with arbitrary parameters; you can only enter a namespace. There are also limitations in management, such as not being able to update applications easily from the UI.

We will also cooperate with some public cloud vendors around Helm charts, because the benefit of Helm charts is that one application package can be deployed in many places. Public cloud services, for example, can build application orchestration and management on top of them and conveniently offer a service to different users. Rancher will also add support for Helm charts in the 2.0 app store, hoping to give users a good experience while making it easy to reuse existing templates.

There are already many charts in the stable repository, but they are not particularly complete, and many applications could still be added or enhanced. In our practical experience, everything can be charted: whether it is a distributed database cluster or a parallel computing framework, it can be deployed and managed on K8S in this form.

Another point is that Helm is pluggable. Helm's plugins include helm-template, helm-github, and so on.

For example, when you run a plugin command, Helm calls the plugin to do the extension work. There is no official plugin repository yet, but some capabilities are already available. Essentially, Helm hands the release information, the chart information, and the Tiller connection information over to the plugin for processing. Helm itself does not care how the plugin is implemented; as long as it can handle the incoming parameters, it can do its own processing.

The benefits of Helm are roughly as follows:
• Use existing charts for rapid deployment and experimentation
• Create custom charts that are easily shared among teams
• Easy management of application life cycles
• Easy management and reuse of applications
• The K8S cluster becomes an application publishing and collaboration hub

2. Operator

Let's talk about Operators next. Why talk about the Operator? The Operator is not actually a tool but an idea, and it exists to solve a problem. What problem? When we manage applications, we encounter both stateless and stateful ones. Managing stateless applications is relatively simple, but stateful applications are more complex. In the stable repository of Helm charts, many database charts are actually single-node, because distributed databases are much more troublesome to handle.

The idea of the Operator is to encode domain knowledge into software that manages complex applications. Stateful applications, for example, are all different and may require specialized knowledge to handle; different database services have different ways of scaling out, scaling in, and backing up. Can we use the convenient features of K8S to simplify these complex things? That is what the Operator aims to do.

For stateless applications, scaling up is relatively simple: just increase the replica count.

The Deployment or ReplicaSet controller then looks at the current state and migrates it toward the target state. For stateful applications, we often have to consider many complicated things, including upgrades, configuration updates, backups, disaster recovery, and scale adjustments. Sometimes that means refreshing the whole configuration, and maybe even restarting some services.

For example, ZooKeeper before 3.5 could not update cluster membership dynamically; scaling it out is very troublesome, and the whole cluster may need a round of restarts. Some databases are more convenient and only need to register the new member with the master. So each service has its own characteristics.

Take etcd as an example; it is the main storage in K8S. If you scale it up, you need to add the new node's connection information to the cluster so that it can obtain the configuration of the existing members, and then start the new etcd node with that cluster information.

What if there were an etcd Operator? The Operator pattern is something CoreOS has been advocating, and CoreOS has contributed several open-source Operators to the community, including one for etcd. So how do you scale out an etcd cluster in this situation?

First, the etcd Operator can itself be deployed to K8S as a Deployment. Once the Operator is running, deploying an etcd cluster becomes very convenient, because you no longer need to manage the cluster's configuration yourself: you just declare how many nodes you need and which etcd version you want by creating a custom resource, and the Operator watches that declaration and creates the configuration for you.

$ kubectl create -f etcd-cluster.yaml

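The custom resource looks roughly like this (modeled on the etcd-operator examples; the exact apiVersion depends on the operator version):

apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
  name: "example-etcd-cluster"
spec:
  size: 3
  version: "3.2.13"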

Scaling out is also very simple: just update the size (for example from 3 to 5) and apply it. The Operator watches the change to this custom resource and makes the corresponding update.

$ kubectl apply -f upgrade-example.yaml

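The applied file simply declares the new desired state, for example (a sketch):

apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
  name: "example-etcd-cluster"
spec:
  size: 5          # previously 3; the operator adds the new members
  version: "3.2.13"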

This effectively hands over to the Operator all the work on the cluster that operations staff previously had to do by hand. How is it done? It relies on an extensible K8S API: the CRD, or CustomResourceDefinition (formerly called ThirdPartyResource).

Once deployed, the etcd Operator manages and maintains the target application's state through the Kubernetes API. In essence this is the Controller pattern in K8S: a controller manages its resources by watching or checking their desired state, comparing it with the current state, and, if there are differences, making the corresponding updates.

Kubernetes Controller pattern:

In the etcd case, when the etcd Operator starts, it registers a custom resource called EtcdCluster and watches it for changes to the application. For example, when you declare an update, it generates the corresponding events, performs the corresponding updates, and keeps your etcd cluster in the declared state.

Beyond etcd, community projects such as the Prometheus Operator can help you manage stateful applications in this same convenient form.

It is worth mentioning that Rancher 2.0 widely adopts the Kubernetes-native Controller pattern to manage application workloads and even K8S clusters themselves; it is itself a Kubernetes operator.

3. Comparison of Helm and Operator

Now that we have covered both, let's compare them.

The Operator is essentially a tool for providing stateful services in specific scenarios, or for simplifying the operation and maintenance of complex applications. Helm, on the other hand, is a more general tool, and the idea behind it is simple: template your K8S resources so they are easy to share, and then reuse them with different configurations.

In fact, most of what an Operator can do can also be done with Helm. Monitoring and updating an etcd cluster's state with an Operator could also be achieved with a custom chart, it would just need some more complex handling. For example, before etcd is up you might need init containers to update the configuration and check the state, then bring up the node with the corresponding information; on deletion, you could add post-delete hooks to do some cleanup (see the sketch below). So Helm is the more general tool. The two can even be combined, such as the etcd-operator chart in the stable repository.
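
As an illustration, such cleanup could be attached to the chart through Helm's hook annotations, roughly like this (a minimal sketch; the Job itself is hypothetical):

apiVersion: batch/v1
kind: Job
metadata:
  name: "{{ .Release.Name }}-cleanup"
  annotations:
    "helm.sh/hook": post-delete
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: cleanup
          image: busybox
          command: ["sh", "-c", "echo cleaning up etcd data"]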

In my personal understanding, on top of the behemoth that is K8S, both were born from simple but natural ideas: Helm is about separating configuration from templates, and the Operator is about automating the management of complex applications.
