K8s-native microservice management tools in practice: Helm v3 (2)

Contents:
Templating and debugging applications, and using charts to publish microservices the way a microservice release workflow requires.
1. Publishing a service with a production Helm template, based on a Dubbo microservice environment.
Template address:
git clone [email protected]:zhaocheng172/helm-dubbo.git
To pull it, please send me your public key first.

3.6 Chart template
The core of Helm is the template: templated Kubernetes manifest files.
A chart template is essentially a Go template, but Helm adds a great deal on top of plain Go templates: custom metadata, plus programming-library-style extensions to the workflow such as conditionals, pipelines, and the like. These make our templates much richer.

In fact, the most critical part of Helm is template rendering. Fields in the YAML that may change frequently can be exposed as variables and overridden on the helm command line, so the YAML is rendered dynamically; the most important piece of this is the values file. What Helm does for us is centralize the management of YAML and render it dynamically, because fields written into YAML at deploy time often need to change later. Before Helm, the usual way to build a reusable template was to mark the frequently changing fields and swap in values with sed: for example, replace the image name to deploy a new application, then apply it. (That image is one you built in advance from compiled code and your own Dockerfile, with the address usually pointing at a Harbor registry.) In that model you may end up writing many substitution commands, which is clearly inflexible, and as the files multiply, maintenance has a real cost. The best way is to write these variable fields into one file that all the YAML can read and render from. That is what Helm does, and it is Helm's core function.
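For contrast, the sed-based flow described above can be sketched like this (the file names and the __IMAGE__ placeholder are made up for illustration; the helm command in the comment assumes the image/imagetag values used later in this article):

```shell
# A generic manifest with a placeholder for the field that changes per release
cat > deployment.tpl.yaml <<'EOF'
spec:
  containers:
  - name: app
    image: __IMAGE__
EOF

# Every new version means another sed substitution per changed field
sed 's|__IMAGE__|nginx:1.17|' deployment.tpl.yaml > deployment.yaml
grep 'image:' deployment.yaml

# With Helm, the same change is just a rendered variable, e.g.:
#   helm install web ./one --set image=nginx --set imagetag=1.17
```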

1. Templates
Given the template, how do we feed our configuration into it? Through the values file. These two parts are, in fact, the core functionality of a chart.
When we deploy an application, such as a microservice release, we do it as a chart. The chart can come from the internet, be shared with you by others, or be produced by yourself. The core of the chart is the templates. They are Go templates, rendered with the Go template engine, but Helm adds things on top of Go templates to make them more flexible, such as conditional statements.
Next, deploy an nginx application to get familiar with templates and control syntax. First delete all the files in the templates directory; we will create our own template files here, so the template can meet more requirements.

For example, create a chart; it has four parts in total. The templates directory holds the YAML we need to deploy an application, such as the deployment, service, and ingress; the fields we change frequently are written as variables, and their values are defined in values.yaml. When helm install creates the release, those values are rendered into our templates. There is also _helpers.tpl, which holds templates that the deployment, service, and so on will use; for example, common fields they all need can be named templates placed in _helpers.tpl. NOTES.txt is the hint printed after deploying an application, such as the access address. And there is a tests directory, for example to test whether an application you deployed is working properly.

[root@k8s-master1 one_chart]# helm create one
Creating one
[root@k8s-master1 one_chart]# ls
one
[root@k8s-master1 one_chart]# cd one/
[root@k8s-master1 one]# ls
charts  Chart.yaml  templates  values.yaml
[root@k8s-master1 one]# tree .
.
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

First prepare two YAML files; then we will render the frequently changed fields in them:

[root@k8s-master1 templates]# kubectl create deployment application --image=nginx --dry-run -o yaml > deployment.yaml
[root@k8s-master1 templates]# kubectl expose deployment application --port=80 --target-port=80 --dry-run -o yaml > service.yaml

Then publish our first release and test whether it can be accessed normally.
We normally publish such a service with apply -f; now try publishing with Helm instead. The immediate effect is the same, but the core function of Helm is that it renders our variables, making microservice releases more flexible: through the template we can modify the image address, the name of the published service, the number of replicas, and so on, pass them in dynamically, and quickly release multiple sets of microservices. Put simply, one common template deploys many similar applications.

[root@k8s-master1 one_chart]# helm install application one/
NAME: application
LAST DEPLOYED: Wed Dec 18 11:44:21 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
[root@k8s-master1 templates]# kubectl get pod,svc
NAME                                          READY   STATUS    RESTARTS   AGE
pod/application-6c45f48b87-2gl95              1/1     Running   0          10s
NAME                  TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/application   ClusterIP   10.0.0.221   <none>        80/TCP    9s
[root@k8s-master1 templates]# curl -I 10.0.0.221
HTTP/1.1 200 OK
Server: nginx/1.17.6
Date: Wed, 18 Dec 2019 03:35:22 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 19 Nov 2019 12:50:08 GMT
Connection: keep-alive
ETag: "5dd3e500-264"
Accept-Ranges: bytes

That is deploying one chart. Using this template, we can deploy many similar applications; the fields you need to vary first are the name, the replica count, and the image name.

2. Debugging
Helm also provides the --dry-run and --debug parameters to help you verify the correctness of a template. Passing them to helm install substitutes the corresponding values, renders the resource manifests, and prints them without actually deploying a release.
For example, debug the chart package we created above:
helm install pod-nodejs-tools --dry-run /root/one

3. Built-in objects
{{ .Release.Name }} is a built-in variable: its value is the release name we supply when installing the application, passed in at deploy time, so resources can use the release name directly.
{{ .Chart.Name }} is also a built-in Helm variable; its value is taken from the Chart.yaml generated when we created the chart. For the project name, though, it is common to keep things uniform by defining your own variable, i.e. {{ .Values.name }} defined in values.yaml.
After writing the variables, we can render the output to check whether they resolve correctly.
In the --dry-run example further down, pod-base-common is what {{ .Release.Name }} becomes; --dry-run renders without installing, and one/ is my chart directory.
These are the commonly used Release built-in variables; the Chart variables can be seen directly in the chart package.

[root@k8s-master1 one]# cat templates/deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    chart: {{ .Chart.Name }}
    app: {{ .Chart.Name }}
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.label }}
  template:
    metadata:
      labels:
        app: {{ .Values.label }}
    spec:
      containers:
      - image: {{ .Values.image }}:{{ .Values.imagetag }}
        name: {{ .Release.Name }}

4. Values: custom variables
The Values object supplies values to chart templates. Its values come from four sources:
the values.yaml file in the chart package
the values.yaml file of a parent chart package
a custom values file passed to helm install or helm upgrade with -f or --values
individual values passed with the --set parameter
[root@k8s-master1 one]# helm install pod-mapper-service --set replicas=1 ../one/
The --set value takes priority over the one in values.yaml, so one replica is created instead of two.
In general: a chart's values.yaml provides defaults, a user-supplied values file can override them, and --set parameters can override that file in turn.

[root@k8s-master1 one]# cat values.yaml 
replicas: 2
image: nginx
imagetag: 1.16
label: nginx
[root@k8s-master1 one_chart]# helm install pod-base-common --dry-run one/
[root@k8s-master1 ~]# helm install pod-css-commons /root/one_chart/one/
NAME: pod-css-commons
LAST DEPLOYED: Wed Dec 18 14:41:02 2019
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None

After installing, view the rendered result with helm get manifest plus the release name; helm ls lists the releases Helm has created.

[root@k8s-master1 ~]# helm get manifest pod-css-commons
[root@k8s-master1 one]# helm ls
NAME             NAMESPACE  REVISION  UPDATED                                  STATUS    CHART              APP VERSION
pod-css-commons  default    1         2019-12-18 14:41:02.800570406 +0800 CST  deployed  application-0.1.0  1.16.0

For example, suppose we test again: we updated the code and built a new image with a Dockerfile, so now we need to roll out the new image. How?
Simply set the new image address in the values we defined; for this demonstration it is nginx 1.17.
Then run helm upgrade to replace it with the new image:

[root@k8s-master1 one]# helm upgrade pod-css-commons ../one/
Release "pod-css-commons" has been upgraded. Happy Helming!
NAME: pod-css-commons
LAST DEPLOYED: Wed Dec 18 15:00:14 2019
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None

You can view the image address after rendering with get manifest. To keep project names consistent, microservice releases usually define the name themselves with {{ .Values.name }}.
[root@k8s-master1 one]# helm get manifest pod-css-commons
For example, to publish another microservice now,
what we generally replace is the service name and the image:
modify the image address and project name in values, test with --dry-run, and if there are no problems, release it directly.

[root@k8s-master1 one]# cat values.yaml 
replicas: 2
image: nginx
imagetag: 1.15
name: pod-base-user-service
app: pod-base-user-service
port: 80
targetPort: 80
[root@k8s-master1 one]# helm install pod-base-user-service  ../one/
[root@k8s-master1 one]# kubectl get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-6f54fc894d-dbvmk   1/1     Running   0          5d3h
pod-base-ec-service-664987f9c6-5f9vl      1/1     Running   0          7m18s
pod-base-ec-service-664987f9c6-mw4jb      1/1     Running   0          7m18s
pod-base-user-service-6b7d9d47b8-qqcbp    1/1     Running   0          7s
pod-base-user-service-6b7d9d47b8-r5f96    1/1     Running   0          8s

5. Pipelines and functions

Values and built-in objects pass data into the template engine for rendering, but the engine also supports post-processing that data: you don't have to use a value from values exactly as-is; the template can transform it. For example, you can capitalize the first letter of a value you fetch, or wrap a string value in double quotes, turning the value into a quoted string. This uses functions, which the template engine supports. They are not needed everywhere, but they do come up: for instance, in the deployment here, fetching a label value and adding double quotes, since some YAML values must be double-quoted. Achieving this is quite simple: just add the quote function.

labels:
        app: {{ quote .Values.name }}

The test result now has double quotes added. This is the quote function doing the post-processing for us: whenever specific values need double quotes, the quote function handles it directly.

[root@k8s-master1 one]# helm install pod-tools-service --dry-run ../one/
 labels:
        app: "pod-mapper-service"

Another example is passing a particular value directly rather than through values. Say I define an env field that values.yaml does not set by default; {{ default "xxxx" .Values.env }} supplies a default value, so when the value does not need to change, it does not need to be defined:

spec:
      nodeSelector:
        team: {{ .Values.team }}
        env: {{ default "JAVA_OPTS" .Values.env }}

So if values defines this value, the value from values is used; if not, the default is used instead.
The same goes for indentation: YAML itself defines hierarchy through indentation, so sometimes we use functions to render output at the level our hierarchy needs.
Other functions:

Indent:     {{ .Values.resources | indent 12 }}
Uppercase:  {{ upper .Values.resources }}
Title case: {{ title .Values.resources }}
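Functions can also be chained with pipes. A small sketch, reusing the name and team values defined elsewhere in this chart (the label keys themselves are illustrative):

```yaml
metadata:
  labels:
    # quote wraps the rendered value in double quotes
    app: {{ .Values.name | quote }}
    # chained pipes: uppercase the value first, then quote it
    team: {{ .Values.team | upper | quote }}
```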

6. Flow control

Flow control gives templates the ability to handle more complex data-processing logic.
The Helm template language provides the following flow-control statements:
if/else conditional blocks
with, to specify a scope
range loop blocks
Flow control, like indentation, comes up constantly; if/else is what handles the more complex logic.
Define the parameter in values:
test: "123"
Then add the condition in templates/deployment.yaml: if test equals "123", output test: a; otherwise output test: b. Where you apply this depends on your actual YAML and scenario, though this particular pattern does not come up that often.

spec:
      nodeSelector:
        team: {{ .Values.team }}
        env: {{ default "JAVA_OPTS" .Values.env }}
        {{ if eq .Values.test "123" }}
        test: a
        {{ else }}
        test: b
        {{ end }}

The output above keeps the whitespace where the {{ }} directives were; adding a dash inside the braces, {{- }}, trims it away.
eq is the equality operator; ne, lt, gt, and, or, and similar operators are also supported.
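A brief illustrative sketch of the other operators (the fields emitted here are made up, not part of the chart): ne is not-equal, and or is true when either value is set:

```yaml
{{- if ne .Values.test "123" }}
test: b
{{- end }}
{{- if or .Values.env .Values.team }}
grouped: "true"
{{- end }}
```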

{{- if eq .Values.test "123" }}
        test: a
        {{- else }}
        test: b
        {{- end }}
      containers:

A condition checks whether an expression is true. The following values evaluate to false:
the boolean false
the number zero
an empty string
nil (empty or null)
an empty collection (map, slice, tuple, dict, array)
Apart from these cases, everything else is true.
For example, if the value set in values is false, the condition is not met;
if it is 0, that also evaluates to false;
and an empty string or an empty collection is false as well.
Set test to 0 in values and try it: the output is test: b, which confirms the condition was false.

test: 0
test: ""
[root@k8s-master1 one]# helm install pod-mapper-service --dry-run ../one/
 spec:
      nodeSelector:
        team: team1
        env: JAVA_OPTS
        test: b
      containers:
      - image: nginx:1.15
        name: pod-mapper-service

Now let's use the structure of the official Helm template's original values to create an application; it supports a structured, nested format. For example, the image can define several attributes under one key, such as the repository address, the tag, and the pull policy:

image:
    repository: nginx
    tag: 1.17
    pullPolicy: IfNotPresent
[root@k8s-master1 one]# cat templates/deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      nodeSelector:
        team: {{ .Values.team }}
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Values.name }}
[root@k8s-master1 one]# cat templates/service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  ports:
  - port: {{ .Values.port }}
    protocol: TCP
    targetPort: {{ .Values.port }}
  selector:
    app: {{ .Values.name }}
[root@k8s-master1 one]# vim values.yaml
app: pod-base-jss
name: pod-base-jss
replicaCount: 3

image:
  repository: nginx
  tag: 1.17
  pullPolicy: IfNotPresent

team: team3

[root@k8s-master1 one]# helm install pod-base-jss  ../one/
[root@k8s-master1 one]# helm ls
NAME            NAMESPACE   REVISION    UPDATED                                 STATUS      CHART               APP VERSION
pod-base-jss    default     1           2019-12-19 13:57:49.881954736 +0800 CST deployed    application-0.1.0

Now add a resource limit.
We want a switch that defaults to off: if it is false, the pod gets no resource limits; if true, the limits are applied. The simplest check is {{ if .Values.resources }}:
if .Values.resources is set (truthy), the resources block is added to the rendered manifest, giving the pod its limits; if it is false or unset, no limits are applied.

[root@k8s-master1 one]# cat templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      nodeSelector:
        team: {{ .Values.team }}
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Values.name }}
        {{- if .Values.resources }}
        resources:
          limits:
            cpu: {{ .Values.resources.limits.cpu }}
            memory: {{ .Values.resources.limits.memory}}
          requests:
            cpu: {{ .Values.resources.requests.cpu }}
            memory: {{ .Values.resources.requests.memory }}
        {{- else }}
        resources: {}
        {{- end }}

The template references the variables below. If you don't need limits, resources: can be set to 0, "", or false, and the block under it will not be rendered; it works like a switch, which makes our applications easier to manage.

[root@k8s-master1 one]# cat values.yaml
resources: 
     limits:
       cpu: 100m
       memory: 128Mi
     requests:
       cpu: 100m
       memory: 128Mi

View the test result:

[root@k8s-master1 one]# helm upgrade pod-base-jss --dry-run ../one/
Release "pod-base-jss" has been upgraded. Happy Helming!
NAME: pod-base-jss
LAST DEPLOYED: Thu Dec 19 14:36:37 2019
NAMESPACE: default
STATUS: pending-upgrade
REVISION: 2
TEST SUITE: None
HOOKS:
MANIFEST:

---
# Source: application/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: pod-base-jss
  name: pod-base-jss
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: pod-base-jss

---
# Source: application/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: pod-base-jss
  name: pod-base-jss
spec:
  replicas: 3
  selector:
    matchLabels:
      app: pod-base-jss
  template:
    metadata:
      labels:
        app: pod-base-jss
    spec:
      nodeSelector:
        team: team3
      containers:
      - image: nginx:1.17
        name: pod-base-jss
        resources:
          limits:
            cpu: 100m
            memory: 128Mi
          requests:
            cpu: 100m
            memory: 128Mi

Alternatively, add an enabled key directly in values; false means off. When rendering, the template checks enabled first:

resources:
      enabled: false
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi

Then {{ if .Values.resources.enabled }} acts on the switch defined in values: true renders the block, false skips it.

[root@k8s-master1 one]# cat templates/deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      nodeSelector:
        team: {{ .Values.team }}
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Values.name }}
        {{- if .Values.resources.enabled }}
        resources:
          limits:
            cpu: {{ .Values.resources.limits.cpu }}
            memory: {{ .Values.resources.limits.memory}}
          requests:
            cpu: {{ .Values.resources.requests.cpu }}
            memory: {{ .Values.resources.requests.memory }}
        {{- else }}
        resources: {}
        {{- end }}

There are many requirements like this: some microservices don't need an ingress created and some do; or, instead of using an ingress, the cluster's external load balancer can bring traffic into the in-cluster services directly, with ClusterIP services fronted by a few nginx load balancers responsible for forwarding internal traffic, exposed through an SLB. Let's implement both requirements.
As with the earlier template, put an enabled switch in values: set it to false to skip creating the ingress rule, true to create it.
First handle the service, again with flow control; to skip creating the service,
set its enabled switch in values to false.

[root@k8s-master1 one]# cat values.yaml 
app: pod-base-tools
name: pod-base-tools
replicaCount: 3

image:
  repository: nginx
  tag: 1.17
  pullPolicy: IfNotPresent

serviceAccount:
  create: true
  name:

service:
  enabled: false
  port: 80
  targetPort: 80

ingress:
  enabled: false
  annotations: {}
  hosts:
    - host: chart-example.local
      paths: []
  tls: []

resources: 
      enabled: true
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi

nodeSelector:
  team: team2
[root@k8s-master1 templates]# cat service.yaml 
{{- if .Values.service.enabled }}
apiVersion: v1
kind: Service
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  ports:
  - port: {{ .Values.service.port }}
    protocol: TCP
    targetPort: {{ .Values.service.targetPort }}
  selector:
    app: {{ .Values.name }}
{{ end }}

[root@k8s-master1 templates]# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      nodeSelector:
        team: {{ .Values.nodeSelector.team }}
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Values.name }}
        {{- if .Values.resources.enabled }}
        resources:
          limits:
            cpu: {{ .Values.resources.limits.cpu }}
            memory: {{ .Values.resources.limits.memory}}
          requests:
            cpu: {{ .Values.resources.requests.cpu }}
            memory: {{ .Values.resources.requests.memory }}
        {{- else }}
        resources: {}
        {{- end }}

After installing, the service is not created: the if condition I set evaluates to false, so the Service is skipped. To create the service, just set the switch to true.
[root@k8s-master1 templates]# helm install pod-base-tools --dry-run ../../one/

Now create an ingress, also with a switch; the method is exactly the same.

[root@k8s-master1 one]# cat values.yaml 
app: pod-base-user
name: pod-base-user
replicaCount: 3

image:
  repository: nginx
  tag: 1.17
  pullPolicy: IfNotPresent

serviceAccount:
  create: true
  name:

service:
  enabled: false
  port: 80
  targetPort: 80

ingress:
  enabled: true
  annotations: {}
  hosts:
    - host: chart-example.local
      paths: []
  tls: []

resources: 
      enabled: true
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 128Mi

nodeSelector:
  team: team2
[root@k8s-master1 templates]# cat ingress.yaml 
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80
{{ end }}
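The ingress above hardcodes its rule. As a sketch only (not the chart's actual template), the hosts list already present in values.yaml could drive the rules with range, which is covered later; note that $ reaches back to the root scope from inside the loop:

```yaml
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Values.name }}
spec:
  rules:
  {{- range .Values.ingress.hosts }}
  - host: {{ .host }}
    http:
      paths:
      - path: /
        backend:
          serviceName: {{ $.Values.name }}
          servicePort: 80
  {{- end }}
{{- end }}
```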

[root@k8s-master1 templates]# helm install pod-base-user  ../../one/
[root@k8s-master1 templates]# kubectl get ing
NAME                              HOSTS   ADDRESS   PORTS   AGE
ingress.extensions/test-ingress   *                 80      39s

with
with: controls variable scope.
In {{ .Release.xxx }} or {{ .Values.xxx }}, the leading . is a reference to the current scope: .Values tells the template to look up the Values object in the current scope. The with statement lets you change what that scope points to; its syntax is similar to a simple if statement.
One small note: the . we prefix variable references with means "look up from the object the scope points to", following the structure the template engine builds.

Now apply this to the nodeSelector value. In practice, scheduling nodes are usually grouped, so that microservices are distributed across node groups in a way that is easier to manage.
We can use the if syntax from before to make the decision,
configured as a switch; or use with; or use the toYaml function.

spec:
      {{- if .Values.nodeSelector.enabled }}
      nodeSelector:
        team: {{ .Values.nodeSelector.team }}
      {{- else }}
      {{- end }}
[root@k8s-master1 one]# cat values.yaml
nodeSelector:
  enabled: true
  team: team2
[root@k8s-master1 templates]# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ .Values.name }}
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      {{- if .Values.nodeSelector.enabled }}
      nodeSelector:
        team: {{ .Values.nodeSelector.team }}
      {{- else }}
      {{- end }}
      containers:
      - image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
        name: {{ .Values.name }}
        {{- if .Values.resources.enabled }}
        resources:
          limits:
            cpu: {{ .Values.resources.limits.cpu }}
            memory: {{ .Values.resources.limits.memory}}
          requests:
            cpu: {{ .Values.resources.requests.cpu }}
            memory: {{ .Values.resources.requests.memory }}
        {{- else }}
        resources: {}
        {{- end }}

Alternatively, drop the switch and read our parameters directly with with:

[root@k8s-master1 one]# tail -4  values.yaml 

nodeSelector:
  team: team2

In the deployment, with scopes into .Values.nodeSelector, and .team reads the corresponding value:

spec:
      {{- with .Values.nodeSelector }}
      nodeSelector:
        team: {{ .team }}
      {{- else }}
      {{- end }}

Using toYaml:

 spec:
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}

Here with rescopes . to .Values.nodeSelector, and toYaml converts that value to YAML.
The . after toYaml is the current scope, i.e. .Values.nodeSelector; nindent re-indents the result.

range
In the Helm template language, the range keyword is used for loops.
range suits emitting a list of similar elements; with plus toYaml suits more structured hierarchies, so something like env is a good fit for range.
Add a list variable to the values.yaml file:

cat values.yaml 
test:
  - 1
  - 2
  - 3

Loop over the list and print it:

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}
data:
  test: |
  {{- range .Values.test }}
    {{ . }}
  {{- end }}

Note that inside the loop we use {{ . }}. Within the loop body, the scope is the current iteration, so . refers to the current element being read.
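range can also iterate a map, binding each key and value to variables. A sketch against the ingress.annotations map already in values.yaml (the placement here is illustrative):

```yaml
metadata:
  annotations:
    {{- range $key, $value := .Values.ingress.annotations }}
    {{ $key }}: {{ $value | quote }}
    {{- end }}
```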

7. Variables
Templates don't use variables in many places, but here we will see how they simplify code and make better use of with and range. Because with changes the scope, other variables defined outside it cannot be referenced directly inside. There are two ways to reference a full value inside with: assign it to a Helm template variable first, or use $ to reach the root scope.
Test it first without adding $:

spec:
      {{- with .Values.nodeSelector }}
      nodeSelector:
        app: {{ .Values.name }}

The execution result is an error:

[root@k8s-master1 templates]# helm install pod-base-user --dry-run ../../one/
Error: template: application/templates/deployment.yaml:19:23: executing "application/templates/deployment.yaml" at <.Values.name>: nil pointer evaluating interface {}.name

Now add $ (i.e. {{ $.Values.name }}) and it outputs normally:

 spec:
      nodeSelector:
        app: pod-base-user
        team: team2

The other form, variable assignment, produces the same output:

 spec:
      {{- $releaseName := .Release.Name -}}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        app: {{ $releaseName }}

The with block above adds {{- $releaseName := .Release.Name -}}: $releaseName is a variable referencing the object that follows the assignment operator :=. Its form is $name; inside the with block, $releaseName still points to .Release.Name.

Another common case: when we define a microservice or Java project we set the Java heap size, a frequently used option. How do we add something like this? There are many methods; one is the toYaml approach.
Define it in values:

[root@k8s-master1 one]# tail -4 values.yaml 

env:
- name: JAVA_OPTS
  value: -Xmx1024m -Xms1014m

The method from before prints it just the same:

{{- with .Values.env }}
        env:
        {{- toYaml . | nindent 8 }}
        {{- end }}
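With the values above, that block should render to something like the following inside the container spec (the eight-space indentation coming from nindent 8):

```yaml
        env:
        - name: JAVA_OPTS
          value: -Xmx1024m -Xms1014m
```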

8. Named templates
A named template is defined with define and included with template. By convention, named templates live in a file in the templates directory whose name starts with an underscore (_helpers.tpl), which holds a chart's common templates. When handling one or two YAML fields isn't worth toYaml or an if/else switch, a named template is a good fit.
For example, resource names that must be identical everywhere: define a named template once, write the logic in one place, and have all the YAML reference it so the names stay consistent, such as the labels a controller's selector must use to match its pods. That logic goes into _helpers.tpl, the actual home of shared templates; define it with define and bring it in with template:

cat _helpers.tpl
{{- define "demo.fullname" -}}
{{- .Chart.Name -}}-{{ .Release.Name }}
{{- end -}}

cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "demo.fullname" . }}
...

The template action is one template including another. However, the output of template cannot be passed through Go template pipelines. To solve this, Helm adds the include function:

cat _helpers.tpl
{{- define "demo.labels" -}}
app: {{ template "demo.fullname" . }}
chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
release: "{{ .Release.Name }}"
{{- end -}}
cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "demo.fullname" . }}
  labels:
    {{- include "demo.labels" . | nindent 4 }}
...

The include above pulls in the named template demo.labels, passes the current scope (.) to it, and pipes the rendered output to the nindent function.
3.7 Developing your own chart: a Dubbo microservice application as an example

  1. Create the template scaffold

    helm create dubbo
  2. Modify Chart.yaml and values.yaml, and add the common variables

  3. Create the deployment YAML files (with the images they need) in the templates directory, and replace the constantly changing fields in the YAML with variable references
    The code is in my git repository; to use it, please send me your public key
    git clone [email protected]:zhaocheng172/helm-dubbo.git


Origin blog.51cto.com/14143894/2461024