[Posts] Kubernetes and Helm: Deploying Multiple Applications with the Same Chart

 

With the k8s cluster built, we were ready to migrate our applications from Docker Swarm to k8s. But every application needs its own yaml manifests — not just a deployment.yaml but a service.yaml as well — and since many applications have nearly identical configurations, this tedious work was a bit off-putting.

Is there a way around this problem in k8s? We found a savior: Helm, the package manager for k8s applications. What follows is our hands-on experience.

First, install Helm on the k8s master node; a single command line takes care of it.

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

Next, create a chart (a chart is Helm's package format):

helm create cnblogs-chart

Note: we plan to deploy a number of different applications based on this one chart.

helm create generates a folder; let's look at its contents:

cnblogs-chart
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
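For reference, the generated Chart.yaml looks roughly like the following sketch. The description and type are the `helm create` defaults; the version and appVersion fields shown here are illustrative (they correspond to the CHART and APP VERSION columns that `helm ls` prints later):

```yaml
# Chart.yaml generated by `helm create cnblogs-chart` (sketch).
# apiVersion: v2 marks this as a Helm 3 chart.
apiVersion: v2
name: cnblogs-chart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0      # chart version, bumped when the chart itself changes
appVersion: "1"     # version of the application being deployed
```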

For details on what each of these files is for, see the cnblogs post "Kubernetes in practice: the Helm example yaml files explained".

Next we modify these configuration files in the chart to fit our deployment scenario. Because we want to deploy many applications with a single chart, we need to minimize repetition, so the configuration relies heavily on convention. Suppose the application we are deploying is named cache-api; then its Helm release name is also cache-api, the Docker image name is cache-api, the Deployment and Service names are cache-api, and the ConfigMap name is cache-api-appsettings.

Modify the templates (configuration shared by all applications)

1) Modify deployment.yaml

  • Change metadata.name to {{ .Release.Name }}
  • Change containers.name to {{ .Release.Name }}
  • Change containers.image to "{{ .Release.Name }}:{{ .Values.image.version }}"
  • Add containers.workingDir to set the container working directory
  • Add containers.command to set the container startup command
  • Add containers.env for environment variable configuration
  • Change the matchLabels and labels values to {{ .Release.Name }}
  • Add a volume backed by a configMap so the application can read appsettings.Production.json
metadata:
  name: {{ .Release.Name }}
  labels:
    name: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Release.Name }}:{{ .Values.image.version }}"
          workingDir: /app
          command:
            - sh
            - run.sh
          env:
            - name: TZ
              value: "Asia/Shanghai"
          volumeMounts:
            - name: appsettings
              mountPath: /app/appsettings.Production.json
              subPath: appsettings.Production.json
              readOnly: true
      volumes:
        - name: appsettings
          configMap:
            name: "{{ .Release.Name }}-appsettings"
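The deployment template mounts a ConfigMap named after the release, so that ConfigMap must exist in the namespace before the pod starts. A minimal sketch for the cache-api application could look like this (the JSON content is a hypothetical placeholder; the real file would hold the application's production settings):

```yaml
# ConfigMap that the chart's volume expects for release cache-api.
# The data key must match the subPath used in deployment.yaml.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cache-api-appsettings
data:
  appsettings.Production.json: |
    {
      "Logging": { "LogLevel": { "Default": "Warning" } }
    }
```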

2) Modify service.yaml

By the same convention, the Service is named after the application: name: {{ .Release.Name }}

kind: Service
metadata:
  name: {{ .Release.Name }}
  labels:
    name: {{ .Release.Name }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: {{ .Release.Name }}
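To make the naming convention concrete, here is roughly what this template renders to for a release called cache-api, assuming the default values.yaml below (ClusterIP, port 80):

```yaml
# Rendered Service for `helm install cache-api ...` (sketch).
kind: Service
metadata:
  name: cache-api
  labels:
    name: cache-api
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: cache-api
```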

Modify values.yaml (shared default configuration)

  • Change image.pullPolicy to Always.
  • Add image.version and set it to latest.
  • Add the secret name under imagePullSecrets.
  • Set serviceAccount.create to false.
  • Set CPU and memory limits in resources.limits and resources.requests.
replicaCount: 1

image:
  repository: {}
  version: latest
  pullPolicy: Always

imagePullSecrets:
  - name: regcred

nameOverride: ""
fullnameOverride: ""

serviceAccount:
  create: false
  name:

podSecurityContext: {}
securityContext: {}

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false

resources:
  limits:
    cpu: 2
    memory: 2G
  requests:
    cpu: 100m
    memory: 64Mi

nodeSelector: {}
tolerations: []
affinity: {}
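Per-application overrides can also be kept in a small values file rather than passed with --set flags. A hypothetical news-web-values.yaml would contain only the fields that differ from the shared defaults:

```yaml
# news-web-values.yaml (hypothetical): only the overrides,
# everything else falls back to the chart's values.yaml.
image:
  version: "1.0.4"
resources:
  limits:
    cpu: 1
```

It would then be applied with `helm install news-web -f news-web-values.yaml cnblogs-chart/`.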

Deploying Applications

1) Verify the configuration

Run the following command to verify that the configuration is correct:

helm install cache-api --set image.version=1.0 --dry-run --debug .

2) Deploy the application

If the configuration validates, deploy the application with the following command:

helm install cache-api --set image.version=1.0 .

Check the deployed release:

helm ls                                            
NAME        NAMESPACE   REVISION    UPDATED                                 STATUS      CHART               APP VERSION
cache-api   production  1           2020-01-22 17:17:30.414863452 +0800 CST deployed    cnblogs-chart-0.1.0 1

3) Deploy multiple applications

We can now deploy multiple applications based on the chart created earlier; simply pass each application's configuration parameters to the helm install command. For example, to deploy the two applications news-web and ing-web, run:

helm install news-web --set image.version=1.0.4,resources.limits.cpu=1 --dry-run --debug cnblogs-chart/
helm install ing-web --set image.version=1.3.11,resources.limits.cpu=1.5 --dry-run --debug cnblogs-chart/


Origin www.cnblogs.com/jinanxiaolaohu/p/12355996.html