Detailed explanation and examples of Kustomize: how to manage multi-version YAML files in Kubernetes

Common scenario

Let's say you use a particular vendor's Helm chart. It mostly fits your needs, but not perfectly, so it requires some customization. You fork the chart, make your configuration changes, and apply them to your cluster. A few months later, the vendor releases a new version of the chart that includes important features you need. To use those new features, you have to fork the new chart and customize it all over again. Repeatedly re-forking and re-customizing these Helm charts is time-consuming and increases the risk of configuration errors that can affect the stability of your products and services.

[Figure: a continuous delivery pipeline triggered by a git event, in which Helm generates the YAML files and Kustomize patches them with environment-specific values]

The diagram above shows a common use case: a continuous delivery pipeline that starts with a git event. The event might be a push, a merge, or the creation of a new branch. In this case, Helm is used to generate the YAML files, which Kustomize then patches with environment-specific values depending on the event.
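
A minimal sketch of that pipeline step might look like the following (the release name, chart path, and overlay directory are illustrative assumptions, not from a specific pipeline):

  # Render the vendor's chart into plain YAML inside the base...
  helm template my-release ./vendor-chart > base/rendered.yaml

  # ...then build the environment overlay and apply the patched result
  kustomize build overlays/staging | kubectl apply -f -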

What is Kustomize?

Kustomize is a configuration management solution. Rather than modifying the original YAML files, it selectively overrides them with patches to generate new configurations. This makes it very convenient to maintain multiple versions of your YAML files, especially when a project is large or has to handle many different cases.

Kustomize is a powerful configuration management approach for most organizations that build products from a combination of in-house and common off-the-shelf applications. Over-customizing configuration files not only reduces reusability, but also makes upgrading difficult and painful.

With Kustomize, your team can easily pick up base-file updates for underlying components while keeping custom overrides intact. Patch overlays also add dimensions to your configuration: they isolate configuration errors for troubleshooting, and they create a framework of configuration layers from the most general to the most specific.

To recap, Kustomize achieves reusability through the following layered configuration management system:

  • Base layer - specifies the most common resources
  • Patch layer - specifies use-case-specific resources

Kustomize capabilities

As mentioned above, Kustomize is commonly used in conjunction with Helm, and it has been embedded in kubectl since Kubernetes 1.14, released in March 2019 (invoked via the apply -k flag).
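
Because it is built in, no extra tooling is required; for example:

  # Render the manifests without applying them (kubectl v1.14+)
  kubectl kustomize overlays/dev

  # Render and apply in one step
  kubectl apply -k overlays/dev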

Kustomize provides the following valuable features:

  • Kubectl native - no need to install or manage it as a separate dependency
  • Plain YAML - no complex templating language
  • Declarative - purely declarative (just like kubectl)
  • Multiple configurations - manage any number of different configurations

Before diving into Kustomize's features, let's compare Kustomize with native Helm and native kubectl to better highlight the differentiated capabilities it offers.


| Feature | Kustomize | Native Helm | Native Kubectl |
| --- | --- | --- | --- |
| Templating | No templates | Complex templating | No templates |
| Setup | No separate setup required | Requires setup | No separate setup required |
| Configuration | Manages multiple configurations with a single base file | Manages multiple configurations with a single base file | Needs a separate file for each configuration |
| Ease of use | Simple | More complex | Simple |

Benefits of using Kustomize

1. Reusability

Kustomize lets you reuse one base file across all of your environments (dev, staging, production) and then overlay unique specifications on top of it for each environment.

2. Fast generation

Since Kustomize has no templating language, you can quickly declare your configurations in standard YAML.

3. Easier debugging

When something goes wrong, the YAML itself is easy to understand and debug. Combined with the fact that your configuration is isolated in patches, you can quickly triangulate the root cause of a problem: simply compare the misbehaving variant against your base configuration and any other variants that are running.
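
As a concrete sketch of that comparison (using the directory layout from the example below), diffing a variant against the base is a one-liner:

  # Show exactly which fields the dev overlay changes relative to the base
  diff <(kustomize build base) <(kustomize build overlays/dev)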

Example

Let's walk through how Kustomize works using a deployment scenario that involves three different environments: dev, staging, and production. In this example we will use Service, Deployment, and Horizontal Pod Autoscaler (HPA) resources. Each environment will use a different service type:

  • Dev - ClusterIP
  • Staging - NodePort
  • Production - LoadBalancer

Each environment also has different HPA settings. Here is what the directory structure looks like:

  ├── base
  │   ├── deployment.yaml
  │   ├── hpa.yaml
  │   ├── kustomization.yaml
  │   └── service.yaml
  └── overlays
      ├── dev
      │   ├── hpa.yaml
      │   └── kustomization.yaml
      ├── production
      │   ├── hpa.yaml
      │   ├── kustomization.yaml
      │   ├── rollout-replica.yaml
      │   └── service-loadbalancer.yaml
      └── staging
          ├── hpa.yaml
          ├── kustomization.yaml
          └── service-nodeport.yaml

1. Review the base files

The base folder holds the common resources: the standard deployment.yaml, service.yaml, and hpa.yaml configuration files. We'll look at the contents of each in the following sections.

base/deployment.yaml

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: frontend-deployment
  spec:
    selector:
      matchLabels:
        app: frontend-deployment
    template:
      metadata:
        labels:
          app: frontend-deployment
      spec:
        containers:
        - name: app
          image: foo/bar:latest
          ports:
          - name: http
            containerPort: 8080
            protocol: TCP

base/service.yaml

  apiVersion: v1
  kind: Service
  metadata:
    name: frontend-service
  spec:
    ports:
    - name: http
      port: 8080
    selector:
      app: frontend-deployment

base/hpa.yaml

  apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler
  metadata:
    name: frontend-deployment-hpa
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: frontend-deployment
    minReplicas: 1
    maxReplicas: 5
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50

base/kustomization.yaml - the kustomization.yaml file is the most important file in the base folder; it describes the resources you use.

  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization

  resources:
    - service.yaml
    - deployment.yaml
    - hpa.yaml

2. Define the dev patch files

The overlays folder contains the environment-specific overrides. It has three subfolders, one per environment.

dev/kustomization.yaml - this file defines which base configuration to reference and patches it using patchesStrategicMerge, which lets you define partial YAML files and overlay them on top of the base.

  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  bases:
  - ../../base
  patchesStrategicMerge:
  - hpa.yaml

dev/hpa.yaml - this file carries the same resource name as the one in the base folder, which is how Kustomize matches the resource to be patched.

  apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler
  metadata:
    name: frontend-deployment-hpa
  spec:
    minReplicas: 1
    maxReplicas: 2
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 90

If you compare this hpa.yaml with the base/hpa.yaml file, you will notice differences in the minReplicas, maxReplicas, and averageUtilization values; only the fields listed in the patch are overridden, while everything else is inherited from the base.

3. Review the patches

To confirm that the patched configuration is correct before applying it to the cluster, run kustomize build overlays/dev:

  apiVersion: v1
  kind: Service
  metadata:
    name: frontend-service
  spec:
    ports:
    - name: http
      port: 8080
    selector:
      app: frontend-deployment
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: frontend-deployment
  spec:
    selector:
      matchLabels:
        app: frontend-deployment
    template:
      metadata:
        labels:
          app: frontend-deployment
      spec:
        containers:
        - image: foo/bar:latest
          name: app
          ports:
          - containerPort: 8080
            name: http
            protocol: TCP
  ---
  apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler
  metadata:
    name: frontend-deployment-hpa
  spec:
    maxReplicas: 2
    metrics:
    - resource:
        name: cpu
        target:
          averageUtilization: 90
          type: Utilization
      type: Resource
    minReplicas: 1
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: frontend-deployment

4. Apply the patches

Once you've confirmed the patches are correct, apply the settings to the cluster with the kubectl apply -k overlays/dev command:

  kubectl apply -k overlays/dev
  service/frontend-service created
  deployment.apps/frontend-deployment created
  horizontalpodautoscaler.autoscaling/frontend-deployment-hpa created
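
If you want to verify that the dev patch took effect, one quick spot check is:

  # MINPODS/MAXPODS should read 1 and 2, matching dev/hpa.yaml
  kubectl get hpa frontend-deployment-hpa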

With the dev environment handled, we'll demonstrate the production environment next, since in our example it is a superset of staging (in terms of Kubernetes resources).

5. Define the production patch files

production/hpa.yaml - in our production hpa.yaml, let's say we want to allow up to 10 replicas, with new replicas triggered by a resource utilization threshold of 70% average CPU usage. Here is what that looks like:

  apiVersion: autoscaling/v2beta2
  kind: HorizontalPodAutoscaler
  metadata:
    name: frontend-deployment-hpa
  spec:
    minReplicas: 1
    maxReplicas: 10
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70

production/rollout-replica.yaml - our production directory also contains a rollout-replica.yaml file that specifies our replica count and rolling update strategy:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: frontend-deployment
  spec:
    replicas: 10
    strategy:
      rollingUpdate:
        maxSurge: 1
        maxUnavailable: 1
      type: RollingUpdate

production/service-loadbalancer.yaml - we use this file to change the service type to LoadBalancer (whereas in staging/service-nodeport.yaml it is patched to NodePort).

  apiVersion: v1
  kind: Service
  metadata:
    name: frontend-service
  spec:
    type: LoadBalancer
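
The staging variant isn't shown in this walkthrough, but following the same pattern, staging/service-nodeport.yaml would presumably look like this:

  apiVersion: v1
  kind: Service
  metadata:
    name: frontend-service
  spec:
    type: NodePort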

production/kustomization.yaml

This file works the same way in the production folder as it does in the base folder: it defines which base file to reference and which patches to apply for the production environment. In this case it also includes two extra files: rollout-replica.yaml and service-loadbalancer.yaml.

  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  bases:
  - ../../base
  patchesStrategicMerge:
  - rollout-replica.yaml
  - hpa.yaml
  - service-loadbalancer.yaml

6. Review and apply the production patches

Review the output of kustomize build overlays/production. Once you've checked it, apply the patches to the cluster with kubectl apply -k overlays/production:

  kubectl apply -k overlays/production
  service/frontend-service created
  deployment.apps/frontend-deployment created
  horizontalpodautoscaler.autoscaling/frontend-deployment-hpa created

Kustomize best practices

  • Keep your custom resource definitions (CRDs) and their instances in separate packages; otherwise you may run into conflicts and failures during creation (see the layout sketch after this list). For example, many people keep cert-manager's CRDs and its resources in the same package, which can cause problems. Most of the time, re-applying the YAML files resolves the issue, but it is better to manage them separately.
  • Try to keep common values, such as the namespace and common metadata, in the base files.
  • Organize resources by kind, using the naming convention lowercase-hyphenated.yaml (for example, horizontal-pod-autoscaler.yaml). Place services in a service.yaml file.
  • Follow the standard directory structure, using bases/ for base files and patches/ or overlays/ for environment-specific files.
  • During development, or before pushing files to git, run the kustomize cfg fmt file_name command to format the files and set the correct indentation.
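
For the first point, one possible layout (illustrative only, with hypothetical directory names) that keeps cert-manager's CRDs apart from their instances:

  cert-manager/
  ├── crds/          # CustomResourceDefinition manifests, applied first
  │   └── kustomization.yaml
  └── resources/     # Issuer/Certificate instances, applied once the CRDs exist
      └── kustomization.yaml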

Origin juejin.im/post/7257785104713613371