Using Terraform dynamic blocks to conditionally load module configuration

Conditional dynamic blocks in Terraform


Background

The previous articles introduced what Terraform does and how to use Terraform workspaces to manage multiple deployment environments. This article continues with another piece of Terraform syntax, the dynamic block, and shows how it lets us use the same Terraform module while dynamically loading blocks in different workspaces.

Requirement

In the previous case, the company used Terraform to deploy Elastic Cloud across multiple environments (INT, Staging, Prod) for system and customer log storage and analysis. We created a reusable ec_deployment module to deploy our EC cluster; its code already defines configurations such as the ES hot and warm node topologies. Now suppose we want to enable dedicated master nodes and cold & frozen nodes only in the Prod environment, to improve ES performance and achieve long-term log retention. If we add those topology blocks directly to the ec_deployment module, the corresponding nodes would also be created in the lower environments, so the shared module could no longer be used as-is. What should we do?

Plan

  • Solution 1: Create separate ec_dev_deployment and ec_prod_deployment modules to distinguish the deployment files of each environment (redundant code, not recommended)

  • Solution 2: Load blocks dynamically with a dynamic block
    The Terraform documentation mentions that a dynamic block behaves much like a for expression, but does not go into detail. Beyond iteration, we can also implement conditional logic in the block, so that each workspace deploys a different topology according to its variables. We verified through experiments that this is feasible.
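The general pattern is to iterate over a one-element list when the condition holds and an empty list otherwise, so the nested block is rendered exactly zero or one times. A minimal sketch using the classic security-group example (the resource and variable names here are illustrative, not from the module discussed below):

```hcl
variable "allow_http" {
  type    = bool
  default = false
}

resource "aws_security_group" "example" {
  name = "example"

  dynamic "ingress" {
    # One-element list when enabled, empty list when disabled,
    # so the ingress block is generated at most once.
    for_each = var.allow_http ? [1] : []
    content {
      from_port   = 80
      to_port     = 80
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}
```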

Implementation

Let's first look at the ec_deployment module under the Terraform modules folder.

`-- modules
    |-- ec_deployment
    |   |-- README.md
    |   |-- data.tf
    |   |-- locals.tf
    |   |-- main.tf
    |   |-- outputs.tf
    |   |-- provider.tf
    |   `-- variables.tf

The relevant code implementing the Elastic Cloud topology in main.tf is as follows:

  elasticsearch {
    autoscale = "false"
    
    topology {
      id            = "hot_content"
      zone_count    = var.elasticsearch_hot_zones
      size          = var.elasticsearch_hot_size
      size_resource = "memory"
    }

    dynamic "topology" {
      for_each = var.elasticsearch_master_zones > 0 ? [1] : []
      content {
        id            = "master"
        zone_count    = var.elasticsearch_master_zones
        size          = var.elasticsearch_master_size
        size_resource = "memory"
      }
    }

    dynamic "topology" {
      for_each = var.elasticsearch_warm_zones > 0 ? [1] : []
      content {
        id            = "warm"
        zone_count    = var.elasticsearch_warm_zones
        size          = var.elasticsearch_warm_size
        size_resource = "memory"
      }
    }
  }

The difference from the previous article is the dynamic "topology" syntax blocks we have added here to implement the warm and master nodes. In them:

  • the for_each argument implements the conditional logic (the block is generated only when the node zone count is greater than zero)
  • var references an input variable (e.g. var.elasticsearch_warm_zones)
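For this to work, the module's variables.tf needs zone-count variables that default to zero, so the optional blocks stay disabled unless an environment opts in. The actual variables.tf is not shown in this article, but the declarations might look like this (descriptions and defaults are assumptions):

```hcl
variable "elasticsearch_warm_zones" {
  description = "Number of availability zones for warm nodes; 0 disables the warm tier."
  type        = number
  default     = 0
}

variable "elasticsearch_warm_size" {
  description = "Memory size per warm node, e.g. \"4g\"."
  type        = string
  default     = "4g"
}
```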

Then, in the envs directory, set the specific variable values of each Elastic Cloud Terraform configuration.

|   |-- envs
|   |   |-- production
|   |   |   |-- ec_deployment
|   |   |   |   `-- infra
|   |   |   |       |-- locals.tf
|   |   |   |       |-- main.tf
|   |   |   |       |-- outputs.tf
|   |   |   |       `-- provider.tf
|   |   |-- staging
|   |   |   |-- ec_deployment
|   |   |   |   `-- infra
|   |   |   |       |-- locals.tf
|   |   |   |       |-- main.tf
|   |   |   |       |-- outputs.tf
|   |   |   |       `-- provider.tf
|   |   |-- int
|   |   |   |-- ec_deployment
|   |   |   |   `-- infra
|   |   |   |       |-- locals.tf
|   |   |   |       |-- main.tf
|   |   |   |       |-- outputs.tf
|   |   |   |       `-- provider.tf

For example, main.tf under int:

module "ec_deployment" {
  source = "../../../../modules/ec_deployment/"

  deployment_name          = "aws-integration-infra"
  deployment_region        = "us-east-1"
  deployment_stack_version = "8.4.1"
  deployment_template_id   = "aws-storage-optimized-v3"

  elasticsearch_hot_size     = "15g"
  elasticsearch_hot_zones    = 2
  elasticsearch_warm_size    = "4g"
  elasticsearch_warm_zones   = 0
  elasticsearch_cold_size    = "2g"
  elasticsearch_cold_zones   = 0
  elasticsearch_frozen_size  = "4g"
  elasticsearch_frozen_zones = 0
  elasticsearch_master_size  = "1g"
  elasticsearch_master_zones = 0
}

And main.tf under production:

module "ec_deployment" {
  source = "../../../../modules/ec_deployment/"

  deployment_ready         = var.deployment_ready
  deployment_name          = local.deployment_name
  deployment_alias         = local.deployment_alias_name
  deployment_region        = local.deployment_region
  deployment_stack_version = local.deployment_stack_version
  deployment_template_id   = "aws-storage-optimized-v3"

  elasticsearch_hot_size     = "15g"
  elasticsearch_hot_zones    = 4
  elasticsearch_warm_size    = "4g"
  elasticsearch_warm_zones   = 2
  elasticsearch_cold_size    = "2g"
  elasticsearch_cold_zones   = 2
  elasticsearch_frozen_size  = "4g"
  elasticsearch_frozen_zones = 1
  elasticsearch_master_size  = "1g"
  elasticsearch_master_zones = 3
}

You can see that in Prod we specified nonzero zone counts for the warm, cold, frozen, and master nodes. After terraform plan, you will find that the resource state of INT has not changed (its zone counts are zero), while Prod will add the deployment of these nodes.


Effect

After terraform apply, you can see the differences between the Elastic Cloud clusters in the different environments.

The ES nodes in the INT environment remain the same as before, while the new nodes are created in the Prod environment.


I use a similar approach when creating ILM (index lifecycle management) policies for ES indices (some indices require rollover while others do not).

For example, in an ec_provisioning module you could have the following definition:

resource "elasticstack_elasticsearch_index_lifecycle" "main" {
  for_each = var.indices

  name = each.key

  dynamic "warm" {
    for_each = toset(each.value.ilm_warm_enabled ? ["enable"] : [])
    content {
      min_age = each.value.ilm_warm_min_age

      forcemerge {
        max_num_segments = 1
      }
    }
  }

  delete {
    min_age = each.value.ilm_delete_min_age

    delete {
      delete_searchable_snapshot = true
    }
  }
}

Under envs, define the specific template settings and ILM policy of each index:

    {
      log-access = {
        number_of_shards   = 2
        ilm_delete_min_age = "14d"
      },
      log-syslog = {
        ilm_delete_min_age = "360d"
        ilm_warm_min_age   = "90d"
        ilm_warm_enabled   = true
      }
    }

Since ilm_warm_enabled is set for log-syslog, the log data of this index will roll over from the hot nodes to the warm nodes after 90 days.
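Note that log-access omits the warm-related keys entirely. One way to allow that (assuming Terraform >= 1.3; the exact attribute names and defaults here are assumptions) is to declare var.indices with optional object attributes:

```hcl
variable "indices" {
  type = map(object({
    number_of_shards   = optional(number, 1)
    ilm_delete_min_age = optional(string, "14d")
    # Warm tier is off by default; each index opts in explicitly.
    ilm_warm_enabled   = optional(bool, false)
    ilm_warm_min_age   = optional(string, null)
  }))
}
```

With optional() defaults, Terraform fills in the missing attributes, so the dynamic "warm" block above can safely reference each.value.ilm_warm_enabled for every index.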

Conclusion

In conclusion, dynamic blocks are very useful when writing reusable modules, or when using the same module across multiple environments and you want to differentiate some configuration between them.
It's worth noting that HashiCorp recommends against overusing dynamic blocks, as they can make configuration difficult to read and maintain.

Reference links:

https://developer.hashicorp.com/terraform/language/expressions/dynamic-blocks
https://medium.com/geekculture/terraform-how-to-use-dynamic-blocks-when-conditionally-deploying-to-multiple-environments-57e63c0a2b56

Follow the official account of the original article: "Cloud Native SRE"


Origin blog.csdn.net/dongshi_89757/article/details/127887401