
error in iteration #35179

Closed · Nello-Angelo opened this issue on May 17, 2024 · 1 comment

Labels: bug, new (new issue not yet triaged), question

Terraform Version

1.8.3

Terraform Configuration Files

variable "cloudru_k8s_cluster" {
  type = map(object({
    control_plane_type                         = string
    control_plane_multizonal                   = bool
    control_plane_count                        = number
    control_plane_version                      = string
    control_plane_zones                        = list(string)
    network_configuration_services_subnet_cidr = string
    network_configuration_nodes_subnet_cidr    = string
    network_configuration_pods_subnet_cidr     = string
    network_configuration_kube_api_internet    = bool
    timeouts_create                            = string
    timeouts_read                              = string
    timeouts_update                            = string
    timeouts_delete                            = string
    nodepool = map(object({
      scale_policy_fixed_scale_count                = number
      nodes_network_configuration_nodes_subnet_cidr = string
      hardware_compute_disk_size                    = number
      hardware_compute_disk_type                    = string
      hardware_compute_flavor_id                    = string
      timeouts_create                               = string
      timeouts_read                                 = string
      timeouts_update                               = string
      timeouts_delete                               = string
    }))
  }))

  default = {
    "cluster-1" = {
      control_plane_type                         = "MASTER_TYPE_SMALL"
      control_plane_multizonal                   = false
      control_plane_count                        = 3
      control_plane_version                      = "v1.26.7"
      control_plane_zones                        = ["hbghefbvkhsdbfvkshdbfkv"]
      network_configuration_services_subnet_cidr = "10.0.0.0/24"
      network_configuration_nodes_subnet_cidr    = "10.0.1.0/24"
      network_configuration_pods_subnet_cidr     = "10.0.2.0/24"
      network_configuration_kube_api_internet    = true
      timeouts_create                            = "60m"
      timeouts_read                              = "15s"
      timeouts_update                            = "30m"
      timeouts_delete                            = "5m"
      nodepool = {
        "nodepool-1" = {
          nodes_network_configuration_nodes_subnet_cidr = "10.0.1.0/28"
          hardware_compute_disk_size                    = 10
          hardware_compute_disk_type                    = "SSD"
          hardware_compute_flavor_id                    = "small"
          scale_policy_fixed_scale_count                = 3
          timeouts_create                               = "60m"
          timeouts_read                                 = "15s"
          timeouts_update                               = "30m"
          timeouts_delete                               = "5m"
        }
      }
    }
  }
}

locals {
  cluster_config = {
    for key, value in var.cloudru_k8s_cluster : key => {
      control_plane_type                         = value.control_plane_type
      network_configuration_services_subnet_cidr = value.network_configuration_services_subnet_cidr
      network_configuration_nodes_subnet_cidr    = value.network_configuration_nodes_subnet_cidr
      network_configuration_pods_subnet_cidr     = value.network_configuration_pods_subnet_cidr
      control_plane_zones                        = value.control_plane_zones

      nodepool = {
        for np_key, np_value in value.nodepool : np_key => {
          nodes_network_configuration_nodes_subnet_cidr = np_value.nodes_network_configuration_nodes_subnet_cidr
          hardware_compute_flavor_id                    = np_value.hardware_compute_flavor_id
          scale_policy_fixed_scale_count                = np_value.scale_policy_fixed_scale_count
          timeouts_create                               = np_value.timeouts_create
          timeouts_read                                 = np_value.timeouts_read
          timeouts_update                               = np_value.timeouts_update
          timeouts_delete                               = np_value.timeouts_delete
        }
      }
    }
  }
}

resource "cloudru_k8s_cluster" "cluster" {
  for_each = { for key, value in var.cloudru_k8s_cluster : key => value }
  name     = each.key

  control_plane = {
    zones      = each.value.control_plane_zones
    multizonal = each.value.control_plane_multizonal
    count      = each.value.control_plane_count
    type       = each.value.control_plane_type
    version    = each.value.control_plane_version
  }

  network_configuration = {
    services_subnet_cidr = each.value.network_configuration_services_subnet_cidr
    nodes_subnet_cidr    = each.value.network_configuration_nodes_subnet_cidr
    pods_subnet_cidr     = each.value.network_configuration_pods_subnet_cidr
    kube_api_internet    = each.value.network_configuration_kube_api_internet
  }

  timeouts = {
    create = each.value.timeouts_create
    read   = each.value.timeouts_read
    update = each.value.timeouts_update
    delete = each.value.timeouts_delete
  }

  provider = cloudru
}

resource "cloudru_k8s_nodepool" "nodepool" {
  for_each   = { for key, value in local.cluster_config.nodepool : key => value }
  cluster_id = cloudru_k8s_cluster.cluster[each.key]
  name       = each.key

  scale_policy = {
    fixed_scale = {
      count = each.value.scale_policy_fixed_scale_count
    }
  }

  hardware_compute = {
    disk_size = each.value.hardware_compute_disk_size
    disk_type = each.value.hardware_compute_disk_type
    flavor_id = each.value.hardware_compute_flavor_id
  }

  nodes_network_configuration = {
    nodes_subnet_cidr = each.value.nodes_network_configuration_nodes_subnet_cidr
  }

  timeouts = {
    create = each.value.timeouts_create
    read   = each.value.timeouts_read
    update = each.value.timeouts_update
    delete = each.value.timeouts_delete
  }

  depends_on = [
    cloudru_k8s_cluster.cluster
  ]

  provider = cloudru
}

Debug Output

module.cloudflare.data.cloudflare_zone.dns_zone: Reading...
module.cloudflare.data.cloudflare_zone.dns_zone: Read complete after 3s [id=79cc175d473dde84c1623741ed637f6e]
module.cloudflare.cloudflare_record.record["gitlab"]: Refreshing state... [id=4c3d4e0c8725ae26617269f5737f2fd0]
module.cloudflare.cloudflare_record.record["minio-api"]: Refreshing state... [id=594acc47142d2ddb25df459300c201c2]
module.cloudflare.cloudflare_record.record["grafana"]: Refreshing state... [id=d14e265c76c41974482bb15a60875164]
module.cloudflare.cloudflare_record.record["keycloak"]: Refreshing state... [id=796fc58d240e9a33279ba7ddd026d4ac]
module.cloudflare.cloudflare_record.record["vmagent"]: Refreshing state... [id=3cc2adda3cc8e011a648c7d4f8c8e4c4]
module.cloudflare.cloudflare_record.record["cloud-infra.ru"]: Refreshing state... [id=29964e651a739bc398ac258b9b4fce3a]
module.cloudflare.cloudflare_record.record["sonarqube"]: Refreshing state... [id=cfd6a36749bd33e1fb202e259821a9af]
module.cloudflare.cloudflare_record.record["vminsert"]: Refreshing state... [id=bd49025aad398abb79dac9d66b6c71ed]
module.cloudflare.cloudflare_record.record["harbor"]: Refreshing state... [id=1da86e659a9ebd6c39ba044b8c097d11]
module.cloudflare.cloudflare_record.record["minio"]: Refreshing state... [id=1bd2e5c3f3dde2ad4597be888d19c37d]
module.cloudflare.cloudflare_record.record["stack"]: Refreshing state... [id=1d84817edda0367cb7fd04b7d4bb9bb1]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with
the following symbols:
  + create

Terraform planned the following actions, but then encountered a problem:

  # module.k8s.cloudru_k8s_cluster.cluster["cluster-1"] will be created
  + resource "cloudru_k8s_cluster" "cluster" {
      + control_plane         = {
          + count      = 3
          + multizonal = false
          + type       = "MASTER_TYPE_SMALL"
          + version    = "v1.26.7"
          + zones      = [
              + "hbghefbvkhsdbfvkshdbfkv",
            ]
        }
      + created_at            = (known after apply)
      + created_by            = (known after apply)
      + id                    = (known after apply)
      + logging_service       = (known after apply)
      + monitoring_service    = (known after apply)
      + name                  = "cluster-1"
      + network_configuration = {
          + control_plane_endpoints = (known after apply)
          + kube_api_internet       = true
          + nodes_subnet_cidr       = "10.0.1.0/24"
          + nodes_subnet_id         = (known after apply)
          + pods_subnet_cidr        = "10.0.2.0/24"
          + services_subnet_cidr    = "10.0.0.0/24"
        }
      + nodepools_info        = (known after apply)
      + project_id            = (known after apply)
      + state                 = (known after apply)
      + task_id               = (known after apply)
      + timeouts              = {
          + create = "60m"
          + delete = "5m"
          + read   = "15s"
          + update = "30m"
        }
      + updated_at            = (known after apply)
      + updated_by            = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
╷
│ Error: Unsupported attribute
│ 
│   on ../modules/cloud-ru-k8s-module/main.tf line 141, in resource "cloudru_k8s_nodepool" "nodepool":
│  141:   for_each   = { for key, value in local.cluster_config.nodepool : key => value }
│     ├────────────────
│     │ local.cluster_config is object with 1 attribute "cluster-1"
│ 
│ This object does not have an attribute named "nodepool".

Expected Behavior

Variable values are substituted, the node pool for_each is evaluated, and both the cluster and node pool resources are planned.

Actual Behavior

│ Error: Unsupported attribute
│ 
│   on ../modules/cloud-ru-k8s-module/main.tf line 141, in resource "cloudru_k8s_nodepool" "nodepool":
│  141:   for_each   = { for key, value in local.cluster_config.nodepool : key => value }
│     ├────────────────
│     │ local.cluster_config is object with 1 attribute "cluster-1"
│ 
│ This object does not have an attribute named "nodepool".

Steps to Reproduce

...

Additional Context

...

References

...

Nello-Angelo added the bug and new (new issue not yet triaged) labels on May 17, 2024
@apparentlymart (Member) commented:
Hi @Nello-Angelo!

This error message seems to be correct: local.cluster_config is an object with a single attribute named cluster-1, because it's derived from your map in var.cloudru_k8s_cluster which has just that one element by default. Since Terraform is behaving correctly -- this expression is incorrect as reported -- I'm going to close this issue.

I'm not familiar with the provider you are using here so I'm not 100% sure of what you were intending, but it seems like you need to construct a map that has one element for each node pool instead of one element for each cluster, and then use that map as the for_each.
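For illustration, one common way to build such a map is to flatten the nested structure into one element per cluster/node pool pair. This is only a sketch, untested against this provider; the local name and key format are illustrative, and it assumes the cluster resource exposes an id attribute (the plan output above suggests it does):

locals {
  # One element per cluster/node pool pair, keyed "cluster-key/nodepool-key".
  nodepools = merge([
    for cluster_key, cluster in var.cloudru_k8s_cluster : {
      for np_key, np in cluster.nodepool :
      "${cluster_key}/${np_key}" => merge(np, {
        cluster_key = cluster_key
        name        = np_key
      })
    }
  ]...)
}

resource "cloudru_k8s_nodepool" "nodepool" {
  for_each   = local.nodepools
  # Assumes the cluster resource has an "id" attribute; adjust for the provider's schema.
  cluster_id = cloudru_k8s_cluster.cluster[each.value.cluster_key].id
  name       = each.value.name

  # ...remaining arguments taken from each.value, as in the original configuration...
}

With that structure, each.value carries both the node pool settings and the key of the cluster it belongs to, and referencing the cluster resource directly creates an implicit dependency, so the explicit depends_on is no longer needed.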

If you have more questions about how to achieve that, please start a topic in the Terraform community forum and we can discuss it more there.

apparentlymart closed this as not planned on May 17, 2024
crw added the question label on May 20, 2024