
Inconsistent state after creating a GKE autopilot cluster #34812

Closed
rumsrami opened this issue Mar 10, 2024 · 2 comments


rumsrami commented Mar 10, 2024

Terraform Version

Terraform v1.7.4
on darwin_arm64
+ provider registry.terraform.io/hashicorp/google v5.19.0
+ provider registry.terraform.io/hashicorp/google-beta v5.19.0
+ provider registry.terraform.io/hashicorp/kubernetes v2.27.0

Terraform Configuration Files

resource "google_container_cluster" "orcas_shipyard_london_cluster" {
  name     = var.orcas_cluster_name
  location = var.cluster_region

  network    = google_compute_network.vpc_network.name
  subnetwork = google_compute_subnetwork.orcas_cluster_subnet.name

  ip_allocation_policy {
    cluster_ipv4_cidr_block  = "/20"
    services_ipv4_cidr_block = "/20"
  }

  deletion_protection = false
  networking_mode     = "VPC_NATIVE"
  datapath_provider   = "ADVANCED_DATAPATH"
  database_encryption {
    key_name = data.google_kms_crypto_key.cluster_db_boot_enc_key.id
    state    = "ENCRYPTED"
  }
  release_channel {
    channel = "STABLE"
  }
  logging_config {
    enable_components = ["SYSTEM_COMPONENTS", "WORKLOADS"]
  }
  security_posture_config {
    mode               = "BASIC"
    vulnerability_mode = "VULNERABILITY_BASIC"
  }
  binary_authorization {
    evaluation_mode = "DISABLED"
  }
  monitoring_config {
    enable_components = [
      "SYSTEM_COMPONENTS",
      "STORAGE",
      "POD",
      "DEPLOYMENT",
      "STATEFULSET",
      "DAEMONSET",
      "HPA"
    ]
    managed_prometheus {
      enabled = true
    }
    advanced_datapath_observability_config {
      enable_metrics = true
      enable_relay   = true
    }
  }
  vertical_pod_autoscaling {
    enabled = true
  }
  private_cluster_config {
    enable_private_nodes = true
    master_global_access_config {
      enabled = true
    }
  }
  #  Confidential nodes feature only supports N2D and C2D machine families.
  #  confidential_nodes {
  #    enabled = true
  #  }
  cluster_autoscaling {
    auto_provisioning_defaults {
      oauth_scopes = [
        "https://www.googleapis.com/auth/cloud-platform"
      ]
      service_account   = data.google_service_account.orcas_shipyard_cluster_service_account.email
      boot_disk_kms_key = data.google_kms_crypto_key.cluster_db_boot_enc_key.id
      management {
        auto_upgrade = true
        auto_repair  = true
      }
      shielded_instance_config {          # <==== Seems the issue is here
        enable_secure_boot          = true
        enable_integrity_monitoring = true
      }
    }
  }

  # other settings...
  enable_autopilot = true
  addons_config {
    horizontal_pod_autoscaling {
      disabled = false
    }
    gcs_fuse_csi_driver_config {
      enabled = true
    }
    gce_persistent_disk_csi_driver_config {
      enabled = true
    }
    http_load_balancing {
      disabled = false
    }
  }
  depends_on = [
    google_project_iam_member.account_iam_binding,
    google_project_iam_member.k8s_cluster_compute_robot_sa_1,
    google_project_iam_member.k8s_cluster_robot_sa_1,
    google_project_iam_member.k8s_cluster_robot_sa_2,
    data.google_kms_crypto_key.cluster_db_boot_enc_key
  ]
}

Debug Output

https://gist.github.com/rumsrami/02205a157ffd0aeac152b586fe8347b3

Expected Behavior

The first time I run terraform plan / apply, everything works fine with no issues.

When I run terraform plan / apply again, without making any changes, the result of terraform plan should be:

Plan: 0 to add, 0 to change, 0 to destroy.

Actual Behavior

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place

Terraform will perform the following actions:

  # google_container_cluster.orcas_shipyard_london_cluster will be updated in-place
  ~ resource "google_container_cluster" "orcas_shipyard_london_cluster" {
        id                          = "projects/orcas-shipyard/locations/europe-west2/clusters/orcas-shipyard-cluster"
        name                        = "orcas-shipyard-cluster"
        # (27 unchanged attributes hidden)

      ~ cluster_autoscaling {
            # (2 unchanged attributes hidden)

          ~ auto_provisioning_defaults {
                # (5 unchanged attributes hidden)

              + shielded_instance_config {
                  + enable_integrity_monitoring = true
                  + enable_secure_boot          = true
                }

                # (2 unchanged blocks hidden)
            }

            # (4 unchanged blocks hidden)
        }

        # (27 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

Steps to Reproduce

  1. terraform init
  2. terraform plan

Additional Context

The same happens using both the open-source Terraform CLI and Terraform Cloud.

This is a fresh terraform apply against a new Google Cloud account and a new GKE cluster. The cluster creates fine the first time, with no errors. When I then run terraform plan / terraform apply, it produces a diff that should not be there.

When I apply the change, I get the following error:

│ Error: googleapi: Error 400: Overriding Autopilot autoscaling settings is not allowed.
│ Details:
│ [
│   {
│     "@type": "type.googleapis.com/google.rpc.RequestInfo",
│     "requestId": "0xe7f4ba2a6af4bf49"
│   }
│ ]
│ , badRequest
│
│   with google_container_cluster.orcas_shipyard_london_cluster,
│   on clusters.tf line 11, in resource "google_container_cluster" "orcas_shipyard_london_cluster":
│   11: resource "google_container_cluster" "orcas_shipyard_london_cluster" {
│
╵
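
As a possible workaround until the provider handles this, Terraform's lifecycle meta-argument can tell the plan to ignore the block the GKE API strips from Autopilot clusters. This is a sketch (not verified against this configuration); the attribute path assumes the resource layout shown above:

```hcl
resource "google_container_cluster" "orcas_shipyard_london_cluster" {
  # ... existing configuration from above ...

  lifecycle {
    # Ignore drift on the shielded_instance_config block, which the
    # GKE API appears to drop for Autopilot clusters, so that repeated
    # plans stay clean and applies do not hit the 400 error.
    ignore_changes = [
      cluster_autoscaling[0].auto_provisioning_defaults[0].shielded_instance_config,
    ]
  }
}
```

Alternatively, simply removing the shielded_instance_config block may avoid the diff, since Autopilot manages node shielding settings itself.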

References

No response

@rumsrami added labels: bug, new (new issue not yet triaged) on Mar 10, 2024
@jbardin
Copy link
Member

jbardin commented Mar 11, 2024

Hello,

This appears to be an issue or question with the Google provider, not with Terraform itself. You can see existing issues and file a new one in their repository here: https://github.com/hashicorp/terraform-provider-google/issues. If you have questions about Terraform or the Google provider, it's better to use the community forum, where there are more people ready to help. The GitHub issues here are monitored by only a few core maintainers.

Thanks!

@jbardin closed this as not planned on Mar 11, 2024

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 11, 2024