Provider produced inconsistent final plan #236

Open
ralvarezmar opened this issue Aug 8, 2023 · 10 comments
@ralvarezmar

Terraform CLI and Provider Versions

Terraform version 1.2.3, and the version of hashicorp/local is 2.4.0.

Terraform Configuration

It doesn't look like a configuration error.

Expected Behavior

This error shouldn't appear; the apply should complete without problems.

Actual Behavior

At first, terraform plan works correctly. I then run terraform apply and it fails, but if you run it again it works without problems.

Steps to Reproduce

  1. terraform plan
  2. terraform apply

How much impact is this issue causing?

Medium

Logs

No response

Additional Information

terraform apply -auto-approve tfplan

│ Warning: "use_microsoft_graph": [DEPRECATED] This field now defaults to true and will be removed in v1.3 of Terraform Core due to the deprecation of ADAL by Microsoft.



azurerm_kubernetes_cluster.aks: Modifying... [id=/subscriptions//resourceGroups/rg-condorcloud-pre/providers/Microsoft.ContainerService/managedClusters/aks-condor-pre]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 10s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 20s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 30s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 40s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 50s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 1m0s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 1m10s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 1m20s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 1m30s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 1m40s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 1m50s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 2m0s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 2m10s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 2m20s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 2m30s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 2m40s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 2m50s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 3m0s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 3m10s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 3m20s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 3m30s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 3m40s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 3m50s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 4m0s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 4m10s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 4m20s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 4m30s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 4m40s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 4m50s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 5m0s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 5m10s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 5m20s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 5m30s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 5m40s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 5m50s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 6m0s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 6m10s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 6m20s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 6m30s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 6m40s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 6m50s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 7m0s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 7m10s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 7m20s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 7m30s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 7m40s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 7m50s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 8m0s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 8m10s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 8m20s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 8m30s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 8m40s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 8m50s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 9m0s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 9m10s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 9m20s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 9m30s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 9m40s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 9m50s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 10m0s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 10m10s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 10m20s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 10m30s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 10m40s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 10m50s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 11m0s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 11m10s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 11m20s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 11m30s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 11m40s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 11m50s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 12m0s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 12m10s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 12m20s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 12m30s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 12m40s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 12m50s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 13m0s elapsed]
azurerm_kubernetes_cluster.aks: Still modifying... [id=/subscriptions/ae3abbb9-1ae5-4775-954c-...Service/managedClusters/aks-condor-pre, 13m10s elapsed]
azurerm_kubernetes_cluster.aks: Modifications complete after 13m20s [id=/subscriptions//resourceGroups/rg-condorcloud-pre/providers/Microsoft.ContainerService/managedClusters/aks-condor-pre]

│ Warning: Argument is deprecated

│ with azurerm_kubernetes_cluster.aks,
│ on aks.tf line 9, in resource "azurerm_kubernetes_cluster" "aks":
│ 9: api_server_authorized_ip_ranges = var.api_server_authorized_ip_ranges

│ This property has been renamed to authorized_ip_ranges within the
│ api_server_access_profile block and will be removed in v4.0 of the
│ provider


│ Error: Provider produced inconsistent final plan

│ When expanding the plan for local_file.aks_kube_config to include new
│ values learned so far during apply, provider
│ "registry.terraform.io/hashicorp/local" produced an invalid new value for
│ .content: inconsistent values for sensitive attribute.

│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.

Code of Conduct

  • I agree to follow this project's Code of Conduct
@ralvarezmar ralvarezmar added the bug label Aug 8, 2023
@austinvalle
Member

Hi there @ralvarezmar 👋🏻 , thanks for reporting the issue and sorry you're running into trouble here.

To help us narrow down your issue, can you provide a snippet of your Terraform configuration? Specifically, the local_file resource and any resources/data sources that are supplied as input to that local_file (likely via the content argument).

@ralvarezmar
Author

ralvarezmar commented Aug 9, 2023

The local file is a kubeconfig file:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://aks-condor-pre.mre-aks-condorcloud-pre.privatelink.northeurope.azmk8s.io:443
  name: aks-condor-pre
contexts:
- context:
    cluster: aks-condor-pre
    namespace: siniestros
    user: clusterAdmin_rg-condorcloud-pre_aks-condor-pre
  name: aks-condor-pre-admin
current-context: aks-condor-pre-admin
kind: Config
preferences: {}
users:
- name: clusterAdmin_rg-condorcloud-pre_aks-condor-pre
  user:
    client-certificate-data: DATA+OMITTED
    client-key-data: DATA+OMITTED
    token: REDACTED

And the Terraform configuration is:

resource "local_file" "aks_kube_config" {

  content         = azurerm_kubernetes_cluster.aks.kube_admin_config_raw
  filename        = var.kube_config_file
  file_permission = "0600"

  depends_on = [azurerm_kubernetes_cluster.aks]
}
variable "kube_config_file" {
  default = "./kube.conf"
}
resource "azurerm_kubernetes_cluster" "aks" {
  name                            = var.aks_name
  location                        = var.location
  resource_group_name             = var.resource_group_name
  dns_prefix                      = (var.aks_private_cluster ? null : var.dns_prefix)
  dns_prefix_private_cluster      = (var.aks_private_cluster ? var.dns_prefix : null)
  kubernetes_version              = var.kubernetes_version
  api_server_authorized_ip_ranges = var.api_server_authorized_ip_ranges
  role_based_access_control_enabled = true
  http_application_routing_enabled = var.http_application_routing_enabled 
  private_dns_zone_id             = (var.aks_private_cluster ? data.azurerm_private_dns_zone.aks.id : null)
  private_cluster_enabled    = (var.aks_private_cluster ? true : false)
  sku_tier                        = var.aks_sku_tier

  default_node_pool {
    name                  = var.node_pool_name
    enable_node_public_ip = false
    vm_size               = var.node_pool_vm_size
    max_pods              = var.node_pool_max_pods
    os_disk_size_gb       = var.node_pool_os_disk_size_gb
    vnet_subnet_id        = data.terraform_remote_state.vnet_state.outputs.vnet_aks_subnet_id

    zones    = var.aks_availability_zones
    enable_auto_scaling   = var.auto_scaling_enable
    min_count             = var.auto_scaling_enable == true ? var.auto_scaling_min_count : null
    max_count             = var.auto_scaling_enable == true ? var.auto_scaling_max_count : null
    orchestrator_version = var.orchestrator_version
    #disk_encryption_id   = data.terraform_remote_state.analytics_state.outputs.disk_encryption_id
  }

  dynamic "linux_profile" {
    for_each = try(tls_private_key.ssh.public_key_openssh, null) != null ? [1] : []
    content {
      admin_username = var.linux_admin_username
      ssh_key {
        key_data = tls_private_key.ssh.public_key_openssh
      }
    }
  }
}

@austinvalle
Member

austinvalle commented Aug 9, 2023

Thanks for supplying that info 👍🏻 ,

I don't see it in the docs for azurerm_kubernetes_cluster, but I looked at their provider code and the kube_admin_config_raw attribute is actually a sensitive attribute. Can you first try switching from the local_file resource to the local_sensitive_file resource?

(there may be an additional problem occurring in the azurerm provider resource, but you'll want to use local_sensitive_file regardless)
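
For reference, here is a minimal sketch of that suggested swap, reusing the argument values from the configuration shared above (the resource label and variable name are simply carried over from it):

# Sketch only: same arguments as the original local_file resource, but
# local_sensitive_file marks the file content as sensitive, so it is
# redacted in plan output and handled as sensitive in state.
resource "local_sensitive_file" "aks_kube_config" {
  content         = azurerm_kubernetes_cluster.aks.kube_admin_config_raw
  filename        = var.kube_config_file
  file_permission = "0600"
}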

@ralvarezmar
Author

ralvarezmar commented Aug 10, 2023

OK, I will try changing this resource.

So is the problem in the azurerm provider and not in the hashicorp/local provider?

@mattduguid

mattduguid commented Jan 15, 2024

I believe we just hit this issue too. The pipeline had been running fine each day; we upgraded the Kubernetes version from 1.26.3 to 1.26.6 and that worked, but when we wrote out the kube.config using the local_sensitive_file resource we received the following error:

Error: Provider produced inconsistent final plan
When expanding the plan for module.k8s-cluster.local_sensitive_file.aks_connection_details to include new values learned so far during apply, provider "registry.terraform.io/hashicorp/local" produced an invalid new value for .content: inconsistent values for sensitive attribute.
This is a bug in the provider, which should be reported in the provider's own issue tracker.

We added some debugging to the pipeline using null_resource, which didn't end up displaying anything due to sensitive values, but on the next run the error disappeared and the resource applied correctly.
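
For context, a hypothetical sketch of that kind of null_resource debug step (the resource and attribute names here are assumed for illustration, not taken from our pipeline); because the kubeconfig value is sensitive, Terraform also redacts anything derived from it in plan output, which is why nothing useful was displayed:

# Hypothetical debug aid: record a hash of the kubeconfig in triggers so that
# a change to it shows up as a planned replacement of this resource. Terraform
# still treats the derived value as sensitive and redacts it in plan output.
resource "null_resource" "debug_kube_config" {
  triggers = {
    kube_config_sha = sha256(azurerm_kubernetes_cluster.aks.kube_admin_config_raw)
  }
}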

Our versions, in case it helps shed light on the issue:

...
Terraform v1.6.1
on linux_amd64
...
- Installing hashicorp/azurerm v3.87.0...
- Installed hashicorp/azurerm v3.87.0 (signed by HashiCorp)
- Installing hashicorp/kubernetes v2.25.2...
- Installed hashicorp/kubernetes v2.25.2 (signed by HashiCorp)
- Installing hashicorp/null v3.2.2...
- Installed hashicorp/null v3.2.2 (signed by HashiCorp)
- Installing hashicorp/local v2.4.1...
- Installed hashicorp/local v2.4.1 (signed by HashiCorp)
...

We have a few more version upgrades to perform; will advise if we see it again.

@mattduguid

mattduguid commented Jan 17, 2024

We have just stepped up to the next version, and I can confirm we see the bug on the pipeline run that changes the Kubernetes version; subsequent runs come right.

@mattduguid

mattduguid commented Jan 23, 2024

Confirming this happened on every single Kubernetes version upgrade: 1.26.3 -> 1.26.6 -> 1.26.10 -> 1.27.3 -> 1.27.7 -> 1.28.3.

Ran with debug logging (TF_LOG=DEBUG) this time and got the following, which helped locate it:

...
2024-01-22T21:49:03.7443448Z 2024-01-22T21:49:03.742Z [WARN]  Provider "registry.terraform.io/hashicorp/local" produced an unexpected new value for module.k8s-cluster.local_sensitive_file.aks_connection_details during refresh.
2024-01-22T21:49:03.7444087Z       - Root object was present, but now absent
...

It looks like the resource is removed and a new one is created and padded out in memory with nulls, ready for values to arrive after apply:

...
Marking Computed attributes with null configuration values as unknown (known after apply) in the plan to prevent potential Terraform errors: 
...

However, when it makes an HTTP PUT request to write the state file to storage, it contains an empty resource -> "instances": [],

...
2024-01-22T22:04:47.9532982Z       "module": "module.k8s-cluster",
2024-01-22T22:04:47.9533185Z       "mode": "managed",
2024-01-22T22:04:47.9533387Z       "type": "local_sensitive_file",
2024-01-22T22:04:47.9533606Z       "name": "aks_connection_details",
2024-01-22T22:04:47.9533860Z       "provider": "provider[\"registry.terraform.io/hashicorp/local\"]",
2024-01-22T22:04:47.9534256Z       "instances": []
2024-01-22T22:04:47.9534443Z     },
...

And this is what is written to the state file:

...
 {
      "module": "module.k8s-cluster",
      "mode": "managed",
      "type": "local_sensitive_file",
      "name": "aks_connection_details",
      "provider": "provider[\"registry.terraform.io/hashicorp/local\"]",
      "instances": []
    },
...

On subsequent runs it's populated and works fine from there until the next upgrade.

Is a ".SetID" in the "Create" required around here -> https://github.com/hashicorp/terraform-provider-local/blob/5c130fb2ed401765158856c407ee0684e308972e/internal/provider/resource_local_sensitive_file.go#L214C2-L214C19

e.g.,

...
checksums := genFileChecksums(content) // existing
resourceId := checksums.sha1Hex        // new
resp.State.SetId(resourceId)           // new
...

@mattgagliardi

I'm not able to offer much information, as I had to quickly revert my TF version to get the issue (kinda) resolved and get some work done, but I experienced the same problem. I'd been using TF v1.3.8 with zero problems; I ran an upgrade to v1.7.1 this morning and suddenly I couldn't run deploys (apply) using the same code as before. The error message was "Error: Provider produced inconsistent final plan". The problematic file was an Ansible inventory, which changes between the initial plan and the actual execution of said plan (DHCP and whatnot). A second apply resolves the problem, but why has the behavior changed? Reverting to v1.3.8 got things back to their former working/expected behavior.

@mattgagliardi

mattgagliardi commented Jan 29, 2024

Any chance this might be related to hashicorp/terraform#33234 (in v1.6.0)? Long story, but I can tell you that I've run the same code on a few different systems since my original comment: v1.6.6 didn't work, but v1.4.7, v1.5.5, and v1.5.7 worked as expected.

EDIT - installed v1.6.0 and tried again... it blew up in my face. To reiterate, I'm OK up to v1.5.7; things start failing in v1.6.0 (and beyond).

@apparentlymart
Member

It is possible (but not yet proven) that an error like this could be caused by a problem in another provider. I'm sharing the following in case it helps with debugging on this issue, but I don't have enough information here to debug this directly myself.

The error message described in this issue is one of Terraform's "consistency checks" to try to contain the impact of a buggy provider.

Terraform first creates a plan during the planning phase using a partial configuration (because of references to other objects that haven't been fully created/updated yet), and then plans again during the apply phase with full information, thereby creating the "final plan" that the error message mentions.

Terraform then checks to make sure that the final plan is consistent with the initial plan. "Consistent" is a shorthand for a number of different rules, but the most important rule (and the one whose violation causes this most often) is that the provider must not return a concrete known value in the initial plan and then a different known value in the final plan. Providers that cannot reliably predict the values in the final plan are supposed to use unknown value placeholders during planning to represent that.

However, some providers are built with a legacy Terraform SDK that was originally made for much older versions of Terraform, and those earlier Terraform versions had fewer of these safety checks. To avoid breaking those older providers, Terraform chooses to tolerate some consistency problems that tend to be caused by bugs and quirks of the old SDK, rather than true provider misbehavior.

Unfortunately problems can arise when a provider using the legacy SDK is used in conjunction with one that uses the modern plugin framework: if a result from the old-style provider is used as an argument to a new-style provider then Terraform will tolerate an inconsistent result from the first provider, but then the inconsistent result will propagate to the second provider, making its result appear inconsistent. In that case, Terraform misreports that the problem was with the new-style provider instead of the old-style provider, because the upstream problem wasn't contained sufficiently.

The hashicorp/azurerm provider uses the legacy SDK, while this hashicorp/local provider uses the modern plugin framework. Therefore if there were a consistency error in the Azure provider, and the inconsistent value were assigned to an argument of a resource type in this provider, then Terraform would misreport the problem as being with the local provider using an error like what this issue is discussing.


This is a situation that we can typically diagnose when full internal logs are provided, because Terraform Core emits internal warnings when it "tolerates" an inconsistency with a legacy-SDK provider. The warning equivalent of an error like we see in this issue would look something like this:

[WARN] Provider registry.terraform.io/hashicorp/azurerm produced an unexpected new value for azurerm_kubernetes_cluster.aks, but we are tolerating it because it is using the legacy plugin SDK.

The following problems may be the cause of any confusing errors from downstream operations:

[...]

The message would then include a list of the detected consistency problems, which we could then use to report a bug upstream (in the other provider) if appropriate.

If you experience this issue and are able to reproduce it, you can help with getting this resolved by running your reproduction with the environment variable TF_LOG=trace set, and then searching in the result for a log message like the one I described above. (I normally do this by searching for the word "tolerating", but beware that there may be multiple "tolerating" messages and not all of them are real problems in practice -- that's why this is just a warning and not an error -- so it's best to review all of them and look for one that mentions a resource whose attributes are being, directly or indirectly, used in the configuration of the local_file resource that Terraform blamed in the error message.)

If you share messages that fit this description in subsequent comments -- along with the relevant parts of the configuration they relate to -- then I'd be happy to help with diagnosing whether they might be a cause of this error, and figuring out what bug we might report upstream if so.

If there's no such message then that would also be useful information to know, because it would suggest that the bug is likely in the hashicorp/local provider after all, and isn't an upstream bug.

Thanks!
