Error: Provider produced inconsistent final plan #1040

Open

MauriceArikoglu opened this issue Oct 2, 2023 · 5 comments

MauriceArikoglu commented Oct 2, 2023

Provider produced inconsistent final plan when running terraform apply against an existing tfstate without refreshing first.

Affected Resource(s)

  • digitalocean_kubernetes_cluster

Expected Behavior

terraform apply applies all required changes.

Actual Behavior

Error: Provider produced inconsistent final plan
 
When expanding the plan for module.kubernetes.digitalocean_kubernetes_cluster.cluster to
include new values learned so far during apply, provider
"registry.terraform.io/digitalocean/digitalocean" produced an invalid new value for
.node_pool[0].tags: was cty.SetValEmpty(cty.String), but now null.
 
This is a bug in the provider, which should be reported in the provider's own issue tracker.
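
One untested idea (I have not verified this, and the values below are placeholders, not my real config): declaring tags explicitly on the node pool, so the planned value is pinned to a known empty set instead of being left for the provider to fill in.

node_pool {
    name       = "default"      # placeholder values for illustration only
    size       = "s-2vcpu-4gb"
    node_count = 1
    tags       = []             # explicit empty set instead of leaving the attribute unset
}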

Steps to Reproduce

danaelhe (Member) commented Oct 2, 2023

Thanks for the write-up! Could you share which version of the Terraform provider you're using? To best reproduce the issue, could you also share the config file you're trying to apply?

MauriceArikoglu (Author) commented

Hi @danaelhe

We are using the provider digitalocean/digitalocean at version 2.30

I will post the configuration used in a follow-up comment; mind you, it is working now. I specified dependencies between the various resources explicitly with depends_on and also adjusted the dependency graph manually, since many resources are involved and the implicit graph seemed to cause this and other issues.

What still happens from time to time is that a destroy isn't able to destroy all resources. Sometimes either the project or the project's main VPC fails to delete. Re-running destroy always succeeds in destroying the remaining resource(s), though.

MauriceArikoglu (Author) commented

terraform {
    required_providers {
        digitalocean = {
            source  = "digitalocean/digitalocean"
        }
    }
}

data "digitalocean_kubernetes_versions" "available" {
    version_prefix = var.kubernetes_version
}

resource "digitalocean_kubernetes_cluster" "cluster" {
    name   = var.cluster_name
    region = var.cluster_region
    vpc_uuid = var.vpc_id
    auto_upgrade = true
    version = data.digitalocean_kubernetes_versions.available.latest_version

    maintenance_policy {
        day         = "sunday"
        start_time  = "03:00"
    }

    node_pool {
        name       = var.cluster_defaultnode_name
        size       = var.cluster_defaultnode_size
        node_count = 1
        labels = {
            project = var.project_id
        }
    }
}

resource "digitalocean_project_resources" "project" {
    project = var.project_id
    resources = [
        digitalocean_kubernetes_cluster.cluster.urn
    ]

    depends_on = [digitalocean_kubernetes_cluster.cluster]
}

danaelhe (Member) commented Oct 6, 2023

Hmmm... that's interesting. Thanks for providing additional context. I agree with your hunch that it could be something to do with the Terraform dependency graph, since adding explicit dependencies seems to mitigate the errored behavior. I'm going to do some research and see if there's any opportunity to optimize how our provider directs Terraform's dependency graph.

I haven't been able to get the destroy error yet, but I'll keep trying at it 🤞

MauriceArikoglu (Author) commented Oct 7, 2023

@danaelhe sorry I am not of more help right now. I did not record the full config when the error happened, since I was busy trial-and-erroring my way to success. Once I have more time on my hands I could try to recreate it, but I am not sure when that will be.

To give a little more context:
The given config file is set up as a Terraform module which I reference from my main Terraform file in the root. From the documentation I learned that accessing Kubernetes credentials for setting up a kubernetes provider is better done via a data source than by reading the resource outputs from the module itself, so in my main Terraform file I also have a digitalocean_kubernetes_cluster data source.

I've also had dependency issues when destroying the infra with that setup, as the cluster was destroyed and Terraform then tried to read the data source after the cluster no longer existed. Funnily enough, I had a dependency on the cluster, but it still errored. I was able to resolve this too with the explicit dependency graph throughout the entire infra.
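
Roughly, the root wiring looks like this; a sketch with placeholder names and values, not my exact config:

module "kubernetes" {
    source = "./modules/kubernetes"    # placeholder path

    kubernetes_version       = "1.28."                     # placeholder values
    cluster_name             = "preview-cluster"
    cluster_region           = "fra1"
    vpc_id                   = digitalocean_vpc.main.id    # illustrative resource names
    project_id               = digitalocean_project.main.id
    cluster_defaultnode_name = "default"
    cluster_defaultnode_size = "s-2vcpu-4gb"
}

data "digitalocean_kubernetes_cluster" "cluster" {
    name = "preview-cluster"

    # explicit dependency so the data source is read (and torn down) in the
    # right order relative to the cluster resource inside the module
    depends_on = [module.kubernetes]
}

provider "kubernetes" {
    host                   = data.digitalocean_kubernetes_cluster.cluster.endpoint
    token                  = data.digitalocean_kubernetes_cluster.cluster.kube_config[0].token
    cluster_ca_certificate = base64decode(
        data.digitalocean_kubernetes_cluster.cluster.kube_config[0].cluster_ca_certificate
    )
}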

Again, what I wasn't able to resolve is the project destroy issue mentioned above. But that also only happens in about 50% of cases. (We are creating and destroying infra regularly; think of it as preview deployments for pull requests.)


Edit: I would be willing to take the time and go through the whole config in a call if that would be helpful. I sadly cannot provide access to the repository.
