Upgrading Global Database Clusters Yields Inconsistent Plan #425
Comments
…es to global clusters. Fix terraform-aws-modules#425. A much more thorough explanation is provided in examples/global-cluster/README.md.
…upgrades to global clusters. Fix terraform-aws-modules#425. A much more thorough explanation is provided in examples/global-cluster/README.md. Update documentation via pre-commit for global_upgradable.
This issue has been automatically marked as stale because it has been open 30 days.

This issue isn't stale aside from its resolution awaiting review.

This issue has been automatically marked as stale because it has been open 30 days.

Same as before. The resolution is awaiting review.

This issue has been automatically marked as stale because it has been open 30 days.

Still alive and awaiting feedback.
Description
There is an issue with global database clusters that is documented in the provider but not yet accounted for in the module. It only appears when both using global clusters and when upgrading to a new engine version. And even then... not always; it is inconsistent.
Given an implementation:
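(The original snippet is not reproduced here; the following is a minimal sketch of the kind of configuration involved. All names and values are hypothetical.)

```hcl
# Hypothetical global cluster with a primary member created via the module.
resource "aws_rds_global_cluster" "this" {
  global_cluster_identifier = "example-global"
  engine                    = "aurora-postgresql"
  engine_version            = var.engine_version
}

module "aurora_primary" {
  source  = "terraform-aws-modules/rds-aurora/aws"
  version = "~> 7.1.0"

  name                      = "example-primary"
  engine                    = aws_rds_global_cluster.this.engine
  engine_version            = var.engine_version
  global_cluster_identifier = aws_rds_global_cluster.this.id
  # (VPC, subnets, instance settings, etc. omitted)
}
```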
If the variable `engine_version` is changed to upgrade the cluster, we usually get the error given further down. Set aside that this isn't the latest module version; I have worked with that as well, and will. The issue, I believe, is here: `engine_version` needs to be ignored in the case of global clusters. However, since dynamic lifecycle blocks are not supported, the change I'm proposing is to have both `aws_rds_cluster.this` and `aws_rds_cluster.this_ignore_engine_version`. Then, in the locations that reference this resource, add a ternary to select the correct instance of the resource. What are your thoughts, @antonbabenko? Maybe there is a simpler workaround I'm overlooking.
Versions
Module: `~> 7.1.0` and `9.0.0`

Terraform: `1.5.7` and `1.6.6`
Expected behavior
I expect this error to happen, given the note in the provider documentation. With the proposed change, I would expect that there isn't an inconsistent plan and all upgrades go well.
Actual behavior
The error is given as documented in the provider.
This is because, when upgrading a global cluster, AWS upgrades the member clusters itself. When Terraform then attempts to upgrade a member, depending on the order in which this happens, the member's actual engine version no longer matches what is recorded in the state. Ignoring changes to `engine_version` in the member cluster would avoid the issue.