Describe the Bug
Hi,
I'm not sure if this is a bug or a missing feature, but it looks like the module is not working properly together with Terraform workspaces.
The issue seems to be that when a workspace is switched, this module tries to create the S3 bucket and DynamoDB table again, but these two resources already exist, so the apply fails.
Putting the workspace name into the bucket and DynamoDB table names causes issues with the generated backend.tf, because the names would change every time the workspace changes (see the sketch below).
Setting enabled = false instead would cause the first Terraform workspace, which created the S3 bucket, to destroy it again. That also doesn't sound good :D
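To illustrate the naming problem: Terraform backend blocks cannot contain variables or expressions, so the workspace name cannot be interpolated into the generated backend configuration. A minimal sketch (the bucket, table, and region names here are made up):

```hcl
terraform {
  backend "s3" {
    # Backend blocks are parsed before any evaluation, so expressions such as
    # "my-tfstate-${terraform.workspace}" are not allowed here -- the names
    # must be static, which clashes with per-workspace bucket/table names.
    bucket         = "my-tfstate"      # hypothetical bucket name
    key            = "terraform.tfstate"
    region         = "eu-central-1"
    dynamodb_table = "my-tfstate-lock" # hypothetical lock table name
  }
}
```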
Expected Behavior
The module should check whether the expected bucket already exists and, if so, skip the creation.
Maybe something like the enabled flag, but named skip_creation_if_resources_exists or similar.
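Roughly how that could look from the caller's side (the flag below is the one proposed in this issue, not an existing module input):

```hcl
module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "0.38.1"

  # Hypothetical input proposed above -- does not exist in the module today:
  skip_creation_if_resources_exists = true
}
```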
Steps to Reproduce
Steps to reproduce the behavior (see the command sketch below):
1. Create two workspaces.
2. Apply the first workspace and see the S3 bucket and DynamoDB table created.
3. Switch to the second workspace.
4. Apply the second workspace and see errors during S3 bucket and DynamoDB table creation.
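A rough CLI transcript of those steps (workspace names are arbitrary; the exact AWS error wording may differ):

```sh
terraform workspace new first
terraform apply   # creates the S3 bucket and DynamoDB table

terraform workspace new second
terraform apply   # fails: the bucket and table already exist
                  # (e.g. BucketAlreadyOwnedByYou / ResourceInUseException)
```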
Additional Context
Maybe this is simply not possible, and for multi-workspace usage we need a separate Terraform project that takes care of these backend resources; in that case, an example would be nice.
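For reference, such a split might look like this (the directory names are just an assumption):

```
tfstate-backend/   # bootstrap root module, applied once; creates the bucket + lock table
  main.tf
live/              # the actual project, used with multiple workspaces
  backend.tf       # static S3 backend config pointing at the bootstrap resources
  main.tf
```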
Thanks
Instead of setting enabled to false, since you're working with workspaces, you can have the default workspace house the state for the backend and, after bootstrapping, work only with the other workspaces. Then set enabled as follows:
module"terraform_state_backend" {
enabled =terraform.workspace=="default"
source ="cloudposse/tfstate-backend/aws"
version ="0.38.1"# ...
}
This way, the module will set the count argument of every resource block it uses to 0, turning them into no-ops. Additionally, if you want the default workspace to provision nothing of your own beyond the S3 + DynamoDB backend, you can separate the rest of your configuration into a module and use count:
module"rest_of_my_config" {
# Avoid creating anything within if we're in the default workspace.
count =terraform.workspace=="default"?0:1# ...
}
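One follow-up consideration: once a module has count, it becomes a list, so references to it must be indexed. A minimal sketch, assuming Terraform >= 0.15 for one() (some_output is a hypothetical output name):

```hcl
output "example" {
  # The splat plus one() yields null when count is 0, avoiding an index error
  # in the default workspace:
  value = one(module.rest_of_my_config[*].some_output)
}
```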
The module creates an S3 backend to be used by Terraform. Usually this is done once per company/organization, so I do not understand why you are trying to create one per workspace.
Whether or not you are using workspaces, it is your responsibility to ensure you are not creating duplicate resources by using different inputs for different instantiations of this module. This module provides several inputs you can use to vary the names of the created resources.
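For example, varying the names might look like this (a sketch assuming the standard Cloud Posse label inputs such as namespace, stage, name, and attributes; the values are made up):

```hcl
module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "0.38.1"

  # These label inputs are combined into the bucket and DynamoDB table names,
  # e.g. "acme-prod-terraform-state":
  namespace  = "acme"
  stage      = "prod"
  name       = "terraform"
  attributes = ["state"]
}
```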