
Unable to be used with terraform workspaces #122

Open
eloo opened this issue Oct 6, 2022 · 2 comments
Labels
invalid This doesn't seem right

Comments


eloo commented Oct 6, 2022

Describe the Bug

Hi,
not sure if it's a bug or a missing feature, but it looks like the module is not working properly together with Terraform workspaces.

It seems the issue is that when a workspace is switched, this module tries to create the S3 bucket and DynamoDB table again. But these two resources already exist, so it will fail.

Using the workspace name in the bucket and DynamoDB table names causes issues with the backend.tf, because those names would change every time the workspace changes.
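
For illustration, a workspace-dependent backend would have to look roughly like this (names are placeholders), and Terraform rejects it because backend blocks cannot contain variables or expressions:

terraform {
  backend "s3" {
    # invalid: backend blocks cannot contain variables or expressions
    bucket = "my-tfstate-${terraform.workspace}"
    key    = "terraform.tfstate"
    region = "us-east-1"
  }
}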

Using enabled = false will cause the first workspace, which created the S3 bucket, to destroy it again. That also doesn't sound good :D

Expected Behavior

The module should check whether the expected bucket already exists and then skip the creation.
Maybe something like the enabled flag, but called skip_creation_if_resources_exists or so on.
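
In module terms, I imagine something like this (the input name is hypothetical; it does not exist in the module today):

module "terraform_state_backend" {
  source = "cloudposse/tfstate-backend/aws"
  # hypothetical input, not implemented: skip creation when bucket/table already exist
  skip_creation_if_resources_exists = true
  # ...
}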

Steps to Reproduce

Steps to reproduce the behavior:

  1. Create two workspaces
  2. Apply the first workspace and see the S3 bucket and DynamoDB table created
  3. Switch to the second workspace
  4. Apply the second workspace and see errors during S3 bucket and DynamoDB table creation
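
In CLI terms, roughly (workspace names are placeholders):

terraform workspace new first
terraform apply    # creates the S3 bucket and DynamoDB table
terraform workspace new second
terraform apply    # fails: the module tries to create the same bucket and table again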

Additional Context

Maybe it's also not possible, and for multi-workspace usage we need to create a separate Terraform project that takes care of these resources; in that case, an example would be nice.
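
For illustration, the kind of split I mean (paths are hypothetical):

infra/
├── tfstate-backend/   # separate root module that owns the S3 bucket and DynamoDB table
│   └── main.tf
└── app/               # everything else; its backend "s3" block points at the bucket above
    └── main.tf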

Thanks

eloo added the bug 🐛 An issue with the system label Oct 6, 2022

d3adb5 commented Jan 4, 2023

Instead of setting enabled to false, since you're working with workspaces, you can have the default workspace house the state for the backend and, after bootstrapping, simply work in the other workspaces. Then, set enabled as follows:

module "terraform_state_backend" {
  enabled = terraform.workspace == "default"
  source  = "cloudposse/tfstate-backend/aws"
  version = "0.38.1"
  # ...
}

This way, the module will set the count argument of every resource block it uses to 0, turning them into no-ops. Additionally, if you want the default workspace to provision nothing except the resources supporting the S3+DynamoDB backend, you can separate the rest of your configuration into a module and use count:

module "rest_of_my_config" {
  # Avoid creating anything within if we're in the default workspace.
  count = terraform.workspace == "default" ? 0 : 1
  # ...
}
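
Note that count on module blocks requires Terraform 0.13 or later.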


Nuru commented Apr 23, 2023

The module creates an S3 backend to be used by Terraform. Usually this is done once per company/organization, so I do not understand why you are trying to create one per workspace.

Whether or not you are using workspaces, it is your responsibility to ensure you are not creating duplicate resources by using different inputs for different instantiations of this module. This module provides several inputs you can use to vary the names of created resources (see the example after this list):

  • namespace
  • tenant
  • environment
  • stage
  • name
  • attributes
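
For example (values are placeholders; a sketch, not a recommendation):

module "terraform_state_backend" {
  source  = "cloudposse/tfstate-backend/aws"
  version = "0.38.1"

  namespace = "acme"
  stage     = "prod"
  name      = "terraform"
}

The bucket and DynamoDB table names are derived from these inputs, so a second instantiation with different values will not collide with this one.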

Nuru added the invalid This doesn't seem right label and removed the bug 🐛 An issue with the system label Apr 23, 2023