Terraform Cloud Remote Backend #341

Open

e-moshaya opened this issue May 19, 2019 · 13 comments

@e-moshaya

e-moshaya commented May 19, 2019

I am trying to configure the "remote" backend through backend_configurations.

https://www.terraform.io/docs/backends/types/remote.html

However, this doesn't seem to be working. In my .kitchen.yml, I have:

---
driver:
  name: "terraform"
  root_module_directory: "examples/test_fixture"
  backend_configurations:
    backend: "remote"
    hostname: "app.terraform.io"
    organization: "myorgname"

$$$$$$ Running command `terraform init -input=false -lock=true -lock-timeout=0s -upgrade -force-copy -backend=true -backend-config=backend\=remote -backend-config=hostname\=app.terraform.io -backend-config=organization\=myorgname -get=true -get-plugins=true -verify-plugins=true /Users/emoshaya/Documents/GitHub/terraform-projects/modules/cloudtrail-terraform-module/examples/test_fixture

What's the correct syntax?

@aaron-lane
Collaborator

@e-moshaya: thank you for your interest in the project!

The backend must be defined in the Terraform configuration. backend_configurations provides the ability to specify backend properties as described by the Partial Configuration documentation.
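For example (a minimal sketch reusing the values from the kitchen.yml above; the backend.tf filename is illustrative), the backend type is declared as an empty block in the Terraform configuration, and backend_configurations supplies only the remaining scalar properties:

# examples/test_fixture/backend.tf
terraform {
  backend "remote" {}
}

---
driver:
  name: "terraform"
  root_module_directory: "examples/test_fixture"
  backend_configurations:
    hostname: "app.terraform.io"
    organization: "myorgname"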

@tdsacilowski

Hi there, I have an additional question on this topic... forgive me if I misunderstand how some of this configuration works; I'm still wrapping my head around kitchen-terraform.

It makes sense that you'd define the backend type in your Terraform configuration file and just pass specific parameters for it through your kitchen.yml file. One area where I see an issue, though, is with the remote backend type.

This type takes a number of configuration parameters, one of them being workspaces, which is configured as a block. The documentation for Class: Kitchen::Driver::Terraform indicates that backend_configurations is of the type "Mapping of scalars to scalars", which suggests that I can't have a nested block here.

This would be especially useful for setting a prefix to be added to the workspaces that kitchen-terraform creates via the CLI, since the remote backend also supports a CLI-driven workflow for interfacing with Terraform Cloud and Terraform Enterprise.
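(For context, the nested block I'm referring to looks roughly like this in a Terraform configuration; the prefix value is just an example:)

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "teddyruxpin"

    workspaces {
      prefix = "kt-validate-"
    }
  }
}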

Curious if this is supported and if maybe I'm just missing something?

For reference, I tried the following in my kitchen.yml file:

---
driver:
  name: terraform

  backend_configurations:
    organization: teddyruxpin
    hostname:     app.terraform.io
    workspaces:
      prefix: kt-validate
...

And received the following error response:

Dev/ptfe-testing/terraform-gcp via 🛠 default took 3s
❯ bundle exec kitchen verify
-----> Starting Kitchen (v1.25.0)
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::UserError
>>>>>> Message: Kitchen::Driver::Terraform configuration: backend_configurations {:value=>["must be a hash which includes only symbol keys and string values"]}
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration

@aaron-lane
Collaborator

aaron-lane commented Jun 21, 2019

Hi @tdsacilowski, great question!

The values of the backend_configurations mapping must be scalars because they are ultimately passed as command-line arguments to terraform init. The same is true for variables, and there is an example of a variable with a map value within the kitchen.yml used to test this project.
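Purely as an illustration (not the project's actual kitchen.yml; the variable name example_map is hypothetical and the exact escaping may vary by Terraform version), a map value has to be flattened into a single scalar string:

---
driver:
  name: terraform
  variables:
    # the whole map is passed as one scalar string on the command line
    example_map: '{ key = \"value\" }'
...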

@tdsacilowski

Hi there, I just wanted to follow up on this, since there were a couple of things going on.

TL;DR: I'm still unable to pass workspaces data through the kitchen.yml file.

My first problem was that I was testing with TF 0.12, which does some validation of arguments passed to the command line (see here). Based on the comments in that discussion, it seems that a properly formatted CLI call with backend configs should look something like this:

terraform init -backend-config=organization=my-org \
  -backend-config=workspaces=[{name = "foobar"}]

(workspaces needs to be a list)

But, running that, I get a shell error:

terraform init -backend-config=organization=my-org \
➜   -backend-config=workspaces=[{name = "foobar"}]
zsh: bad pattern: -backend-config=workspaces=[{name

After a bit of digging, I found this issue, which indicates that I need to single-quote the value for workspaces:

terraform init -backend-config=organization=my-org \
  -backend-config=workspaces='[{name = "foobar"}]'

Initializing the backend...

OK, now I know what the format should look like. So I downgraded back to TF 0.11.14 and updated my kitchen.yml file:

---
driver:
  name: terraform

  backend_configurations:
    organization: teddyruxpin
    hostname:     app.terraform.io
    workspaces:   '''[{ name = "foobar" }]'''
...

With the following results:

bundle exec kitchen verify
-----> Starting Kitchen (v1.25.0)
-----> Creating <default-centos>...
       Terraform v0.11.14
       + provider.google v2.9.1

       Your version of Terraform is out of date! The latest version
       is 0.12.2. You can update by downloading from www.terraform.io/downloads.html
$$$$$$ Running command `terraform init -input=false -lock=true -lock-timeout=0s  -upgrade -force-copy -backend=true -backend-config="organization=teddyruxpin" -backend-config="hostname=app.terraform.io" -backend-config="workspaces='[{ name = "foobar" }]'" -get=true -get-plugins=true -verify-plugins=true` in directory /Users/teddy/Dev/ptfe-testing/terraform-gcp

       Initializing the backend...
       Backend configuration changed!

       Terraform has detected that the configuration specified for the backend
       has changed. Terraform will now check for existing state in the backends.


       Error initializing new backend: Error configuring the backend "remote": 1 error occurred:
       	* workspaces: should be a list


>>>>>> ------Exception-------
>>>>>> Class: Kitchen::ActionFailed
>>>>>> Message: 1 actions failed.
>>>>>>     Create failed on instance <default-centos>.  Please see .kitchen/logs/default-centos.log for more details
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration

Looking at the command that KT is composing:

terraform init -input=false -lock=true -lock-timeout=0s  -upgrade -force-copy -backend=true -backend-config="organization=teddyruxpin" -backend-config="hostname=app.terraform.io" -backend-config="workspaces='[{ name = "foobar" }]'" -get=true -get-plugins=true -verify-plugins=true

I see that there are double-quotes around workspaces=....

If I take just that command and remove those double-quotes:

terraform init -input=false -lock=true -lock-timeout=0s  -upgrade -force-copy -backend=true -backend-config="organization=teddyruxpin" -backend-config="hostname=app.terraform.io" -backend-config=workspaces='[{ name = "foobar" }]' -get=true -get-plugins=true -verify-plugins=true

Initializing the backend...

Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
- Downloading plugin for provider "google" (2.9.1)...
^C
The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.google: version = "~> 2.9"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

It works! So it seems that the way KT parses values and composes the terraform init command is causing the parsing errors.

I'm not sure what can be done about that at this point, though. Thoughts? Suggestions?

@aaron-lane
Collaborator

@tdsacilowski can you try workspaces: '[ { name = \"foobar\" } ]'?
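(In the context of the kitchen.yml above, that would look something like the following sketch, reusing the same organization and hostname:)

---
driver:
  name: terraform

  backend_configurations:
    organization: teddyruxpin
    hostname:     app.terraform.io
    workspaces:   '[ { name = \"foobar\" } ]'
...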

Side note: I'm glad to see that Kitchen-Terraform is being used to thwart the evil ambitions of M.A.V.O.

@lawliet89

I think this is a limitation of the terraform init CLI.

The relevant code shows that the CLI validates only the Attributes part of the Backend, whereas workspaces are Blocks.

@aaron-lane
Collaborator

If that is the case, we could also add an attribute to support passing pathnames to -backend-config.
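For reference, terraform init already accepts a path with -backend-config, so something like the following could carry the nested block (the backend.hcl filename and the prefix are illustrative, and whether the file form accepts the workspaces block may depend on the Terraform version):

# backend.hcl
hostname     = "app.terraform.io"
organization = "teddyruxpin"

workspaces {
  prefix = "kt-"
}

terraform init -backend-config=backend.hcl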

@tdsacilowski

Just a quick update... using workspaces: '[ { name = \"foobar\" } ]' worked, allowing the configuration to pass through to the Terraform CLI. Alternatively, -backend-config can also take a path to a file containing the partial config, so adding the appropriate attribute would seem to make sense.

However, I still ran into other issues on the TF side, even though the config passed through successfully. I was discussing this with some engineers internally, but it seems Slack is currently down, so I'll post more details as soon as it's back up.

In short, though, defining a workspaces.name or workspaces.prefix for the remote backend determines the name of the workspace that would be created in Terraform Enterprise/Terraform Cloud (TFE/TFC) during the terraform init step, creating the workspace if it doesn't exist. However, KT creates its own workspace based on the suite(s)/platform(s) defined in the kitchen.yml file, and there seems to be a conflict that causes an "Error loading state: named states not supported" error from TF.

However, taking a few steps back here... I'm wondering whether having KT run its tests in a remote workspace makes much sense. After thinking about it a bit, I think it makes more sense for KT to run its tests locally, since the point is to provision resources, test, and destroy... meaning fairly ephemeral resources/state that doesn't need to be collaborated on by multiple teams, etc., whereas the workspaces/resources on TFE/TFC would represent actual deployed, longer-lived resources/state.

I want to be very careful here to not make any assumptions on what KT should do vs what it can do, so I'd be curious as to what others think?

@aaron-lane
Collaborator

@tdsacilowski My recommendation is to stick with local backends when using KT, for the reasons you identified. While a remote backend usually isn't a critical element in testing the behaviour of a module, if there is a need to improve support for remote backends then we can certainly investigate it.

@darrylb-github

My use case for a remote backend is that my tests create AWS resources, and I run Test Kitchen from Docker in CI. If something goes wrong before Kitchen can clean up with --destroy=always, I don't want to lose the state; otherwise it becomes a pain to clean up the AWS resources created during testing. I'd rather not try to preserve/extract the state manually from the container environment either. A dedicated remote state backend is reassuring in this scenario, as I know I can always easily clean up if something goes wrong during tests.

@aaron-lane
Collaborator

@darrylb-github that is a good point! Are there limitations of the current design which are preventing you from using a remote backend?

@darrylb-github

@aaron-lane I can use the s3 remote backend and it seems OK. Using Terraform Cloud would have been slightly preferable, but it's not a dealbreaker. Thanks.
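(For anyone with the same use case, a minimal s3 partial configuration fits the existing scalar-only shape of backend_configurations; the bucket, key, and region below are placeholders:)

terraform {
  backend "s3" {}
}

---
driver:
  name: terraform
  backend_configurations:
    bucket: my-terraform-test-state
    key:    kitchen-terraform/terraform.tfstate
    region: eu-west-1
...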

@aaron-lane
Collaborator

I think we can explore updating backend_configurations to support nested objects. 👍
