
terraform-nixos doesn't work on Terraform Cloud #41

Open
bbigras opened this issue Jan 14, 2021 · 20 comments

bbigras commented Jan 14, 2021

Describe the bug

Terraform v0.14.4
Configuring remote state backend...
Initializing Terraform configuration...
tls_private_key.state_ssh_key: Refreshing state... [id=73c4bc5aee756477cd0e5329a0217a5d5538bed7]
local_file.machine_ssh_key: Refreshing state... [id=403f29bde18a7b04fbb51d277140357369628b1f]
aws_key_pair.generated_key: Refreshing state... [id=generated-key-597bc4e3ec93b09f8543849173beca0a55dd2c5ce00ad482b4ca79bde84c7732]
aws_security_group.ssh_and_egress: Refreshing state... [id=sg-0ead032fef275ba39]
aws_instance.machine: Refreshing state... [id=i-063cd3c8a22644746]

Error: failed to execute ".terraform/modules/deploy_nixos/deploy_nixos/nixos-instantiate.sh": running (instantiating):  'nix-instantiate' '--show-trace' '--expr' $'\n  { system, configuration, ... }:\n  let\n    os = import <nixpkgs/nixos> { inherit system configuration; };\n    inherit (import <nixpkgs/lib>) concatStringsSep;\n  in {\n    substituters = concatStringsSep " " os.config.nix.binaryCaches;\n    trusted-public-keys = concatStringsSep " " os.config.nix.binaryCachePublicKeys;\n    drv_path = os.system.drvPath;\n    out_path = os.system;\n    inherit (builtins) currentSystem;\n  }' '--argstr' 'configuration' '/terraform/configuration.nix' '--argstr' 'system' 'x86_64-linux' -A out_path
.terraform/modules/deploy_nixos/deploy_nixos/nixos-instantiate.sh: line 44: nix-instantiate: command not found

To Reproduce

Follow the guide at https://nixos.org/guides/deploying-nixos-using-terraform.html (the error occurs at the step that adds the configuration.nix file).

Expected behavior

Environment

  • system: "x86_64-linux"
  • host os: Linux 5.10.1-zen1, NixOS, 21.03.20210109.257cbbc (Okapi)
  • multi-user?: yes
  • sandbox: yes
  • version: nix-env (Nix) 2.4pre20201205_a5d85d0
  • channels(root): "nixos-21.03pre260232.733e537a8ad"
  • channels(bbigras): "home-manager-20.09"
  • nixpkgs: /nix/var/nix/profiles/per-user/root/channels/nixos

Terraform v0.14.4

I tried the module at commits 5f5a040 and f0f6232.

Additional context

bbigras commented Jan 14, 2021

Maybe it's because I forgot to run terraform init.

bbigras commented Jan 14, 2021

Ah no, I still get the same message.

zimbatm commented Feb 9, 2021

It's as the error says, nix-instantiate needs to be installed on the machine that runs Terraform. This can also happen when running nix-shell --pure if Nix is not part of the shell's runtime closure.
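For the nix-shell --pure case, a minimal sketch (assuming the packages come from nixpkgs) is to include Nix itself in the shell so nix-instantiate stays in the runtime closure:

# keep nix (and terraform) inside the pure shell so nix-instantiate is on PATH
nix-shell --pure -p nix terraform --run 'terraform apply'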

bbigras commented Feb 10, 2021

I'm running this on NixOS. nix-instantiate is in my path and I'm not using nix-shell.

zimbatm commented Feb 11, 2021

Are you using Terraform Cloud by any chance? If the execution is done on the remote worker it won't have nix-instantiate installed.

bbigras commented Feb 11, 2021

Yes, sorry. I didn't realize I was using it.

I'm guessing there's no way around that.

Thanks.

bbigras closed this as completed Feb 11, 2021
zimbatm commented Feb 11, 2021

Would you be interested in investigating whether this could be made to work?
A local-exec provisioner could be added to a null_resource to pull a static version of Nix. See https://twitter.com/zimbatm/status/1359160894249385988?s=20

bbigras commented Feb 12, 2021

resource "null_resource" "cluster" {
  provisioner "local-exec" {
    command = "sh <(curl -L https://github.com/numtide/nix-flakes-installer/releases/download/nix-2.4pre20210126_f15f0b8/install)"
    interpreter = ["bash", "-c"]
  }
}
module "deploy_nixos" {
    source = "git::https://github.com/tweag/terraform-nixos.git//deploy_nixos?ref=5f5a0408b299874d6a29d1271e9bffeee4c9ca71"
    nixos_config = "${path.module}/configuration.nix"
    target_host = aws_instance.machine.public_ip
    ssh_private_key_file = local_file.machine_ssh_key.filename
  ssh_agent = false
  depends_on = [ "null_resource.cluster" ]
}
Error: Error running command 'sh <(curl -L https://github.com/numtide/nix-flakes-installer/releases/download/nix-2.4pre20210126_f15f0b8/install)': exit status 1. Output:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   615  100   615    0     0   4300      0 --:--:-- --:--:-- --:--:--  4300
100  3865  100  3865    0     0  15775      0 --:--:-- --:--:-- --:--:-- 15775
/dev/fd/63: 63: /dev/fd/63: --tarball-url-prefix: not found
downloading Nix 2.4pre20210126_f15f0b8 binary tarball for x86_64-linux from 'https://github.com/numtide/nix-flakes-installer/releases/download/nix-2.4pre20210126_f15f0b8/nix-2.4pre20210126_f15f0b8-x86_64-linux.tar.xz' to '/tmp/nix-binary-tarball-unpack.Khl0ernFOe'...
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   654  100   654    0     0   5190      0 --:--:-- --:--:-- --:--:--  5232
100 17.0M  100 17.0M    0     0  14.8M      0  0:00:01  0:00:01 --:--:-- 20.9M
Note: a multi-user installation is possible. See https://nixos.org/nix/manual/#sect-multi-user-installation
performing a single-user installation of Nix...
directory /nix does not exist; creating it by running 'mkdir -m 0755 /nix && chown terraform /nix' using sudo
sudo: unable to stat /etc/sudoers: No such file or directory
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
/tmp/nix-binary-tarball-unpack.Khl0ernFOe/unpack/nix-2.4pre20210126_f15f0b8-x86_64-linux/install: please manually run 'mkdir -m 0755 /nix && chown terraform /nix' as root to create /nix

Terraform Cloud does not allow you to elevate a command's permissions with sudo during Terraform runs. This means you cannot install packages using the worker OS's normal package management tools. However, you can install and execute standalone binaries in Terraform's working directory.

https://www.terraform.io/docs/cloud/run/install-software.html#only-install-standalone-binaries
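
A sketch of the pattern the docs allow, using a placeholder URL (not a real asset) for a hypothetical static Nix build: fetch the binary into the working directory, mark it executable, and call it by relative path, all without sudo.

# placeholder URL: substitute a real static Nix build
curl -L -o ./nix-static https://example.com/nix-static-x86_64-linux
chmod +x ./nix-static
./nix-static --version   # runs from the working directory, no root required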

zimbatm changed the title from "nixos-instantiate.sh: line 44: nix-instantiate: command not found" to "terraform-nixos doesn't work on Terraform Cloud" on Feb 13, 2021
zimbatm reopened this Feb 13, 2021
zimbatm commented Feb 13, 2021

It's a bit of uncharted territory, so I'm not sure it will work. Ideally, something like this would work:

  1. Pull the buildStatic.x86_64-linux binary from https://github.com/numtide/nix-flakes-installer/releases/tag/nix-2.4pre20210207_fd6eaa1
  2. Create a wrapper script that runs exec -a "$0" ./buildStatic.x86_64-linux --store /tmp/nix "$@".
  3. Add symlink aliases for all the Nix commands somewhere on the PATH that point to this wrapper script (rough sketch below).

Unfortunately, --store wasn't working great in my previous attempts.
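
A rough sketch of steps 2 and 3, assuming buildStatic.x86_64-linux is already in the working directory and that the static binary dispatches on argv[0] like the regular Nix multi-call binary (untested, and the --store caveat above still applies):

cat > nix-wrapper <<'EOF'
#!/usr/bin/env bash
# Re-exec the static Nix binary under the name this wrapper was invoked as,
# pointing it at a user-writable store location.
exec -a "$0" "$(dirname "$0")/buildStatic.x86_64-linux" --store /tmp/nix "$@"
EOF
chmod +x nix-wrapper buildStatic.x86_64-linux

# symlink aliases so scripts find the usual command names
for cmd in nix nix-instantiate nix-store nix-copy-closure; do
  ln -sf nix-wrapper "$cmd"
done
export PATH="$PWD:$PATH"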

Another approach would be to use nix-user-chroot.

Another thing to figure out is whether Terraform Cloud provides any user-writable location on the PATH. If not, deploy_nixos could be extended with an option to set the PATH, or to point at the folder that contains the Nix installation.

bbigras commented Mar 3, 2021

Could https://github.com/DavHau/nix-portable also help with this?

zimbatm commented Mar 4, 2021

Yes, that might also be a good fallback.

bbigras commented Mar 9, 2021

I tried it but got an error:

null_resource.cluster (local-exec): proot error: '/home/terraform/.nix-portable/store/8yx9ys5a40vg5r8hk14qlhrfgapmic3v-nix-2.4pre20210205_480426a/bin/63' not found (root = /home/terraform/.nix-portable/emptyroot, cwd = /terraform, $PATH=(null))

terraform {
  backend "remote" {
    organization = "bbigras"

    workspaces {
      name = "test-nix-portable"
    }
  }
}

resource "null_resource" "cluster" {
  provisioner "local-exec" {
    command = "curl https://gitlab.com/proot/proot/-/jobs/981080842/artifacts/raw/dist/proot > proot"
  }

  provisioner "local-exec" {
    command = "chmod u+x proot"
  }

  provisioner "local-exec" {
    command = "ls -l /home/terraform"
  }

  provisioner "local-exec" {
    command = "/terraform/proot --help"
  }

  provisioner "local-exec" {
    command = "ls -l"
  }

  provisioner "local-exec" {
    command = "pwd"
  }

  provisioner "local-exec" {
    command = "bash <(curl -L https://github.com/DavHau/nix-portable/releases/download/v003/nix-portable)"

    interpreter = ["bash", "-c"]
    environment = {
      NP_PROOT = "/terraform/proot"
      NP_RUNTIME = "proot"
      NP_DEBUG = "1"
    #   NP_RUNTIME = "bwrap"
    }
  }
}

provider "aws" {
  region = "ca-central-1"
}

module "nixos_image" {
    source  = "git::https://github.com/tweag/terraform-nixos.git//aws_image_nixos?ref=5f5a0408b299874d6a29d1271e9bffeee4c9ca71"
    release = "20.09"
}

resource "aws_instance" "machine" {
  ami             = module.nixos_image.ami
  instance_type   = "t3.micro"
  root_block_device {
    volume_size = 50 # GiB
  }
}

module "deploy_nixos" {
  source = "git::https://github.com/tweag/terraform-nixos.git//deploy_nixos?ref=5f5a0408b299874d6a29d1271e9bffeee4c9ca71"
  target_host = aws_instance.machine.public_ip
  ssh_agent = false
  depends_on = [ null_resource.cluster ]
}
Log:
Running apply in the remote backend. Output will stream here. Pressing Ctrl-C
will cancel the remote apply if it's still pending. If the apply started it
will stop streaming the logs, but will not stop the apply running remotely.

Preparing the remote apply...

To view this run in a browser, visit:
https://app.terraform.io/app/bbigras/test-nix-portable/runs/run-Kk93kpHJ2wWY3SSL

Waiting for the plan to start...

Terraform v0.14.7
Configuring remote state backend...
Initializing Terraform configuration...

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:

  # aws_instance.machine will be created
  + resource "aws_instance" "machine" {
      + ami                          = "ami-06d5ee429f153f856"
      + arn                          = (known after apply)
      + associate_public_ip_address  = (known after apply)
      + availability_zone            = (known after apply)
      + cpu_core_count               = (known after apply)
      + cpu_threads_per_core         = (known after apply)
      + get_password_data            = false
      + host_id                      = (known after apply)
      + id                           = (known after apply)
      + instance_state               = (known after apply)
      + instance_type                = "t3.micro"
      + ipv6_address_count           = (known after apply)
      + ipv6_addresses               = (known after apply)
      + key_name                     = (known after apply)
      + outpost_arn                  = (known after apply)
      + password_data                = (known after apply)
      + placement_group              = (known after apply)
      + primary_network_interface_id = (known after apply)
      + private_dns                  = (known after apply)
      + private_ip                   = (known after apply)
      + public_dns                   = (known after apply)
      + public_ip                    = (known after apply)
      + secondary_private_ips        = (known after apply)
      + security_groups              = (known after apply)
      + source_dest_check            = true
      + subnet_id                    = (known after apply)
      + tenancy                      = (known after apply)
      + vpc_security_group_ids       = (known after apply)

      + ebs_block_device {
          + delete_on_termination = (known after apply)
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + snapshot_id           = (known after apply)
          + tags                  = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = (known after apply)
          + volume_type           = (known after apply)
        }

      + enclave_options {
          + enabled = (known after apply)
        }

      + ephemeral_block_device {
          + device_name  = (known after apply)
          + no_device    = (known after apply)
          + virtual_name = (known after apply)
        }

      + metadata_options {
          + http_endpoint               = (known after apply)
          + http_put_response_hop_limit = (known after apply)
          + http_tokens                 = (known after apply)
        }

      + network_interface {
          + delete_on_termination = (known after apply)
          + device_index          = (known after apply)
          + network_interface_id  = (known after apply)
        }

      + root_block_device {
          + delete_on_termination = true
          + device_name           = (known after apply)
          + encrypted             = (known after apply)
          + iops                  = (known after apply)
          + kms_key_id            = (known after apply)
          + throughput            = (known after apply)
          + volume_id             = (known after apply)
          + volume_size           = 50
          + volume_type           = (known after apply)
        }
    }

  # null_resource.cluster will be created
  + resource "null_resource" "cluster" {
      + id = (known after apply)
    }

  # module.deploy_nixos.data.external.nixos-instantiate will be read during apply
  # (config refers to values not yet known)
 <= data "external" "nixos-instantiate"  {
      + id      = (known after apply)
      + program = [
          + ".terraform/modules/deploy_nixos/deploy_nixos/nixos-instantiate.sh",
          + "-",
          + "",
          + ".",
          + "--argstr",
          + "system",
          + "x86_64-linux",
        ]
      + result  = (known after apply)
    }

  # module.deploy_nixos.null_resource.deploy_nixos will be created
  + resource "null_resource" "deploy_nixos" {
      + id       = (known after apply)
      + triggers = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.

null_resource.cluster (local-exec): Executing: ["/bin/sh" "-c" "curl https://gitlab.com/proot/proot/-/jobs/981080842/artifacts/raw/dist/proot > proot"]
null_resource.cluster (local-exec):   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
null_resource.cluster (local-exec):                                  Dload  Upload   Total   Spent    Left  Speed
aws_instance.machine: Creating...
null_resource.cluster (local-exec):   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
null_resource.cluster (local-exec): 100 1899k  100 1899k    0     0  4724k      0 --:--:-- --:--:-- --:--:-- 4724k
null_resource.cluster: Provisioning with 'local-exec'...
null_resource.cluster (local-exec): Executing: ["/bin/sh" "-c" "chmod u+x proot"]
null_resource.cluster: Provisioning with 'local-exec'...
null_resource.cluster (local-exec): Executing: ["/bin/sh" "-c" "ls -l /home/terraform"]
null_resource.cluster (local-exec): total 32996
null_resource.cluster (local-exec): -rw-r--r-- 1 terraform terraform 33783879 Mar  9 16:36 terraform.zip
null_resource.cluster: Provisioning with 'local-exec'...
null_resource.cluster (local-exec): Executing: ["/bin/sh" "-c" "/terraform/proot --help"]
null_resource.cluster (local-exec): proot v5.2.0-alpha-8c0ccf7d: chroot, mount --bind, and binfmt_misc without privilege/setup.

null_resource.cluster (local-exec): Usage:
null_resource.cluster (local-exec):   proot [option] ... [command]


null_resource.cluster (local-exec): Regular options:
null_resource.cluster (local-exec):   -r *path*, --rootfs=*path*
null_resource.cluster (local-exec): 	Use *path* as the new guest root file-system, default is /.

null_resource.cluster (local-exec): 	The specified path typically contains a Linux distribution where
null_resource.cluster (local-exec): 	all new programs will be confined.  The default rootfs is /
null_resource.cluster (local-exec): 	when none is specified, this makes sense when the bind mechanism
null_resource.cluster (local-exec): 	is used to relocate host files and directories, see the -b
null_resource.cluster (local-exec): 	option and the Examples section for details.

null_resource.cluster (local-exec): 	It is recommended to use the -R or -S options instead.

null_resource.cluster (local-exec):   -b *path*, --bind=*path*, -m *path*, --mount=*path*
null_resource.cluster (local-exec): 	Make the content of *path* accessible in the guest rootfs.

null_resource.cluster (local-exec): 	This option makes any file or directory of the host rootfs
null_resource.cluster (local-exec): 	accessible in the confined environment just as if it were part of
null_resource.cluster (local-exec): 	the guest rootfs.  By default the host path is bound to the same
null_resource.cluster (local-exec): 	path in the guest rootfs but users can specify any other location
null_resource.cluster (local-exec): 	with the syntax: -b *host_path*:*guest_location*.  If the
null_resource.cluster (local-exec): 	guest location is a symbolic link, it is dereferenced to ensure
null_resource.cluster (local-exec): 	the new content is accessible through all the symbolic links that
null_resource.cluster (local-exec): 	point to the overlaid content.  In most cases this default
null_resource.cluster (local-exec): 	behavior shouldn't be a problem, although it is possible to
null_resource.cluster (local-exec): 	explicitly not dereference the guest location by appending it the
null_resource.cluster (local-exec): 	! character: -b *host_path*:*guest_location!*.

null_resource.cluster (local-exec):   -q *command*, --qemu=*command*
null_resource.cluster (local-exec): 	Execute guest programs through QEMU as specified by *command*.

null_resource.cluster (local-exec): 	Each time a guest program is going to be executed, PRoot inserts
null_resource.cluster (local-exec): 	the QEMU user-mode command in front of the initial request.
null_resource.cluster (local-exec): 	That way, guest programs actually run on a virtual guest CPU
null_resource.cluster (local-exec): 	emulated by QEMU user-mode.  The native execution of host programs
null_resource.cluster (local-exec): 	is still effective and the whole host rootfs is bound to
null_resource.cluster (local-exec): 	/host-rootfs in the guest environment.

null_resource.cluster (local-exec):   -w *path*, --pwd=*path*, --cwd=*path*
null_resource.cluster (local-exec): 	Set the initial working directory to *path*.

null_resource.cluster (local-exec): 	Some programs expect to be launched from a given directory but do
null_resource.cluster (local-exec): 	not perform any chdir by themselves.  This option avoids the
null_resource.cluster (local-exec): 	need for running a shell and then entering the directory manually.

null_resource.cluster (local-exec):   --kill-on-exit
null_resource.cluster (local-exec): 	Kill all processes on command exit.

null_resource.cluster (local-exec): 	When the executed command leaves orphean or detached processes
null_resource.cluster (local-exec): 	around, proot waits until all processes possibly terminate. This option forces
null_resource.cluster (local-exec): 	the immediate termination of all tracee processes when the main command exits.

null_resource.cluster (local-exec):   -v *value*, --verbose=*value*
null_resource.cluster (local-exec): 	Set the level of debug information to *value*.

null_resource.cluster (local-exec): 	The higher the integer value is, the more detailed debug
null_resource.cluster (local-exec): 	information is printed to the standard error stream.  A negative
null_resource.cluster (local-exec): 	value makes PRoot quiet except on fatal errors.

null_resource.cluster (local-exec):   -V, --version, --about
null_resource.cluster (local-exec): 	Print version, copyright, license and contact, then exit.

null_resource.cluster (local-exec):   -h, --help, --usage
null_resource.cluster (local-exec): 	Print the version and the command-line usage, then exit.


null_resource.cluster (local-exec): Extension options:
null_resource.cluster (local-exec):   -k *string*, --kernel-release=*string*
null_resource.cluster (local-exec): 	Make current kernel appear as kernel release *string*.

null_resource.cluster (local-exec): 	If a program is run on a kernel older than the one expected by its
null_resource.cluster (local-exec): 	GNU C library, the following error is reported: "FATAL: kernel too
null_resource.cluster (local-exec): 	old".  To be able to run such programs, PRoot can emulate some of
null_resource.cluster (local-exec): 	the features that are available in the kernel release specified by
null_resource.cluster (local-exec): 	*string* but that are missing in the current kernel.

null_resource.cluster (local-exec):   -0, --root-id
null_resource.cluster (local-exec): 	Make current user appear as "root" and fake its privileges.

null_resource.cluster (local-exec): 	Some programs will refuse to work if they are not run with "root"
null_resource.cluster (local-exec): 	privileges, even if there is no technical reason for that.  This
null_resource.cluster (local-exec): 	is typically the case with package managers.  This option allows
null_resource.cluster (local-exec): 	users to bypass this kind of limitation by faking the user/group
null_resource.cluster (local-exec): 	identity, and by faking the success of some operations like
null_resource.cluster (local-exec): 	changing the ownership of files, changing the root directory to
null_resource.cluster (local-exec): 	/, ...  Note that this option is quite limited compared to
null_resource.cluster (local-exec): 	fakeroot.

null_resource.cluster (local-exec):   -i *string*, --change-id=*string*
null_resource.cluster (local-exec): 	Make current user and group appear as *string* "uid:gid".

null_resource.cluster (local-exec): 	This option makes the current user and group appear as uid and
null_resource.cluster (local-exec): 	gid.  Likewise, files actually owned by the current user and
null_resource.cluster (local-exec): 	group appear as if they were owned by uid and gid instead.
null_resource.cluster (local-exec): 	Note that the -0 option is the same as -i 0:0.

null_resource.cluster (local-exec):   -p *string*, --port=*string*
null_resource.cluster (local-exec): 	Map ports to others with the syntax as *string* "port_in:port_out".

null_resource.cluster (local-exec): 	This option makes PRoot intercept bind and connect system calls,
null_resource.cluster (local-exec): 	and change the port they use. The port map is specified
null_resource.cluster (local-exec): 	with the syntax: -b *port_in*:*port_out*. For example,
null_resource.cluster (local-exec): 	an application that runs a MySQL server binding to 5432 wants
null_resource.cluster (local-exec): 	to cohabit with other similar application, but doesn't have an
null_resource.cluster (local-exec): 	option to change its port. PRoot can be used here to modify
null_resource.cluster (local-exec): 	this port: proot -p 5432:5433 myapplication. With this command,
null_resource.cluster (local-exec): 	the MySQL server will be bound to the port 5433.
null_resource.cluster (local-exec): 	This command can be repeated multiple times to map multiple ports.

null_resource.cluster (local-exec):   -n, --netcoop
null_resource.cluster (local-exec): 	Enable the network cooperation mode.

null_resource.cluster (local-exec): 	This option makes PRoot intercept bind() system calls and
null_resource.cluster (local-exec): 	change the port they are binding to to 0. With this, the system will
null_resource.cluster (local-exec): 	allocate an available port. Each time this is done, a new entry is added
null_resource.cluster (local-exec): 	to the port mapping entries, so that corresponding connect() system calls
null_resource.cluster (local-exec): 	use the same resulting port. This network "cooperation" makes it possible
null_resource.cluster (local-exec): 	to run multiple instances of a same program without worrying about the same ports
null_resource.cluster (local-exec): 	being used twice.

null_resource.cluster (local-exec):   -l, --link2symlink
null_resource.cluster (local-exec): 	Enable the link2symlink extension.

null_resource.cluster (local-exec): 	This extension causes proot to create a symlink when a hardlink
null_resource.cluster (local-exec): 	should be created. Some environments don't let the user create a hardlink, this
null_resource.cluster (local-exec): 	option should be used to fix it.


null_resource.cluster (local-exec): Alias options:
null_resource.cluster (local-exec):   -R *path*
null_resource.cluster (local-exec): 	Alias: -r *path* + a couple of recommended -b.

null_resource.cluster (local-exec): 	Programs isolated in *path*, a guest rootfs, might still need to
null_resource.cluster (local-exec): 	access information about the host system, as it is illustrated in
null_resource.cluster (local-exec): 	the Examples section of the manual.  These host information
null_resource.cluster (local-exec): 	are typically: user/group definition, network setup, run-time
null_resource.cluster (local-exec): 	information, users' files, ...  On all Linux distributions, they
null_resource.cluster (local-exec): 	all lie in a couple of host files and directories that are
null_resource.cluster (local-exec): 	automatically bound by this option:

null_resource.cluster (local-exec): 	    * /etc/host.conf
null_resource.cluster (local-exec): 	    * /etc/hosts
null_resource.cluster (local-exec): 	    * /etc/hosts.equiv
null_resource.cluster (local-exec): 	    * /etc/mtab
null_resource.cluster (local-exec): 	    * /etc/netgroup
null_resource.cluster (local-exec): 	    * /etc/networks
null_resource.cluster (local-exec): 	    * /etc/passwd
null_resource.cluster (local-exec): 	    * /etc/group
null_resource.cluster (local-exec): 	    * /etc/nsswitch.conf
null_resource.cluster (local-exec): 	    * /etc/resolv.conf
null_resource.cluster (local-exec): 	    * /etc/localtime
null_resource.cluster (local-exec): 	    * /dev/
null_resource.cluster (local-exec): 	    * /sys/
null_resource.cluster (local-exec): 	    * /proc/
null_resource.cluster (local-exec): 	    * /tmp/
null_resource.cluster (local-exec): 	    * /run/
null_resource.cluster (local-exec): 	    * /var/run/dbus/system_bus_socket
null_resource.cluster (local-exec): 	    * $HOME

null_resource.cluster (local-exec):   -S *path*
null_resource.cluster (local-exec): 	Alias: -0 -r *path* + a couple of recommended -b.

null_resource.cluster (local-exec): 	This option is useful to safely create and install packages into
null_resource.cluster (local-exec): 	the guest rootfs.  It is similar to the -R option except it
null_resource.cluster (local-exec): 	enables the -0 option and binds only the following minimal set
null_resource.cluster (local-exec): 	of paths to avoid unexpected changes on host files:

null_resource.cluster (local-exec): 	    * /etc/host.conf
null_resource.cluster (local-exec): 	    * /etc/hosts
null_resource.cluster (local-exec): 	    * /etc/nsswitch.conf
null_resource.cluster (local-exec): 	    * /etc/resolv.conf
null_resource.cluster (local-exec): 	    * /dev/
null_resource.cluster (local-exec): 	    * /sys/
null_resource.cluster (local-exec): 	    * /proc/
null_resource.cluster (local-exec): 	    * /tmp/
null_resource.cluster (local-exec): 	    * /run/shm
null_resource.cluster (local-exec): 	    * $HOME

null_resource.cluster (local-exec): Visit https://proot-me.github.io for help, bug reports, suggestions, patches, ...
null_resource.cluster (local-exec): Copyright (C) 2020 PRoot Developers, licensed under GPL v2 or later.
null_resource.cluster: Provisioning with 'local-exec'...
null_resource.cluster (local-exec): Executing: ["/bin/sh" "-c" "ls -l"]
null_resource.cluster (local-exec): total 1924
null_resource.cluster (local-exec): -rw-r--r-- 1 terraform terraform     279 Mar  9 16:36 log
null_resource.cluster (local-exec): -rw-r--r-- 1 terraform terraform    1603 Mar  9 16:36 main.tf
null_resource.cluster (local-exec): -rwxrw-r-- 1 terraform terraform 1944952 Mar  9 16:36 proot
null_resource.cluster (local-exec): -rw-rw-r-- 1 terraform terraform    9525 Mar  9 16:36 terraform.tfplan
null_resource.cluster (local-exec): -rw-r--r-- 1 terraform terraform       0 Mar  9 16:36 terraform.tfvars
null_resource.cluster (local-exec): -rw------- 1 terraform terraform     191 Mar  9 16:36 zzz_backend_override.tf.json
null_resource.cluster: Provisioning with 'local-exec'...
null_resource.cluster (local-exec): Executing: ["/bin/sh" "-c" "pwd"]
null_resource.cluster (local-exec): /terraform
null_resource.cluster: Provisioning with 'local-exec'...
null_resource.cluster (local-exec): Executing: ["bash" "-c" "bash <(curl -L https://github.com/DavHau/nix-portable/releases/download/v003/nix-portable)"]
null_resource.cluster (local-exec):   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
null_resource.cluster (local-exec):                                  Dload  Upload   Total   Spent    Left  Speed
null_resource.cluster (local-exec):   0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
null_resource.cluster (local-exec): 100   620  100   620    0     0   6813      0 --:--:-- --:--:-- --:--:--  6813
null_resource.cluster (local-exec): figuring out ssl certs
null_resource.cluster (local-exec): SSL_CERT_FILE not defined. trying to find certs automatically
null_resource.cluster (local-exec):   1 67.8M    1 1292k    0     0  1587k      0  0:00:43 --:--:--  0:00:43 1587k
null_resource.cluster (local-exec):   4 67.8M    4 2838k    0     0  1647k      0  0:00:42  0:00:01  0:00:41 1701k
null_resource.cluster (local-exec): figuring out which runtime to use
null_resource.cluster (local-exec): bwrap executable: /home/terraform/.nix-portable/bin/bwrap
null_resource.cluster (local-exec): proot executable: /terraform/proot
null_resource.cluster (local-exec): runtime selected via NP_RUNTIME : proot
null_resource.cluster (local-exec): creating bind args for /etc/ssl
null_resource.cluster (local-exec):   6 67.8M    6 4385k    0     0  1589k      0  0:00:43  0:00:02  0:00:41 1589k
null_resource.cluster (local-exec):  10 67.8M   10 7479k    0     0  1800k      0  0:00:38  0:00:04  0:00:34 1852k
null_resource.cluster (local-exec):  12 67.8M   12 9029k    0     0  1858k      0  0:00:37  0:00:04  0:00:33 1912k
null_resource.cluster (local-exec):  15 67.8M   15 10.3M    0     0  1902k      0  0:00:36  0:00:05  0:00:31 1956k
null_resource.cluster (local-exec):  19 67.8M   19 13.3M    0     0  1965k      0  0:00:35  0:00:06  0:00:29 2070k
null_resource.cluster (local-exec):  21 67.8M   21 14.8M    0     0  1990k      0  0:00:34  0:00:07  0:00:27 2217k
null_resource.cluster (local-exec):  26 67.8M   26 17.9M    0     0  2026k      0  0:00:34  0:00:09  0:00:25 2218k
null_resource.cluster: Still creating... [10s elapsed]
aws_instance.machine: Still creating... [10s elapsed]
null_resource.cluster (local-exec):  28 67.8M   28 19.4M    0     0  2041k      0  0:00:34  0:00:09  0:00:25 2223k
null_resource.cluster (local-exec):  33 67.8M   33 22.4M    0     0  2064k      0  0:00:33  0:00:11  0:00:22 2226k
aws_instance.machine: Creation complete after 13s [id=i-057c553531c2b454f]
null_resource.cluster (local-exec):  35 67.8M   35 23.9M    0     0  2071k      0  0:00:33  0:00:11  0:00:22 2223k
null_resource.cluster (local-exec):  37 67.8M   37 25.4M    0     0  2076k      0  0:00:33  0:00:12  0:00:21 2209k
null_resource.cluster (local-exec):  41 67.8M   41 28.4M    0     0  2091k      0  0:00:33  0:00:13  0:00:20 2211k
null_resource.cluster (local-exec):  44 67.8M   44 29.9M    0     0  2098k      0  0:00:33  0:00:14  0:00:19 2212k
null_resource.cluster (local-exec):  48 67.8M   48 32.9M    0     0  2108k      0  0:00:32  0:00:16  0:00:16 2209k
null_resource.cluster (local-exec):  50 67.8M   50 34.5M    0     0  2114k      0  0:00:32  0:00:16  0:00:16 2216k
null_resource.cluster (local-exec):  55 67.8M   55 37.5M    0     0  2123k      0  0:00:32  0:00:18  0:00:14 2228k
null_resource.cluster (local-exec):  57 67.8M   57 39.0M    0     0  2127k      0  0:00:32  0:00:18  0:00:14 2229k
null_resource.cluster: Still creating... [20s elapsed]
null_resource.cluster (local-exec):  59 67.8M   59 40.5M    0     0  2130k      0  0:00:32  0:00:19  0:00:13 2226k
null_resource.cluster (local-exec):  64 67.8M   64 43.5M    0     0  2137k      0  0:00:32  0:00:20  0:00:12 2230k
null_resource.cluster (local-exec):  66 67.8M   66 45.0M    0     0  2140k      0  0:00:32  0:00:21  0:00:11 2231k
null_resource.cluster (local-exec):  70 67.8M   70 48.0M    0     0  2146k      0  0:00:32  0:00:22  0:00:10 2231k
null_resource.cluster (local-exec):  73 67.8M   73 49.5M    0     0  2147k      0  0:00:32  0:00:23  0:00:09 2225k
null_resource.cluster (local-exec):  77 67.8M   77 52.6M    0     0  2150k      0  0:00:32  0:00:25  0:00:07 2222k
null_resource.cluster (local-exec):  79 67.8M   79 54.1M    0     0  2153k      0  0:00:32  0:00:25  0:00:07 2220k
null_resource.cluster (local-exec):  84 67.8M   84 57.1M    0     0  2157k      0  0:00:32  0:00:27  0:00:05 2222k
null_resource.cluster (local-exec):  86 67.8M   86 58.6M    0     0  2158k      0  0:00:32  0:00:27  0:00:05 2218k
null_resource.cluster (local-exec):  88 67.8M   88 60.1M    0     0  2160k      0  0:00:32  0:00:28  0:00:04 2224k
null_resource.cluster: Still creating... [30s elapsed]
null_resource.cluster (local-exec):  93 67.8M   93 63.1M    0     0  2164k      0  0:00:32  0:00:29  0:00:03 2232k
null_resource.cluster (local-exec):  95 67.8M   95 64.6M    0     0  2166k      0  0:00:32  0:00:30  0:00:02 2234k
null_resource.cluster (local-exec):  99 67.8M   99 67.6M    0     0  2169k      0  0:00:32  0:00:31  0:00:01 2239k
null_resource.cluster (local-exec): 100 67.8M  100 67.8M    0     0  2169k      0  0:00:32  0:00:32 --:--:-- 2243k
null_resource.cluster: Still creating... [40s elapsed]
null_resource.cluster (local-exec): loading new store paths
null_resource.cluster (local-exec): running command: /terraform/proot -R /home/terraform/.nix-portable/emptyroot -b /home/terraform/.nix-portable/store:/nix/store -b /home/terraform/.nix-portable/store/7fryg0wgx7zs5rfz00mi6kf755diakc5-busybox-1.31.1/bin/:/bin -b /env:/env -b /sbin:/sbin -b /lib:/lib -b /snap:/snap -b /tmp:/tmp -b /var:/var -b /boot:/boot -b /lost+found:/lost+found -b /dev:/dev -b /proc:/proc -b /srv:/srv -b /boot/initrd.img-5.4.0-1037-aws:/initrd.img.old -b /run:/run -b /terraform:/terraform -b /boot/vmlinuz-5.4.0-1037-aws:/vmlinuz.old -b /mnt:/mnt -b /media:/media -b /opt:/opt -b /sys:/sys -b /root:/root -b /lib64:/lib64 -b /bin:/bin -b /home:/home -b /boot/vmlinuz-5.4.0-1037-aws:/vmlinuz -b /usr:/usr -b /boot/initrd.img-5.4.0-1037-aws:/initrd.img -b /etc/host.conf:/etc/host.conf -b /etc/hosts:/etc/hosts -b /etc/networks:/etc/networks -b /etc/passwd:/etc/passwd -b /etc/group:/etc/group -b /etc/nsswitch.conf:/etc/nsswitch.conf -b /run/systemd/resolve/stub-resolv.conf:/etc/resolv.conf -b /usr/share/zoneinfo/UCT:/etc/localtime -b /etc/ssl:/etc/ssl /home/terraform/.nix-portable/store/8yx9ys5a40vg5r8hk14qlhrfgapmic3v-nix-2.4pre20210205_480426a/bin/nix-store --load-db
null_resource.cluster (local-exec): running command: /terraform/proot -R /home/terraform/.nix-portable/emptyroot -b /home/terraform/.nix-portable/store:/nix/store -b /home/terraform/.nix-portable/store/7fryg0wgx7zs5rfz00mi6kf755diakc5-busybox-1.31.1/bin/:/bin -b /env:/env -b /sbin:/sbin -b /lib:/lib -b /snap:/snap -b /tmp:/tmp -b /var:/var -b /boot:/boot -b /lost+found:/lost+found -b /dev:/dev -b /proc:/proc -b /srv:/srv -b /boot/initrd.img-5.4.0-1037-aws:/initrd.img.old -b /run:/run -b /terraform:/terraform -b /boot/vmlinuz-5.4.0-1037-aws:/vmlinuz.old -b /mnt:/mnt -b /media:/media -b /opt:/opt -b /sys:/sys -b /root:/root -b /lib64:/lib64 -b /bin:/bin -b /home:/home -b /boot/vmlinuz-5.4.0-1037-aws:/vmlinuz -b /usr:/usr -b /boot/initrd.img-5.4.0-1037-aws:/initrd.img -b /etc/host.conf:/etc/host.conf -b /etc/hosts:/etc/hosts -b /etc/networks:/etc/networks -b /etc/passwd:/etc/passwd -b /etc/group:/etc/group -b /etc/nsswitch.conf:/etc/nsswitch.conf -b /run/systemd/resolve/stub-resolv.conf:/etc/resolv.conf -b /usr/share/zoneinfo/UCT:/etc/localtime -b /etc/ssl:/etc/ssl /home/terraform/.nix-portable/store/8yx9ys5a40vg5r8hk14qlhrfgapmic3v-nix-2.4pre20210205_480426a/bin/63
null_resource.cluster (local-exec): proot error: '/home/terraform/.nix-portable/store/8yx9ys5a40vg5r8hk14qlhrfgapmic3v-nix-2.4pre20210205_480426a/bin/63' not found (root = /home/terraform/.nix-portable/emptyroot, cwd = /terraform, $PATH=(null))
null_resource.cluster (local-exec): fatal error: see `proot --help`.


Error: Error running command 'bash <(curl -L https://github.com/DavHau/nix-portable/releases/download/v003/nix-portable)': exit status 1. Output:   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed

  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100   620  100   620    0     0   6813      0 --:--:-- --:--:-- --:--:--  6813
figuring out ssl certs
SSL_CERT_FILE not defined. trying to find certs automatically

  1 67.8M    1 1292k    0     0  1587k      0  0:00:43 --:--:--  0:00:43 1587k
  4 67.8M    4 2838k    0     0  1647k      0  0:00:42  0:00:01  0:00:41 1701kfiguring out which runtime to use
bwrap executable: /home/terraform/.nix-portable/bin/bwrap
proot executable: /terraform/proot
runtime selected via NP_RUNTIME : proot
creating bind args for /etc/ssl

  6 67.8M    6 4385k    0     0  1589k      0  0:00:43  0:00:02  0:00:41 1589k
 10 67.8M   10 7479k    0     0  1800k      0  0:00:38  0:00:04  0:00:34 1852k
 12 67.8M   12 9029k    0     0  1858k      0  0:00:37  0:00:04  0:00:33 1912k
 15 67.8M   15 10.3M    0     0  1902k      0  0:00:36  0:00:05  0:00:31 1956k
 19 67.8M   19 13.3M    0     0  1965k      0  0:00:35  0:00:06  0:00:29 2070k
 21 67.8M   21 14.8M    0     0  1990k      0  0:00:34  0:00:07  0:00:27 2217k
 26 67.8M   26 17.9M    0     0  2026k      0  0:00:34  0:00:09  0:00:25 2218k
 28 67.8M   28 19.4M    0     0  2041k      0  0:00:34  0:00:09  0:00:25 2223k
 33 67.8M   33 22.4M    0     0  2064k      0  0:00:33  0:00:11  0:00:22 2226k
 35 67.8M   35 23.9M    0     0  2071k      0  0:00:33  0:00:11  0:00:22 2223k
 37 67.8M   37 25.4M    0     0  2076k      0  0:00:33  0:00:12  0:00:21 2209k
 41 67.8M   41 28.4M    0     0  2091k      0  0:00:33  0:00:13  0:00:20 2211k
 44 67.8M   44 29.9M    0     0  2098k      0  0:00:33  0:00:14  0:00:19 2212k
 48 67.8M   48 32.9M    0     0  2108k      0  0:00:32  0:00:16  0:00:16 2209k
 50 67.8M   50 34.5M    0     0  2114k      0  0:00:32  0:00:16  0:00:16 2216k
 55 67.8M   55 37.5M    0     0  2123k      0  0:00:32  0:00:18  0:00:14 2228k
 57 67.8M   57 39.0M    0     0  2127k      0  0:00:32  0:00:18  0:00:14 2229k
 59 67.8M   59 40.5M    0     0  2130k      0  0:00:32  0:00:19  0:00:13 2226k
 64 67.8M   64 43.5M    0     0  2137k      0  0:00:32  0:00:20  0:00:12 2230k
 66 67.8M   66 45.0M    0     0  2140k      0  0:00:32  0:00:21  0:00:11 2231k
 70 67.8M   70 48.0M    0     0  2146k      0  0:00:32  0:00:22  0:00:10 2231k
 73 67.8M   73 49.5M    0     0  2147k      0  0:00:32  0:00:23  0:00:09 2225k
 77 67.8M   77 52.6M    0     0  2150k      0  0:00:32  0:00:25  0:00:07 2222k
 79 67.8M   79 54.1M    0     0  2153k      0  0:00:32  0:00:25  0:00:07 2220k
 84 67.8M   84 57.1M    0     0  2157k      0  0:00:32  0:00:27  0:00:05 2222k
 86 67.8M   86 58.6M    0     0  2158k      0  0:00:32  0:00:27  0:00:05 2218k
 88 67.8M   88 60.1M    0     0  2160k      0  0:00:32  0:00:28  0:00:04 2224k
 93 67.8M   93 63.1M    0     0  2164k      0  0:00:32  0:00:29  0:00:03 2232k
 95 67.8M   95 64.6M    0     0  2166k      0  0:00:32  0:00:30  0:00:02 2234k
 99 67.8M   99 67.6M    0     0  2169k      0  0:00:32  0:00:31  0:00:01 2239k
100 67.8M  100 67.8M    0     0  2169k      0  0:00:32  0:00:32 --:--:-- 2243k
loading new store paths
running command: /terraform/proot -R /home/terraform/.nix-portable/emptyroot -b /home/terraform/.nix-portable/store:/nix/store -b /home/terraform/.nix-portable/store/7fryg0wgx7zs5rfz00mi6kf755diakc5-busybox-1.31.1/bin/:/bin -b /env:/env -b /sbin:/sbin -b /lib:/lib -b /snap:/snap -b /tmp:/tmp -b /var:/var -b /boot:/boot -b /lost+found:/lost+found -b /dev:/dev -b /proc:/proc -b /srv:/srv -b /boot/initrd.img-5.4.0-1037-aws:/initrd.img.old -b /run:/run -b /terraform:/terraform -b /boot/vmlinuz-5.4.0-1037-aws:/vmlinuz.old -b /mnt:/mnt -b /media:/media -b /opt:/opt -b /sys:/sys -b /root:/root -b /lib64:/lib64 -b /bin:/bin -b /home:/home -b /boot/vmlinuz-5.4.0-1037-aws:/vmlinuz -b /usr:/usr -b /boot/initrd.img-5.4.0-1037-aws:/initrd.img -b /etc/host.conf:/etc/host.conf -b /etc/hosts:/etc/hosts -b /etc/networks:/etc/networks -b /etc/passwd:/etc/passwd -b /etc/group:/etc/group -b /etc/nsswitch.conf:/etc/nsswitch.conf -b /run/systemd/resolve/stub-resolv.conf:/etc/resolv.conf -b /usr/share/zoneinfo/UCT:/etc/localtime -b /etc/ssl:/etc/ssl /home/terraform/.nix-portable/store/8yx9ys5a40vg5r8hk14qlhrfgapmic3v-nix-2.4pre20210205_480426a/bin/nix-store --load-db
running command: /terraform/proot -R /home/terraform/.nix-portable/emptyroot -b /home/terraform/.nix-portable/store:/nix/store -b /home/terraform/.nix-portable/store/7fryg0wgx7zs5rfz00mi6kf755diakc5-busybox-1.31.1/bin/:/bin -b /env:/env -b /sbin:/sbin -b /lib:/lib -b /snap:/snap -b /tmp:/tmp -b /var:/var -b /boot:/boot -b /lost+found:/lost+found -b /dev:/dev -b /proc:/proc -b /srv:/srv -b /boot/initrd.img-5.4.0-1037-aws:/initrd.img.old -b /run:/run -b /terraform:/terraform -b /boot/vmlinuz-5.4.0-1037-aws:/vmlinuz.old -b /mnt:/mnt -b /media:/media -b /opt:/opt -b /sys:/sys -b /root:/root -b /lib64:/lib64 -b /bin:/bin -b /home:/home -b /boot/vmlinuz-5.4.0-1037-aws:/vmlinuz -b /usr:/usr -b /boot/initrd.img-5.4.0-1037-aws:/initrd.img -b /etc/host.conf:/etc/host.conf -b /etc/hosts:/etc/hosts -b /etc/networks:/etc/networks -b /etc/passwd:/etc/passwd -b /etc/group:/etc/group -b /etc/nsswitch.conf:/etc/nsswitch.conf -b /run/systemd/resolve/stub-resolv.conf:/etc/resolv.conf -b /usr/share/zoneinfo/UCT:/etc/localtime -b /etc/ssl:/etc/ssl /home/terraform/.nix-portable/store/8yx9ys5a40vg5r8hk14qlhrfgapmic3v-nix-2.4pre20210205_480426a/bin/63
proot error: '/home/terraform/.nix-portable/store/8yx9ys5a40vg5r8hk14qlhrfgapmic3v-nix-2.4pre20210205_480426a/bin/63' not found (root = /home/terraform/.nix-portable/emptyroot, cwd = /terraform, $PATH=(null))
fatal error: see `proot --help`.
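
A guess, not verified: nix-portable appears to pick the Nix command to run from the name it was invoked as, and with bash <(curl ...) that name is /dev/fd/63, which would explain the attempt to exec bin/63. Saving the script to a file and naming the command explicitly might sidestep that:

# download nix-portable to a regular file so it has a real name
curl -L -o nix-portable https://github.com/DavHau/nix-portable/releases/download/v003/nix-portable
chmod +x nix-portable
# pass the desired Nix command explicitly (proot runtime, as above)
NP_RUNTIME=proot NP_PROOT=/terraform/proot ./nix-portable nix-instantiate --version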

worldofgeese commented Mar 10, 2021

Are you using Terraform Cloud by any chance? If the execution is done on the remote worker it won't have nix-instantiate installed.

Terraform Cloud is what the tutorial referencing this module recommends using; I'll ping @domenkozar as it looks like he is in charge of the nix.dev site.

domenkozar commented:

https://nix.dev recommends the "CLI-driven workflow" option, which means you deploy from the command line but still have logs, etc. in Terraform Cloud.

worldofgeese commented Mar 10, 2021

https://nix.dev recommends the "CLI-driven workflow" option, which means you deploy from the command line but still have logs, etc. in Terraform Cloud.

Following the workflow detailed at https://nix.dev/tutorials/deploying-nixos-using-terraform.html results in:

Error: failed to execute ".terraform/modules/deploy_nixos/deploy_nixos/nixos-instantiate.sh": running (instantiating):  'nix-instantiate' '--show-trace' '--expr' $'\n  { system, configuration, ... }:\n  let\n    os = import <nixpkgs/nixos> { inherit system configuration; };\n    inherit (import <nixpkgs/lib>) concatStringsSep;\n  in {\n    substituters = concatStringsSep " " os.config.nix.binaryCaches;\n    trusted-public-keys = concatStringsSep " " os.config.nix.binaryCachePublicKeys;\n    drv_path = os.system.drvPath;\n    out_path = os.system;\n    inherit (builtins) currentSystem;\n  }' '--argstr' 'configuration' '/terraform/configuration.nix' '--argstr' 'system' 'x86_64-linux' -A out_path
.terraform/modules/deploy_nixos/deploy_nixos/nixos-instantiate.sh: line 44: nix-instantiate: command not found

Both @bbigras and I have encountered this error. Perhaps the article can be updated with the method you've indicated?

domenkozar commented:

@worldofgeese where are you running that command, locally? It requires Nix to be installed (that should be added).

worldofgeese commented Mar 10, 2021

@domenkozar from my local host, which has nix-instantiate on $PATH as well as a full Nix installation.

worldofgeese commented:

From what I can see, the tutorial, which is excellently written, says to use Terraform Cloud as a state/locking backend. As a user, I would have liked to see a mention of changing the workspace's execution from Remote to Local under its General Settings:

[screenshot: the workspace's General Settings page with execution mode switched from Remote to Local]

domenkozar commented:

worldofgeese commented:

@domenkozar looks great to me! Thank you!
