Add equivalent to aws cloudformation package #169

Open
jpb opened this issue Feb 5, 2019 · 5 comments

@jpb (Contributor) commented Feb 5, 2019

Add the equivalent of running aws cloudformation package (perhaps iidy create-stack --package <S3 location> and/or an ArtifactLocation: s3://bucket/base/path/ stack-args property) to upload local artifacts to S3 and replace the corresponding properties in the template with their S3 locations. From the aws cloudformation package docs:

This command can upload local artifacts referenced in the following places:
...
Code property for the AWS::Lambda::Function resource
CodeUri property for the AWS::Serverless::Function resource
TemplateURL property for the AWS::CloudFormation::Stack resource
...
The command returns a template and replaces the local path with the S3 location: CodeUri: s3://mybucket/lambdafunction.zip.

https://docs.aws.amazon.com/cli/latest/reference/cloudformation/package.html
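
For reference, the CLI flow this would replace looks like (bucket and file names are placeholders):

aws cloudformation package \
  --template-file template.yaml \
  --s3-bucket mybucket \
  --output-template-file packaged.yaml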

This pattern could be extended to support workflows like #161.
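
A rough sketch of the stack-args variant (ArtifactLocation is hypothetical; none of this exists yet):

StackName: my-stack
Template: ./cfn-template.yaml
# hypothetical: base S3 path that packaged artifacts would be uploaded under
ArtifactLocation: s3://bucket/base/path/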

@jpb (Contributor, Author) commented Feb 8, 2019

It turns out that aws cloudformation package relies on having the local/S3 location hardcoded into the template. I would much rather have this work with the value as a parameter, to avoid changing the template.

I've been playing around with the idea of introducing $uploads (to complement $imports and $defs) – something along the lines of:

$uploads:
  zip: my-file.zip

Parameters:
  Code: !$ zip

where my-file.zip would be uploaded to S3 and zip would hold the URL of the object.

I'm struggling with how the local and remote locations would be specified:

$uploads:
  # option 1
  zip:
    local: my-zip.zip
    remote: s3://bucket/path

  # option 2
  zip:
    file: my-zip.zip
    s3: s3://bucket/path

  # option 3
  zip:
    - my-zip.zip
    - s3://bucket/path

  # option 4
  zip: [my-zip.zip, s3://bucket/path]

The S3 path would need to change when the underlying file changes for CloudFormation to handle updates properly. Should $uploads:

  • rely on the user to change the path?
  • content hash the file and use that in the path?
  • reuse the same path and rely on object versioning? (see the fragment below)
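
For context on the object-versioning option, CloudFormation can already pin an object version explicitly: AWS::Lambda::Function's Code property accepts S3ObjectVersion alongside S3Bucket and S3Key. A minimal fragment, with placeholder names and a made-up version id:

Code:
  S3Bucket: bucket
  S3Key: path/my-zip.zip
  S3ObjectVersion: "abc123"  # placeholder: version id of the uploaded object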

What are your thoughts @tavisrudd @tuff?

@tuff (Contributor) commented Feb 8, 2019

Re: the API, I like a combo of your suggestions:

$uploads:
  zip:
    local: my-zip.zip
    s3: s3://bucket/path

Should $uploads rely on the user to change the path?

Meaning that if you're working with a lambda and you make a code change and upload a new bundle, you also have to update your stack args? That doesn't seem right 😕

I like the object versioning option, unless implementing that is ugly for reasons I can't see now.

@jpb (Contributor, Author) commented Feb 8, 2019

Meaning that if you're working with a lambda and you make a code change and upload a new bundle, you also have to update your stack args? That doesn't seem right 😕

I imagine it would work something like:

$imports:
  version: env:VERSION

$uploads:
  zip:
    local: my-zip.zip
    s3: "s3://bucket/{{ version }}/my-zip.zip"

I like the object versioning option, unless implementing that is ugly for reasons I can't see now.

I haven't thought this one through very much, but I think it would cause CloudFormation to update the resource even when the file hasn't changed: iidy would upload the file to S3, which would create a new object version, which would change the URL (even if the file contents are identical).

The content-hashing idea probably has issues for Lambda deployments: I doubt that npm build (or its equivalent) would produce the exact same artifact for the same inputs, and you wouldn't want it to create a new Lambda version in that case.

I think I'm leaning towards the version example above because it is more obvious what is going on and puts the control in the hands of the developer.

(I'm also thinking the full object path should exist in the s3 property, including the "filename")

@tavisrudd (Collaborator) commented

I think the explicit proposal with $imports: version ... is compatible with content hashing: if you want to use a content hash, you can always import it via filehash: and use that as the version. This would also support regional S3 buckets (CloudFormation requires the bucket to be in the same region as the stack).
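
A sketch of how that could look, assuming a filehash: import that resolves to a hash of the file's contents (names are placeholders):

$imports:
  codeHash: filehash:my-file.zip

$uploads:
  zip:
    local: my-file.zip
    s3: "s3://bucket/{{ codeHash }}/my-file.zip"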

@jpb (Contributor, Author) commented Feb 14, 2019

As an alternative, the workaround from #161 could be used instead of $uploads for a Lambda function's code with something like:

$imports:
  version: env:VERSION

$defs:
  s3Location: s3://some-bucket/{{ version }}/app.zip

Parameters:
  S3Location: !$ s3Location

CommandsBefore:
  - 'aws s3 cp app.zip {{ s3Location }}'
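
With that, the upload and the stack operation would be driven by a single environment variable, something like:

VERSION=1.0.0 iidy create-stack stack-args.yaml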
