
chore(deps): update terraform cloudposse/s3-bucket/aws to v4 (release/v0) #102

Open · wants to merge 1 commit into base: release/v0
Conversation

@renovate renovate bot commented Mar 9, 2024

Mend Renovate

This PR contains the following updates:

Package: cloudposse/s3-bucket/aws (source)
Type: module
Update: major
Change: 3.0.0 -> 4.2.0

Release Notes

cloudposse/terraform-aws-s3-bucket (cloudposse/s3-bucket/aws)

v4.2.0

Compare Source

Added IP-based statement in bucket policy @​soya-miyoshi (#​216)

what

  • Allows users to specify a list of source IP addresses from which access to the S3 bucket is allowed.
  • Adds a dynamic statement that uses the NotIpAddress condition to deny access from any IP address not listed in the source_ip_allow_list variable.

why

Use cases:

  • Restricting access to specific physical locations, such as an office or home network
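
A minimal usage sketch of the feature described above, assuming the module is consumed directly. The variable name source_ip_allow_list and version 4.2.0 come from the release notes; the bucket name and CIDR ranges are illustrative:

  module "restricted_bucket" {
    source  = "cloudposse/s3-bucket/aws"
    version = "4.2.0"

    name = "example-restricted"

    # Only these source CIDRs may access the bucket; any other source IP is
    # denied via the NotIpAddress statement described above.
    source_ip_allow_list = [
      "203.0.113.0/24",
      "198.51.100.10/32",
    ]
  }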

references

v4.1.0

Compare Source

🚀 Enhancements

fix: use for_each instead of count in aws_s3_bucket_logging @​wadhah101 (#​212)

what

Replaced the count with a for_each inside aws_s3_bucket_logging.default

There is no point in the try() here since the type is clearly defined as a list.

why

When the bucket_name within the logging attribute is defined dynamically, for example when referencing a logging bucket created by Terraform,

  logging = [
    {
      bucket_name = module.logging_bucket.bucket_id
      prefix      = "data/"
    }
  ]

we get an error (shown as a screenshot in the original PR).

Using for_each works better in this case and resolves the error.
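
A hedged sketch of the kind of change described; the resource body, the var.logging input, and the parent bucket reference are illustrative rather than the module's exact internals:

  resource "aws_s3_bucket_logging" "default" {
    # Keying for_each by the list index keeps the instance keys known at plan
    # time, even when the referenced bucket names are only known after apply.
    for_each = { for i, log in var.logging : i => log }

    bucket        = aws_s3_bucket.default.id
    target_bucket = each.value.bucket_name
    target_prefix = each.value.prefix
  }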

references

🤖 Automatic Updates

Update README.md and docs @​cloudpossebot (#​214)

what

This is an auto-generated PR that updates the README.md and docs

why

To have the most recent changes of README.md and docs from the origin templates

Update README.md and docs @​cloudpossebot (#​213)

what

This is an auto-generated PR that updates the README.md and docs

why

To have the most recent changes of README.md and docs from the origin templates

Update README.md and docs @​cloudpossebot (#​209)

what

This is an auto-generated PR that updates the README.md and docs

why

To have the most recent changes of README.md and docs from the origin templates

v4.0.1

Compare Source

🐛 Bug Fixes

Fix bug in setting dynamic `encryption_configuration` value @​LawrenceWarren (#​206)

what

  • When trying to create an S3 bucket, the following error is encountered:
Error: Invalid dynamic for_each value

  on .terraform/main.tf line 225, in resource "aws_s3_bucket_replication_configuration" "default":
 225:           for_each = try(compact(concat(
 226:             [try(rule.value.destination.encryption_configuration.replica_kms_key_id, "")],
 227:             [try(rule.value.destination.replica_kms_key_id, "")]
 228:           ))[0], [])
    ├────────────────
    │ rule.value.destination.encryption_configuration is null
    │ rule.value.destination.replica_kms_key_id is "arn:aws:kms:my-region:my-account-id:my-key-alias"

Cannot use a string value in for_each. An iterable collection is required.
  • This is caused in my case by having s3_replication_rules.destination.encryption_configuration.replica_kms_key_id set.

why

  • There is a bug when trying to create an S3 bucket, which causes an error that stops the bucket from being created

    • Basically, there are two attributes that do the same thing (for backwards compatibility)
      • s3_replication_rules.destination.encryption_configuration.replica_kms_key_id (newer)
      • s3_replication_rules.destination.replica_kms_key_id (older)
    • There is logic to:
      • A) use the newer of these two attributes
      • B) fall back to the older of the attributes if it is set and the newer is not
      • C) fall back to an empty array if nothing is set
    • There is a bug in steps A/B, whereby selecting one or the other leaves us with a string value, not an iterable
    • The simplest solution, which I have tested successfully on existing buckets, is to wrap the output of that logic in a list
  • This error is easily reproducible by running compact(concat([try("string", "")], [try("string", "")]))[0] in the Terraform console, which is a simplified version of the existing logic used above (see the console sketch after the table below)

  • The table below demonstrates the possible values of the existing code - you can see the outputs for value 2, value 3, and value 4 are not lists:

Key      Value 1   Value 2     Value 3     Value 4
newer    null      "string1"   null        "string1"
older    null      null        "string2"   "string2"
output   []        "string1"   "string2"   "string1"
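
As a concrete illustration of the console reproduction mentioned above, using the placeholder string from the example rather than a real key ARN (results shown as comments):

  # Existing logic (simplified): yields a string, which for_each cannot iterate.
  compact(concat([try("string", "")], [try("string", "")]))[0]
  # => "string"

  # The fix described above: wrap the result in a list so for_each gets an iterable.
  [compact(concat([try("string", "")], [try("string", "")]))[0]]
  # => ["string"]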

v4.0.0

Compare Source

Bug fixes and enhancements combined into a single breaking release @​aknysh (#​202)

Breaking Changes

Terraform version 1.3.0 or later is now required.

policy input removed

The deprecated policy input has been removed. Use source_policy_documents instead.

Convert from

policy = data.aws_iam_policy_document.log_delivery.json

to

source_policy_documents = [data.aws_iam_policy_document.log_delivery.json]

Do not use list modifiers like sort, compact, or distinct on the list, or it will trigger an Error: Invalid count argument. The length of the list must be known at plan time.

Logging configuration converted to list

To fix #​182, the logging input has been converted to a list. If you have a logging configuration, simply surround it with brackets.
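
For example, a v3-style logging configuration would be wrapped like this (bucket name and prefix are illustrative):

  # v3.x (sketch)
  logging = {
    bucket_name = "example-log-bucket"
    prefix      = "logs/"
  }

  # v4.x: the same configuration, surrounded by brackets
  logging = [
    {
      bucket_name = "example-log-bucket"
      prefix      = "logs/"
    }
  ]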

Replication rules brought into alignment with Terraform resource

Previously, the s3_replication_rules input had some deviations from the aws_s3_bucket_replication_configuration Terraform resource. Via the use of optional attributes, the input now closely matches the resource while providing backward compatibility, with a few exceptions.

  • Replication source_selection_criteria.sse_kms_encrypted_objects was documented as an object with one member, enabled, of type bool. However, it only worked when set to the string "Enabled". It has been replaced with the resource's status attribute of type String.
  • Previously, Replication Time Control could not be set directly. It was implicitly enabled by enabling Replication Metrics. We preserve that behavior even though we now add a configuration block for replication_time. To enable Metrics without Replication Time Control, you must set replication_time.status = "Disabled".
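
A hedged sketch of a v4-style replication rule reflecting both points above; the nesting is assumed to mirror aws_s3_bucket_replication_configuration, and the destination bucket ARN is illustrative:

  s3_replication_rules = [
    {
      status = "Enabled"

      # Now a string status, matching the resource, rather than { enabled = bool }
      source_selection_criteria = {
        sse_kms_encrypted_objects = {
          status = "Enabled"
        }
      }

      destination = {
        bucket = "arn:aws:s3:::example-replica-bucket"

        metrics = {
          status = "Enabled"
        }

        # Metrics alone implicitly enables Replication Time Control; set this
        # to "Disabled" explicitly to get Metrics without RTC.
        replication_time = {
          status = "Disabled"
        }
      }
    }
  ]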

These are not changes, just continued deviations from the resources:

  • existing_object_replication cannot be set.
  • token to allow replication to be enabled on an Object Lock-enabled bucket cannot be set.

what

  • Remove the local local.source_policy_documents and the deprecated variable policy (because of that, bump the module to a major version)
  • Convert lifecycle_configuration_rules and s3_replication_rules from loosely typed objects to fully typed objects with optional attributes.
  • Use local bucket_id variable
  • Remove comments suppressing Bridgecrew rules
  • Update tests to Golang 1.20

why

  • The number of policy documents needs to be known at plan time. The default value of policy was an empty string, meaning it had to be filtered out based on its content, which would not be known at plan time if the policy input was being generated.
  • Closes #​167, supersedes and closes #​163, and generally makes these inputs easier to deal with, since they now have type checking and partial defaults, meaning the inputs can be much smaller.
  • Incorporates and closes #​197. Thank you @​nikpivkin
  • Suppressing Bridgecrew rules that Cloud Posse does not like should be done via external configuration, so that users of this module retain the option of having those rules enforced.
  • Security and bug fixes

explanation

List manipulation functions should not be used in count, since they can lead to this error:

│ Error: Invalid count argument

│   on ./modules/s3_bucket/main.tf line 462, in resource "aws_s3_bucket_policy" "default":
│  462:   count      = local.enabled && (var.allow_ssl_requests_only || var.allow_encrypted_uploads_only || length(var.s3_replication_source_roles) > 0 || length(var.privileged_principal_arns) > 0 || length(local.source_policy_documents) > 0) ? 1 : 0

│ The "count" value depends on resource attributes that cannot be determined until apply, so Terraform cannot predict how many instances will be created. To work around this, use the -target argument to
│ first apply only the resources that the count depends on.

Using the local like this

source_policy_documents = var.policy != "" && var.policy != null ? concat([var.policy], var.source_policy_documents) : var.source_policy_documents

would not work either if var.policy depends on apply-time resources from other TF modules.

General rules:

  • When using for_each, the map keys have to be known at plan time (the map values are not required to be known at plan time)

  • When using count, the length of the list must be known at plan time, but the items inside the list need not be. That does not mean the list must be static with its length known in advance; the list can be dynamic and come from remote state or data sources, which Terraform evaluates first during plan. It just can't come from other resources (which are only known after apply)

  • When using count, no list-manipulating functions can be used in the count expression, as in some cases this leads to the "The "count" value depends on resource attributes that cannot be determined until apply" error (see the sketch below)
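
A short sketch illustrating these rules, using hypothetical resource and variable names rather than the module's internals:

  variable "source_policy_documents" {
    type    = list(string)
    default = []
  }

  resource "aws_s3_bucket" "example" {
    bucket = "example-count-rules"
  }

  resource "aws_s3_bucket_policy" "example" {
    # OK: the length of the input list itself is known at plan time, even if
    # the document contents only become known after apply.
    count = length(var.source_policy_documents) > 0 ? 1 : 0

    bucket = aws_s3_bucket.example.id
    policy = var.source_policy_documents[0]

    # Not OK (risks "Invalid count argument"): deriving count from
    # compact()/concat() over values that are unknown until apply, e.g.
    #   count = length(compact(var.source_policy_documents)) > 0 ? 1 : 0
  }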

v3.1.3

Compare Source

Unfortunately, this change makes count unknown at plan time in certain situations. In general, you cannot use the output of compact() in count.

The solution is to stop using the deprecated policy input, and either revert to 3.1.2 or upgrade to 4.0.

🚀 Enhancements

Fix `source_policy_documents` combined with `var.policy` being ignored @​johncblandii (#​201)

what

  • Changed var.source_policy_documents to local.source_policy_documents so var.policy usage was still supported

why

  • The ternary check used var.source_policy_documents, so var.policy being combined with var.source_policy_documents into local.source_policy_documents did not make the ternary evaluate to true

references

v3.1.2: Fix Public Bucket Creation

Compare Source

What's Changed

New Contributors

Full Changelog: cloudposse/terraform-aws-s3-bucket@3.1.1...3.1.2

v3.1.1

Compare Source

🐛 Bug Fixes

Revert change to Transfer Acceleration from #​178 @​Nuru (#​180)

what

  • Revert change to Transfer Acceleration from #​178

why

  • Transfer Acceleration is not available in every region, and the change in #​178 (meant to detect and correct drift) does not work (throws API errors) in regions where Transfer Acceleration is not supported

v3.1.0: Support new AWS S3 defaults (ACL prohibited)

Compare Source

Note: this version introduced drift detection and correction for Transfer Acceleration. Unfortunately, that change prevents deployment of buckets in regions that do not support Transfer Acceleration. Version 3.1.1 reverts that change so that S3 buckets can be deployed by this module in all regions. It does, however, mean that when var.transfer_acceleration_enabled is false, Terraform does not track or revert changes to Transfer Acceleration made outside of this module.

Make compatible with new S3 defaults. Add user permissions boundary. @​Nuru (#​178)

what

  • Make compatible with new S3 defaults by setting S3 Object Ownership before setting ACL and disabling ACL if Ownership is "BucketOwnerEnforced"
  • Add optional permissions boundary input for IAM user created by this module
  • Create aws_s3_bucket_accelerate_configuration and aws_s3_bucket_versioning resources even when the feature is disabled, to enable drift detection

why

  • S3 buckets with ACLs were failing to be provisioned because the ACL was set before the bucket ownership was changed (see the sketch below)
  • Requested feature
  • See #​171
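
A hedged sketch of the ordering fix described above; the resources and names are illustrative, not the module's exact implementation:

  resource "aws_s3_bucket" "example" {
    bucket = "example-ownership-ordering"
  }

  resource "aws_s3_bucket_ownership_controls" "example" {
    bucket = aws_s3_bucket.example.id

    rule {
      object_ownership = "BucketOwnerPreferred"
    }
  }

  # The ACL is applied only after ownership is configured, and would be skipped
  # entirely if object_ownership were "BucketOwnerEnforced" (ACLs disabled).
  resource "aws_s3_bucket_acl" "example" {
    depends_on = [aws_s3_bucket_ownership_controls.example]

    bucket = aws_s3_bucket.example.id
    acl    = "private"
  }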

references

Always include `aws_s3_bucket_versioning` resource @​mviamari (#​172)

what

  • Always create an aws_s3_bucket_versioning resource to track changes made to bucket versioning configuration

why

  • When there is no aws_s3_bucket_versioning resource, the expectation is that bucket versioning is disabled or suspended for the bucket. If bucket versioning is turned on outside of Terraform (e.g. through the console), the change is not detected by Terraform unless the aws_s3_bucket_versioning resource exists.
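
A minimal sketch of the pattern described, assuming a versioning_enabled toggle; names are illustrative rather than the module's exact code:

  variable "versioning_enabled" {
    type    = bool
    default = false
  }

  resource "aws_s3_bucket" "example" {
    bucket = "example-versioning-drift"
  }

  # Created even when versioning is off, so Terraform can detect and revert
  # out-of-band changes to the bucket's versioning state.
  resource "aws_s3_bucket_versioning" "example" {
    bucket = aws_s3_bucket.example.id

    versioning_configuration {
      status = var.versioning_enabled ? "Enabled" : "Suspended"
    }
  }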

references

  • Closes #​171
Add support for permission boundaries on replication IAM role @​mchristopher (#​170)

what

why

  • Our AWS environment enforces permission boundaries on all IAM roles to follow AWS best practices with security.

references

🤖 Automatic Updates

Update README.md and docs @​cloudpossebot (#​164)

what

This is an auto-generated PR that updates the README.md and docs

why

To have the most recent changes of README.md and docs from the origin templates


Configuration

📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Mend Renovate. View repository job log here.

@renovate renovate bot requested a review from a team as a code owner March 9, 2024 04:15
@renovate renovate bot added the auto-update This PR was automatically generated label Mar 9, 2024
@renovate renovate bot requested review from a team as code owners March 9, 2024 04:15
@renovate renovate bot requested review from jamengual and joe-niland and removed request for a team March 9, 2024 04:15

mergify bot commented Mar 9, 2024

/terratest

@renovate renovate bot changed the title chore(deps): update terraform cloudposse/s3-bucket/aws to v4 (release/v0) Update Terraform cloudposse/s3-bucket/aws to v4 (release/v0) Apr 14, 2024
@renovate renovate bot changed the title Update Terraform cloudposse/s3-bucket/aws to v4 (release/v0) chore(deps): update terraform cloudposse/s3-bucket/aws to v4 (release/v0) May 3, 2024
@renovate renovate bot changed the title chore(deps): update terraform cloudposse/s3-bucket/aws to v4 (release/v0) Update Terraform cloudposse/s3-bucket/aws to v4 (release/v0) May 9, 2024
@renovate renovate bot changed the title Update Terraform cloudposse/s3-bucket/aws to v4 (release/v0) chore(deps): update terraform cloudposse/s3-bucket/aws to v4 (release/v0) May 20, 2024