
grpc: received message larger than max #28

Closed
ghost opened this issue Jun 13, 2019 · 27 comments
@ghost commented Jun 13, 2019

This issue was originally opened by @tebriel as hashicorp/terraform#21709. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.12.2
+ provider.archive v1.2.1
+ provider.aws v2.14.0
+ provider.local v1.2.2
+ provider.template v2.1.1

Terraform Configuration Files

// Nothing exceptionally important at this time

Debug Output

https://gist.github.com/tebriel/08f699ce69555a2670884343f9609feb

Crash Output

No crash

Expected Behavior

It should've completed the plan

Actual Behavior

Error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (9761610 vs. 4194304)

Steps to Reproduce

terraform plan on my medium-sized project.

Additional Context

Running within make, but the behavior is the same outside of make. This applies fine in 0.11.14.

References

@apparentlymart (Member)

After some investigation and discussion in hashicorp/terraform#21709, I moved this here to represent a change to add a file size limit to this provider (smaller than the 4MB limit imposed by Terraform Core so that users will never hit that generic error even when counting protocol overhead) and to document that limit for both the local_file data source and the local_file resource type.

@jukie commented Oct 3, 2019

Is this still open? I'd like to pick this up if so.
Could you clarify/confirm the request?

  1. Add a file size limit of 4MB in the local provider through a validator
  2. Update docs to reflect the size limit

@itessential

Hello

Do you plan to fix this problem? If so, when?

@mikea commented Dec 20, 2019

Is this still open? I'd like to pick this up if so.
Could you clarify/confirm the request?

1. Add a file size limit of 4MB in the local provider through a validator

2. Update docs to reflect the size limit

I think the best fix would be to support files >4MB.

@itessential

Yes, this problem still persists.

@Prototyped

Yes, I ran into this issue today on the local_file data source pointing at a prospective AWS Lambda archive file.

@fsantos2019 commented Feb 24, 2020

Hello, is there any progress on this issue, or was it parked? This can become a bigger issue if we use a template file for Kubernetes and must store the file to disk, since Kubernetes YAML files can become pretty big.
My workaround is to split the file in two. The initial file size was 2MB; now I have two files of a bit less than 1MB each, and it does work.
Thanks

@chexov commented Apr 16, 2020

Ran into this by using aws_lambda_function resource...


data "local_file" "lambda" {
  filename = "${path.module}/out.zip"
}

resource "aws_s3_bucket_object" "lambda" {
  bucket = var.lambda_bucket
  key    = "${local.name}.zip"
  source = data.local_file.lambda.filename
  etag = filemd5(data.local_file.lambda.filename)
}

resource "aws_lambda_function" "login_api" {
  function_name    = local.name
  role             = aws_iam_role.lambda_role.arn
  handler          = "lambda.handler"
  s3_bucket        = aws_s3_bucket_object.lambda.bucket
  s3_key           = aws_s3_bucket_object.lambda.key
  source_code_hash = filebase64sha256(data.local_file.lambda.filename)
}
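
For anyone with a similar setup: since only the archive's path and hashes are needed here, one option is to drop the data "local_file" block and reference the path directly, so the file contents never have to pass through the provider as an attribute. A minimal sketch, assuming the same out.zip, var.lambda_bucket, local.name and aws_iam_role.lambda_role as above:

locals {
  # hypothetical local pointing at the same archive as above
  lambda_zip = "${path.module}/out.zip"
}

resource "aws_s3_bucket_object" "lambda" {
  bucket = var.lambda_bucket
  key    = "${local.name}.zip"
  source = local.lambda_zip
  etag   = filemd5(local.lambda_zip)
}

resource "aws_lambda_function" "login_api" {
  function_name    = local.name
  role             = aws_iam_role.lambda_role.arn
  handler          = "lambda.handler"
  runtime          = "nodejs12.x" # hypothetical runtime; adjust to match the function
  s3_bucket        = aws_s3_bucket_object.lambda.bucket
  s3_key           = aws_s3_bucket_object.lambda.key
  source_code_hash = filebase64sha256(local.lambda_zip)
}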

@jukie commented May 2, 2020

Is there any agreement on how we can move forward?
Files over 4MB only worked previously due to a lack of safety checks (see hashicorp/terraform#21709 (comment)), so the error is valid, and it doesn't sound like changing the limit in Terraform Core will be an option either (re: "not a bug, it's a feature").

We could possibly handle it locally by splitting files into 4MB chunks within the provider, but I'm not sure if that would create its own issues. I can pursue that, but before I waste time, would that even be acceptable @apparentlymart?

@AdamWorley

Using Terraform 0.12.23 and AWS provider 2.61.0, I'm getting the same error: Error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (18182422 vs. 4194304)

It looks as though the core package has been updated to allow 64MB - hashicorp/terraform#20906

And according to the Lambda limits docs, 50MB files can be uploaded.

Would it not be best to set the safety check to 50MB?

@maxcbc commented Jun 29, 2020

Just as an FYI for anyone having this issue.

If you put your zip file in an S3 bucket, you shouldn't face this problem. But remember to use aws_s3_bucket_object.lambda_zip.content_base64 rather than the filebase64(path) function; then you won't have this issue (or at least that was the fix for me).

@cmaurer commented Jul 15, 2020

Another option is using an external data source.

For example, given a filename in the variable deployment_package, generate the base64 hash with the following:

data "external" "deployment_package" {
  program = ["/bin/bash", "-c", <<EOS
#!/bin/bash
set -e
SHA=$(openssl dgst -sha256 ${var.deployment_package} | cut -d' ' -f2 | base64)
jq -n --arg sha "$SHA" '{"filebase64sha256": $sha }'
EOS
  ]
}

and use it as such:

source_code_hash = data.external.deployment_package.result.filebase64sha256

which should give you

+ source_code_hash = "ZjRkOTM4MzBlMDk4ODVkNWZmMDIyMTAwMmNkMDhmMTJhYTUxMDUzZmIzOThkMmE4ODQyOTc2MjcwNThmZmE3Nwo="

@realn0whereman

+1 this issue; it's causing us much pain, as we intentionally want to inline larger files into the Terraform configuration.

I see that hashicorp/terraform#20906 was merged over a year ago, but the symptom described above still persists.

Can the limit for gRPC transfer be increased throughout the project so that downstream services which can accept such payloads work properly without workarounds?

@anilkumarnagaraj

Still happening with Terraform 0.12.24. Any workaround to fix the gRPC limit error?

@finferflu commented Nov 11, 2020

This is still happening with Terraform 0.13.5, when using body with an API Gateway (v2), using version 3.14.1 of the AWS provider.

To add more clarity, I'm using the file function in my case:

body = file(var.body)

The file in question is 1.5MB in size.

If I remove the body declaration, Terraform runs successfully.

Update

I have used jq to compress and reduce the size of the body to ~500KB, and there was no error. It looks like the threshold might be lower than 4MB, 1MB, perhaps?
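
If the body is JSON, a possible way to get a similar size reduction without an external jq step (just a sketch, not verified against this exact setup) is to round-trip the document through jsondecode/jsonencode, since jsonencode emits minified JSON:

body = jsonencode(jsondecode(file(var.body)))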

@atamgp commented Nov 13, 2020

I still have this issue with
Terraform v0.12.29
provider.archive v2.0.0
provider.aws v3.15.0
provider.template v2.2.0

Need filebase64 to support files > 4MB, because using it in combination with archive_file is the only way to make it idempotent.
Using a local_file in between breaks that...


data "archive_file" "this" {
  type        = "zip"
  output_path = "${path.module}/test.zip"

  source {
    filename = "test.crt"
    content  = file("${path.module}/archive/test.crt")
  }

  source {
    filename = "binary-file"
    content  = filebase64("${path.module}/archive/binary-file")
  }

  source {
    filename = "config.yml"
    content  = data.template_file.this.rendered
  }
}
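
One way to keep the large binary out of the provider protocol entirely (a sketch, assuming test.crt and binary-file are already staged in a ${path.module}/staging directory outside Terraform) is to let archive_file read files from disk via source_dir instead of passing their contents as attributes; only the small rendered template is written through local_file:

resource "local_file" "config" {
  content  = data.template_file.this.rendered
  filename = "${path.module}/staging/config.yml"
}

data "archive_file" "this" {
  type        = "zip"
  output_path = "${path.module}/test.zip"
  source_dir  = "${path.module}/staging" # certificate and binary copied here outside Terraform

  depends_on = [local_file.config]
}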

@reitermarkus

I also have this issue trying to deploy a Rust function to IBM Cloud. Similarly to @atamgp, I have a data "archive_file" which fails with

grpc: received message larger than max (11484267 vs. 4194304)

But even if this succeeded (or the .zip file is created manually), the resource "ibm_function_action" would still fail with

grpc: received message larger than max (7074738 vs. 4194304)

Terraform v0.14.3
+ provider registry.terraform.io/hashicorp/archive v2.0.0
+ provider registry.terraform.io/hashicorp/local v2.0.0
+ provider registry.terraform.io/ibm-cloud/ibm v1.12.0

@mo4islona commented Feb 25, 2021

Faced the same issue with a Kubernetes config map:

resource "kubernetes_config_map" "nginx" {
  metadata {
    name      = "geoip"
    namespace = "ingress"
  }
  
  binary_data = {
    "GeoLite2-Country.mmdb" = filebase64("${path.module}/config/GeoLite2-Country.mmdb")
  }
}

Acquiring state lock. This may take a few moments...

Error: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (5248767 vs. 4194304)

Terraform v0.14.4
+ provider registry.terraform.io/hashicorp/kubernetes v1.13.3

@jankozuchowski

I've encountered the same issue - it looks like there's a limitation on how many characters are in the resource code.

Using a file uploaded to a bucket (without compressing it) fixed my issue - I'm assuming that what helped is that .body from S3 is usually a stream, as opposed to .rendered (which I was using before), which generates more characters in the resource source.

@brettcave

This is still happening with Terraform 0.13.5, when using body with an API Gateway (v2), using version 3.14.1 of the AWS provider.

To add more clarity, I'm using the file function in my case:

body = file(var.body)

The file in question is 1.5MB in size.

If I remove the body declaration, Terraform runs successfully.

Update

I have used jq to compress and reduce the size of the body to ~500KB, and there was no error. It looks like the threshold might be lower than 4MB, 1MB, perhaps?

@finferflu - I have found the same thing; we were running into this with a 1.5MB OpenAPI JSON file. I was under the impression that it was not the actual file handle on the JSON that was causing this, but that the "body" of the REST API now contains it, which is then included in the state - and there are probably a lot of escape characters and other items in the state - so the state file exceeds 4MB. To avoid a local file for the swagger, we uploaded it to S3 and used an S3 data object in TF, and the same problem occurred - so a strong indicator supporting this.

@kabads commented Jun 28, 2021

Still getting this issue with v0.15.4 and Terraform Cloud. We imported some infrastructure while using Terraform Cloud and then tried a plan, but cannot get the state file out:


│ Error: Plugin error

│ with okta_group.user_type_non_service_accounts,
│ on groups.tf line 174, in resource "okta_group" "user_type_non_service_accounts":
│ 174: resource "okta_group" "user_type_non_service_accounts" {

│ The plugin returned an unexpected error from plugin.(*GRPCProvider).UpgradeResourceState: rpc error: code = ResourceExhausted desc = grpc: received message larger than max (6280527 vs. 4194304)

@VikramVasudevan

My file is around 2.4 MB and I am facing this issue even today.

resource "local_file" "parse-template" {
  content =  templatefile(local.template-name, {
    var1 = value1
    var2 = value2
  }) 
  filename = "${local.script-name}"
}

Any workarounds for this, please?

@filipvh-sentia

We ran into this error when using Swagger JSON files and API Gateway.
We temporarily fixed this issue by compressing the Swagger JSON file, which was sufficient; the Swagger file went from 1.4MB to 950KB.

It's not a real workaround, but maybe it helps somebody who is also close to the limit.
Strangely, the error kept persisting even though we didn't use any local.template_file or local.file data/resource (we used the templatefile function instead).

@atamgp commented Nov 2, 2021

Can this get more attention please?

@dduleep commented Jun 13, 2022

Could we get a target timeline for these fixes, or hear about any challenges with the present architecture?

@bflad (Member) commented Jun 14, 2022

Hi folks 👋 This issue, while not mentioned in the CHANGELOG, may have been addressed with some underlying dependency updates that would have been included in the (latest) v2.2.3 release of this provider. In particular, this limit should be closer to 256MB. Does upgrading to this version of the provider help prevent this error?
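
For anyone wanting to adopt this explicitly, a version constraint along these lines (just a minimal sketch) should pull in a release with the higher limit:

terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.3"
    }
  }
}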

@bflad (Member) commented Apr 14, 2023

Closing due to lack of response -- if this issue still exists after v2.2.3, please open a new issue and we'll investigate further.
