
HTTP Error 412 while using recent Turbo versions #323

Open
trappar opened this issue Mar 18, 2024 · 8 comments
Labels: wontfix (This will not be worked on)

Comments

trappar commented Mar 18, 2024

🐛 Bug Report

I updated Turbo from 1.10.16 to 1.12.5 today and started seeing this in CI:

 WARNING  failed to contact remote cache: Error making HTTP request: HTTP status client error (412 Precondition Failed) for url (http://0.0.0.0:45045/v8/artifacts/e946449d9e73b6d1?slug=ci)

There is some discussion around this in this turbo issue, where people mention that this is likely due to using this remote cache server along with S3 specifically.

To Reproduce

I doubt it will be possible for me to create reproduction instructions or a reproduction repo for this issue, considering that others have failed to reliably reproduce it in the thread linked above.

Expected behavior

To not get these HTTP status errors.

Your Environment

  • Using this package with my GH action. I'm specifically using trappar/turborepo-remote-cache-gh-action@v2, which is a new version I've been working on in order to support the up-to-date version of this package.
  • Turbo version 1.12.5
  • Seeing the failures in CI while using GitHub Actions, where I'm using ubuntu-latest
  • Server is configured to connect to an S3 bucket
@spacedawwwg

We've been having the same issue using Azure. Locked in at Turbo v1.10 for now until we have time to fully investigate.

@matteovivona (Collaborator)

Super weird. We're using a remote-cache server and Turbo 1.12.5 in other projects, and so far we haven't had any problems at all.

(screenshot attached: 2024-03-21 at 10:15:10)

trappar (Author) commented Mar 21, 2024

I've found what is causing this in my particular case.

I have two different workflows. I was only seeing this issue appear in one of them.

Both of them utilize my GH Action to start a cache server like this:

- uses: trappar/turborepo-remote-cache-gh-action@v2
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  with:
    storage-provider: s3
    storage-path: turborepo-cache

However, the one that was failing had the following preceding it:

- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::${{ secrets.ACCOUNT_ID }}:role/[REDACTED]
    aws-region: us-east-1

If I simply switch the order of these so that the remote cache server starts before configuring AWS credentials, then the error disappears.
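
For reference, the working order looks like this (a sketch assembled from the two snippets above; presumably the cache server, running as a background process, inherits whatever credentials are in the environment at the moment its step runs):

# Start the cache server first so it picks up the static keys,
# then assume the role for the rest of the job.
- uses: trappar/turborepo-remote-cache-gh-action@v2
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  with:
    storage-provider: s3
    storage-path: turborepo-cache

- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v4
  with:
    role-to-assume: arn:aws:iam::${{ secrets.ACCOUNT_ID }}:role/[REDACTED]
    aws-region: us-east-1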

So this may or may not be a bug, depending on which credentials should take precedence. I assumed that the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY env variables would take precedence over anything else, but that is clearly incorrect. I'm not actually 100% sure what the output of that AWS action is (does it create an ~/.aws/credentials file or something?), but it looks like it's taking over, and those credentials don't have permission to view the S3 bucket I'm telling the cache server to use.
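
One way to confirm which identity wins at a given point in the job is to ask STS directly, right before the step that starts the cache server (a debugging sketch; the step name is illustrative, and it assumes the AWS CLI that ships on ubuntu-latest runners):

- name: Check effective AWS identity
  # Prints the account and ARN that the default AWS credential chain
  # resolves to here, i.e. the identity a server started at this point
  # in the job would inherit.
  run: aws sts get-caller-identity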

Regardless of whether this app is doing something wrong, it does seem like there's room to improve the handling of this authentication failure case. These 412 Precondition Failed errors are super opaque for an end user.

Maybe someone who knows more about AWS authentication could help here?

matteovivona added the wontfix (This will not be worked on) label and removed the needs triage label on Mar 26, 2024
trappar (Author) commented Mar 27, 2024

I don't think wontfix is necessarily appropriate here for two reasons:

  1. This only appears upon switching to Turbo versions above 1.10.16. Why is this same setup valid on 1.10.16? Seems like there's more to the story that's worth investigating.
  2. The error handling needs to be improved.

@spacedawwwg

Just tried with Turbo v1.13... the 412 still persists :(

(screenshot attached)

@spacedawwwg

Does anybody know of alternatives to ducktors/turborepo-remote-cache that don't have this issue?

@MisterJimson

I'm seeing this as well, 1.10.16 works but newer versions do not.


NullVoxPopuli commented May 3, 2024

The last compatible Turbo version is 1.12.0 for me/us.
