
S3 API Doesn't Respect Linode Private Bucket Endpoint After the Upload #1782

Open
mustafa519 opened this issue Mar 13, 2024 · 6 comments

@mustafa519

mustafa519 commented Mar 13, 2024

Version:

  • listmonk: listmonk/listmonk:v3.0.0-amd64 docker image
  • OS: Ubuntu 22

Description of the bug and steps to reproduce:
Hello, I am trying to set up listmonk.

I set the Linode S3 credentials as in the screenshot below. The Linode Object Storage endpoint works well for uploading, but previewing is broken: it tries to fetch the media from the AWS S3 endpoint rather than the Linode endpoint. This happens when adding an attachment to a campaign.

E.g:
https://mybucket.s3.amazonaws.com/folder/thumb_388291.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=RN7IENFKQAG4IU64E7TW%2F20240313%2Feu-central-1%2Fs3%2Faws4_request&X-Amz-Date=20240313T210501Z&X-Amz-Expires=601200&X-Amz-SignedHeaders=host&X-Amz-Signature=dca06ed307f4beacf5da282ab3234d98c746165756fc1084435417bbbcb4e9f7

I am not familiar with Go, so I couldn't find the broken part of the code.

I hope this information helps!

Edit: Just figured it out; issue #669 mentions the same bug.

Screenshots:
[screenshot of the Linode S3 media settings]

@mustafa519 mustafa519 added the bug Something isn't working label Mar 13, 2024
@knadh
Owner

knadh commented Mar 25, 2024

Hi @mustafa519. #669 was closed as the issue couldn't be replicated.

  1. Perhaps the bucket type should be set to public?
  2. Have you tried setting the custom public URL? (See the sketch below.)
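To illustrate the second suggestion: when a custom public URL is configured, the media URL can be built by simply joining that base with the object path, so no pre-signing (and therefore no AWS hostname) is involved. A rough sketch with a hypothetical helper and placeholder values, not the actual listmonk code:

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// publicFileURL is a hypothetical helper showing the idea: with a custom
// public URL set, the media link is just base URL + object path, and the
// S3 API (and its default AWS hostname) never enters the picture.
func publicFileURL(publicBase, bucketPath, name string) string {
	return strings.TrimRight(publicBase, "/") + "/" + path.Join(bucketPath, name)
}

func main() {
	// Placeholder host, path, and file name.
	fmt.Println(publicFileURL("https://mybucket.eu-central-1.linodeobjects.com", "folder", "thumb_388291.jpg"))
	// https://mybucket.eu-central-1.linodeobjects.com/folder/thumb_388291.jpg
}
```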

@knadh knadh added needs-investigation Potential bug. Needs investigation and removed bug Something isn't working labels Mar 25, 2024
@mustafa519
Author

Hello @knadh

For now I have ended up using the public bucket type. A public bucket, or providing a custom public URL, works well. However, when I set a private bucket, it uses the bucketName.s3.amazonaws.com endpoint and doesn't respect my custom endpoint.

I then tried an AWS bucket instead of Linode, and it still didn't work.

In Summary:

  • Uploading works in every case.
  • The public bucket type works normally.
  • When I set an endpoint other than AWS (Linode), it tries to fetch from AWS.
  • Private buckets don't work with either AWS or other object storage services.

That's all the detail I can provide.

Thank you for the investigation.

@knadh
Owner

knadh commented Mar 26, 2024

That's very helpful. Sounds like a URL generation issue when type=private. Will investigate this.

@knadh knadh self-assigned this Mar 26, 2024
@osmantuna

@knadh
Owner

knadh commented Apr 13, 2024

hm, when the bucket type is empty AND there is no public URL set (the "Custom Public URL" field in settings), then a pre-signed URL is generated.

// Generate a private S3 pre-signed URL if it's a private bucket, and there

Assuming that the S3-compatible private endpoints also support generating signed URLs, the https://github.com/rhnvrm/simples3 lib should ideally support custom URLs.

@rhnvrm could you confirm whether setting the Endpoint field to the root of the custom backend will work as intended?
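For reference, the question boils down to something like the sketch below: uploads already work against the custom endpoint, so the check is whether the pre-signed URL comes back with the Linode host or with s3.amazonaws.com. This is only a sketch; the region, credentials, bucket, and object key are placeholders, and the simples3 field names are as I understand the current API:

```go
package main

import (
	"fmt"
	"time"

	"github.com/rhnvrm/simples3"
)

func main() {
	// Placeholder credentials, region, and custom endpoint (Linode Object Storage).
	s3 := simples3.New("eu-central-1", "ACCESS_KEY", "SECRET_KEY")
	s3.SetEndpoint("https://eu-central-1.linodeobjects.com")

	// Generate a pre-signed GET URL for a private object, the same path
	// listmonk takes for private buckets with no public URL configured.
	url := s3.GeneratePresignedURL(simples3.PresignedInput{
		Bucket:        "mybucket",
		ObjectKey:     "folder/thumb_388291.jpg",
		Method:        "GET",
		Timestamp:     time.Now(),
		ExpirySeconds: 86400,
	})

	// If the custom endpoint is honoured, the host here should be the
	// Linode one; the bug report shows s3.amazonaws.com instead.
	fmt.Println(url)
}
```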

@rhnvrm
Collaborator

rhnvrm commented Apr 15, 2024
