mc doesn't work with GCS buckets containing underscores in the name #1664
The validation mc performs on bucket names restricts them to a DNS-compatible subset of characters. However, Google Cloud Storage allows other characters, such as underscores. This leads to a situation where mc can't perform operations on some GCS buckets.

Is there a way to relax that restriction on a per-alias basis? My use case is transferring data between GCS and S3, and I currently can't do that for all the buckets I'm working with. See below for the error I'm getting.

Comments
Currently, yes: bucket names are restricted based on the S3 bucket naming requirements, i.e., following the bucket restrictions documented at http://docs.aws.amazon.com/AmazonS3/latest/dev/BucketRestrictions.html. We will see if there is a way to relax this just for GCS.
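For reference, here is a minimal Go sketch of the kind of S3-style check being described. This is illustrative only, not mc's actual validation code: the linked AWS rules require 3 to 63 characters drawn from lowercase letters, digits, dots, and hyphens, which is exactly what rejects a GCS name like `my_bucket`.

```go
package main

import (
	"fmt"
	"regexp"
)

// validS3BucketName approximates the AWS rules linked above: 3-63
// characters of lowercase letters, digits, dots, and hyphens, beginning
// and ending with a letter or digit. Underscores are not permitted,
// which is what rejects GCS names like "my_bucket".
var validS3BucketName = regexp.MustCompile(`^[a-z0-9][a-z0-9.\-]{1,61}[a-z0-9]$`)

func main() {
	fmt.Println(validS3BucketName.MatchString("my-bucket")) // true
	fmt.Println(validS3BucketName.MatchString("my_bucket")) // false: underscore rejected
}
```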
Can we have bucket naming/validation conventions specific to each storage provider? That could be very handy once we support multiple providers, each with their own restrictions: one provider's policy changes would not break the others.
The problem with Google Cloud Storage is that it is not fully S3 compatible. 'mc' is a tool written only for S3-compatible object storage, such as S3, Minio, Ceph, or Swift, and for filesystems. That is why adding verbatim solutions per provider doesn't make sense: it is not a generic data-transfer tool for all types of providers. What can be done is relaxing the bucket restrictions specifically for GCS, which requires code changes in minio-go.
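A hedged sketch of what such a per-provider relaxation might look like in Go, building on the S3 check above. The `provider` parameter is hypothetical, used here for illustration only; neither mc nor minio-go exposes such an API, and real GCS naming rules differ in further ways (e.g., longer limits for dotted names).

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// S3 rule from the earlier sketch: no underscores allowed.
	s3Name = regexp.MustCompile(`^[a-z0-9][a-z0-9.\-]{1,61}[a-z0-9]$`)
	// Hypothetical GCS relaxation: same shape, but underscores permitted.
	gcsName = regexp.MustCompile(`^[a-z0-9][a-z0-9._\-]{1,61}[a-z0-9]$`)
)

// isValidBucketName picks a rule by provider. The "gcs" tag is a
// hypothetical illustration of per-provider validation, not a real API.
func isValidBucketName(name, provider string) bool {
	if provider == "gcs" {
		return gcsName.MatchString(name)
	}
	return s3Name.MatchString(name)
}

func main() {
	fmt.Println(isValidBucketName("my_bucket", "gcs")) // true: underscore allowed for GCS
	fmt.Println(isValidBucketName("my_bucket", "s3"))  // false: S3 rejects underscores
}
```

Whether such a branch belongs in mc itself or in minio-go is exactly the trade-off discussed in this thread.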
To make Minio work, I renamed my bucket to a name without underscores.
We should only encourage users to follow best practices. If we relax it, data migration between GCS, AWS, and Minio becomes harder, and users will always be dependent on mc.
This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.