
Read-Only Access Control to a Bucket #1155

Open

schwienbier opened this issue Jul 6, 2023 · 15 comments

@schwienbier

Environment info

NooBaa Operator Version: 5.11.0
Platform: Kubernetes v1.25.8

When I create an ObjectBucketClaim, the generated access key and secret key have read-write access to the target bucket. Is there a way to create an access key/secret key pair that can only download and list objects, while uploading and deleting are not allowed? Thank you.

@nimrod-becker
Contributor

This is not supported with the OBC model.
However, the OBC model is deprecated and we are moving to COSI, which I believe should support this.
@romayalon

@romayalon
Contributor

@nimrod-becker isn't it supported on namespace OBCs? #983
And I'm not sure about COSI; I will check.

@nimrod-becker
Contributor

nimrod-becker commented Jul 6, 2023

It might be worth checking, @schwienbier. See the PR Romy has referenced. It's only in 5.12 and up, and you are using 5.11. We are working on releasing 5.12, and once it's out it would be great if you could try it.

That would set the NS resource to read-only mode and won't allow writes.
You can also create an S3 bucket policy allowing the account created by the OBC to read/list only.
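For illustration, setting such a read/list-only policy through the S3 API could look roughly like the sketch below. It is untested here; the endpoint, admin credentials, bucket name, and especially the Principal format for the OBC-created account are assumptions that need to be adapted to your deployment:

import json
import urllib3
import boto3

urllib3.disable_warnings()  # verify=False below, so silence TLS warnings

# Placeholders: connect with an account that is allowed to set the policy.
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.noobaa.svc.cluster.local/',
    aws_access_key_id='admin_access_key',
    aws_secret_access_key='admin_secret_key',
    config=boto3.session.Config(signature_version='s3v4',
                                s3={'addressing_style': 'path'}),
    verify=False,
)

bucket = 'my-obc-bucket'  # hypothetical bucket name
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # allow the OBC account to read and list only
            # (the Principal value is an assumption; check how your NooBaa
            # version identifies accounts in bucket policies)
            "Effect": "Allow",
            "Principal": {"AWS": ["obc-account@noobaa.io"]},
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        },
        {
            # explicitly deny writes/deletes, in case the account's default
            # access to its own bucket is broader than the Allow above
            "Effect": "Deny",
            "Principal": {"AWS": ["obc-account@noobaa.io"]},
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": [f"arn:aws:s3:::{bucket}/*"],
        },
    ],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))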

@schwienbier
Author

Hi @nimrod-becker,

Thank you. That will be very handy. Is the aforementioned S3 bucket policy also available in 5.12 and up? Thank you.

Best regards

@Alansyf

Alansyf commented Jul 7, 2023

Hi @nimrod-becker / @romayalon,

Actually, we (@schwienbier and I) are looking for the same thing; my ticket was #1150.

We would like a simple, clear, and straightforward way to control the access level for different accounts.
It does not matter whether it is a NooBaa account or an OBC.

Eventually, it should be possible to configure one technical account with
write access to multiple different buckets
+
read access to multiple different buckets,

or just read access to multiple different buckets.

Is this something we are going to have in 5.12?
And may we know when 5.12 is planned to be released?


@Alansyf

Alansyf commented Jul 11, 2023

Hi @nimrod-becker / @romayalon,

I have tried the latest release, v5.13.0, and I can see the new accessMode option when creating a NamespaceStore.

I want two different NamespaceStores pointing at the same remote target bucket: one NamespaceStore will be used for read-only access and the other for read-write access.

apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  name: servicenow-devops-coll-1-ns-ro
  namespace: noobaa
spec:
  accessMode: read-only
  s3Compatible:
    endpoint: https://objectstore-3.eu-de-2.cloud.sap
    secret:
      name: s3-secret
      namespace: noobaa
    signatureVersion: v4
    targetBucket: servicenow.devops.coll-1
  type: s3-compatible

--- 

apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  name: servicenow-devops-coll-1-ns-rw
  namespace: noobaa
spec:
  accessMode: read-write
  s3Compatible:
    endpoint: https://objectstore-3.eu-de-2.cloud.sap
    secret:
      name: s3-secret
      namespace: noobaa
    signatureVersion: v4
    targetBucket: servicenow.devops.coll-1
  type: s3-compatible

But this did not work when I applied it:

NAME                             TYPE            TARGET-BUCKET              PHASE      AGE
servicenow-devops-coll-1-ns-ro   s3-compatible   servicenow.devops.coll-1   Creating   16m20s
servicenow-devops-coll-1-ns-rw   s3-compatible   servicenow.devops.coll-1   Ready      23m46s

The operator log showed:

time="2023-07-11T03:02:02Z" level=error msg="⚠️ RPC: pool.create_namespace_resource() Response Error: Code=IN_USE Message=Target already in use"

@schwienbier
Author

Dear NooBaa Developers/Support,

By the way, @Alansyf and I are actually on the same team...

Please imagine a scenario where one NooBaa account has read-only access to a remote Swift bucket, and another NooBaa account has read-write access to the same remote Swift bucket. Is that somehow possible in NooBaa v5.13.0? Thank you in advance!

Best regards

@nimrod-becker
Contributor

@Alansyf I don't see how #1150 applies here; the original request there was to support multiple write targets.
This thread is about access control.

@dannyzaken
Contributor

@schwienbier @Alansyf, can you achieve what you want by creating two different namespace buckets, each with a different bucket policy for access?
e.g.:

  • let's say you have 2 NS resources, nsr1, nsr2.
  • user1 should read from both and write to nsr1.
  • user2 should only read from both without write access
  • bucket_with_write_access - namespace bucket { wr: nsr1, read: [nsr2] }
  • bucket_with_read_only_access - namespace bucket { wr: "" , read: [nsr1, nsr2] }
    • AFAIR, in 5.13 it should be possible not to supply a write-resource.

I didn't test it myself, but it should work.
Please let me know if this isn't what you're looking for or if something is not working as expected.
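For what it's worth, a quick client-side sanity check of such a read-only namespace bucket could look like the sketch below; the endpoint and credentials are placeholders, and the bucket name is the example one above:

import urllib3
import boto3
from botocore.exceptions import ClientError

urllib3.disable_warnings()  # verify=False below, so silence TLS warnings

# Placeholders: use your S3 endpoint and the keys from the OBC/account secret.
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.noobaa.svc.cluster.local/',
    aws_access_key_id='obc_access_key',
    aws_secret_access_key='obc_secret_key',
    config=boto3.session.Config(signature_version='s3v4',
                                s3={'addressing_style': 'path'}),
    verify=False,
)

bucket = 'bucket_with_read_only_access'  # example bucket name from above

# Listing and reading should work...
print('objects:', s3.list_objects_v2(Bucket=bucket).get('KeyCount'))

# ...while writing and deleting should be rejected.
for name, op in [
    ('put', lambda: s3.put_object(Bucket=bucket, Key='probe.txt', Body=b'x')),
    ('delete', lambda: s3.delete_object(Bucket=bucket, Key='probe.txt')),
]:
    try:
        op()
        print(name, 'unexpectedly succeeded')
    except ClientError as e:
        print(name, 'rejected:', e.response['Error']['Code'])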

@Alansyf

Alansyf commented Jul 17, 2023

Hi @dannyzaken,

>   • bucket_with_write_access - namespace bucket { wr: nsr1, read: [nsr2] }
>   • bucket_with_read_only_access - namespace bucket { wr: "" , read: [nsr1, nsr2] }

If I want wr: [ns1, ns3], read: [ns2], it hits the other issue #1150, right?

@dannyzaken
Contributor

dannyzaken commented Jul 17, 2023

Yes.
I have a few questions regarding the multiple write targets:
  • What is the use case for that?
  • What is the expected behavior?
  • Assuming the expectation is that, when writing to the NS bucket, the object will be written to all write targets, what is the expected behavior when one of the writes fails?

I copied this comment to #1150, as it seems more relevant to that discussion.

@schwienbier
Author

> @schwienbier @Alansyf, can you achieve what you want by creating two different namespace buckets, each with a different bucket policy for access? e.g.:
>
>   • let's say you have 2 NS resources, nsr1, nsr2.
>   • user1 should read from both and write to nsr1.
>   • user2 should only read from both without write access
>   • bucket_with_write_access - namespace bucket { wr: nsr1, read: [nsr2] }
>   • bucket_with_read_only_access - namespace bucket { wr: "" , read: [nsr1, nsr2] }
>     • AFAIR, in 5.13 it should be possible not to supply a write-resource.
>
> I didn't test it myself, but it should work. Please let me know if this isn't what you're looking for or if something is not working as expected.

Hi @dannyzaken,

Thank you for your reply. I tested read-only access on my side by applying the following obc.yaml:

apiVersion: noobaa.io/v1alpha1
kind: NamespaceStore
metadata:
  name: aittest2
  namespace: noobaa
spec:
  s3Compatible:
    endpoint: ...
    secret:
      name: swift-access-secret
      namespace: noobaa
    signatureVersion: v4
    targetBucket: aittest2
  type: s3-compatible
  
    
---

apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: aittest2-ro
  namespace: noobaa
spec:
  namespacePolicy:
    type: Multi
    multi:
      writeResource: ""
      readResources:
      - aittest2
      
---

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: aittest2-read-only
  namespace: test-ns
spec:
  additionalConfig:
    bucketclass: aittest2-ro
  bucketName: aittest2.ro
  storageClassName: noobaa.noobaa.io

Then I used the following Python code to download a file from the target remote Swift bucket:

import urllib3
import boto3

urllib3.disable_warnings()  # verify=False below, so silence TLS warnings

# Credentials are taken from the OBC-generated secret
s3 = boto3.resource(
    's3',
    endpoint_url='https://s3.noobaa.svc.cluster.local/',
    aws_access_key_id='obc_access_key',
    aws_secret_access_key='obc_secret_key',
    config=boto3.session.Config(
        signature_version='s3v4',
        s3={'addressing_style': 'path'},
    ),
    use_ssl=True,
    verify=False,
    region_name='eu-de-2',
)

# Download ca.crt from the read-only namespace bucket
s3.Bucket('aittest2.ro').download_file('ca.crt', 'ca.crt')

The client error message is as follows:

ClientError: An error occurred (500) when calling the HeadObject operation (reached max retries: 4): Internal Server Error

The NooBaa core log seems to have the following error message:

TypeError: Cannot read properties of undefined (reading 'resource')

Can you reproduce the error on your side? Thank you!

Best regards

@schwienbier
Author

By the way, may I ask whether NooBaa will still be able to connect to remote Swift/AWS/Azure buckets after COSI replaces OBC? Thank you.

@schwienbier
Author

Hi @dannyzaken,

I tried with the following YAML. Now I cannot upload files to the target bucket:

apiVersion: noobaa.io/v1alpha1
kind: BucketClass
metadata:
  name: aittest2-ro
  namespace: noobaa
spec:
  namespacePolicy:
    type: Multi
    multi:
      writeResource: "unwritable-bucket"
      readResources:
      - aittest2

But I can still delete a file from the bucket using the following Python code:

import urllib3
import boto3

urllib3.disable_warnings()  # verify=False below, so silence TLS warnings

s3 = boto3.resource(
    's3',
    endpoint_url='http://s3.noobaa.svc.cluster.local/',
    aws_access_key_id='...',
    aws_secret_access_key='...',
    config=boto3.session.Config(
        signature_version='s3v4',
        s3={'addressing_style': 'path'},
    ),
    use_ssl=False,
    verify=False,
    region_name='eu-de-2',
)

# Delete dummy.txt from the supposedly read-only namespace bucket
s3.Object('aittest2.ro', 'dummy.txt').delete()

We can delete files from the read-only bucket. Is that a bug?

Best regards
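If deletes really do go through on the read-only bucket, one possible interim guard, along the lines of the bucket-policy suggestion earlier in this thread, might be an explicit Deny on s3:DeleteObject and s3:PutObject for the OBC account. This is a rough, untested sketch; the admin credentials and the account Principal are placeholders:

import json
import boto3

# Placeholders: connect with an account that is allowed to set the policy,
# and replace the Principal with however your NooBaa version identifies
# the OBC-created account.
s3 = boto3.client(
    's3',
    endpoint_url='https://s3.noobaa.svc.cluster.local/',
    aws_access_key_id='admin_access_key',
    aws_secret_access_key='admin_secret_key',
    config=boto3.session.Config(signature_version='s3v4',
                                s3={'addressing_style': 'path'}),
    verify=False,
)

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        # deny deletes and writes on the namespace bucket for that account
        "Effect": "Deny",
        "Principal": {"AWS": ["obc-account@noobaa.io"]},
        "Action": ["s3:DeleteObject", "s3:PutObject"],
        "Resource": ["arn:aws:s3:::aittest2.ro/*"],
    }],
}

s3.put_bucket_policy(Bucket='aittest2.ro', Policy=json.dumps(policy))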
