Disable Access Point Usage #538

Closed
MarkSpencerTan opened this issue Aug 18, 2021 · 11 comments · May be fixed by #732
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@MarkSpencerTan

Is your feature request related to a problem? Please describe.
We've run into an issue with dynamic provisioning in the EFS CSI driver: we hit the limit on the number of access points a single EFS file system can have. Using EFS for EKS doesn't necessarily require access points to persist data, right? I think this severely limits the scalability of using EFS in EKS with this driver in certain situations, because it caps the number of PVCs you can provision at 120. Beyond 120, the user has to create a new EFS file system with a new StorageClass, which is problematic for anyone trying to save money with a long-term solution, and it requires manual intervention.

Would it be possible to have an option to disable access points when the user doesn't need them, to lift this limit?

Describe the solution you'd like in detail
Add an option for the StorageClass's provisioningMode parameter that supports a non-access-point type. Maybe call it "nfs" or something else.
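
For reference, a dynamically provisioned StorageClass today looks roughly like the sketch below; `efs-ap` is the only provisioningMode the driver currently supports, and the commented-out alternative is purely hypothetical, just to show where a non-access-point mode could plug in (the file system ID is a placeholder):

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  # Today the driver only supports access-point-based dynamic provisioning.
  provisioningMode: efs-ap
  fileSystemId: fs-0123456789abcdef0   # placeholder
  directoryPerms: "700"
  # Hypothetical: a mode such as "nfs" (name to be decided) could skip access
  # point creation entirely and avoid the per-file-system access point limit.
  # provisioningMode: nfs
```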

@kbasv

kbasv commented Aug 19, 2021

Hey @MarkSpencerTan ,

Would a provisioning mode that provisions file systems in place of access points help your use case? #310

@MarkSpencerTan (Author)

@kbasv we would like to keep the file system count to a minimum so we can have a price-friendly backup plan / long-term option.

@MarkSpencerTan (Author)

@kbasv also, if the EFS CSI driver could support reusing just one access point and then creating subdirectories inside it per PVC, that would be another way for us to use just one EFS file system. I'm not sure if that's already possible with the current implementation, since I have seen mentions of mountOptions and volumeHandle examples where you can specify subdirectories inside an access point.
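
For reference, the static-provisioning examples I'm referring to use a volumeHandle of the form FileSystemId:Subpath:AccessPointId. A minimal sketch of that (all IDs and the subpath are placeholders, and this only reflects my reading of the docs):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-subdir-pv
spec:
  capacity:
    storage: 5Gi               # required by the API; EFS itself is elastic
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - tls                      # commonly shown in access point examples
  csi:
    driver: efs.csi.aws.com
    # Format: [FileSystemId]:[Subpath]:[AccessPointId] -- placeholders below
    volumeHandle: fs-0123456789abcdef0:/team-a:fsap-0123456789abcdef0
```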

@innovia

innovia commented Aug 23, 2021

@kbasv we need this ability too, and we would love to get this escalated; we have over 120 mounts.

I checked with AWS; it doesn't seem like they support a service limit increase for this.

I can contribute code if you show me where it needs a fix.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 21, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 21, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue.

In response to the /close command in the previous comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@paalkr

paalkr commented Apr 1, 2022

I know this issue is closed, but I would like to make a comment for future Googlers ;) Creating an access point for each PVC is, in my opinion, a huge shortcoming of the dynamic provisioner. In one of the K8s clusters I manage we have more than 1000 ReadWriteMany PVCs (and more than 4500 pods). The EFS CSI driver's dynamic provisioner doesn't work for us due to the access point limit per file system. Sure, creating more file systems is possible, but then you also have to create several storage classes and make sure not to have more than 120 PVCs per StorageClass, which is not very user friendly. Also, the ability to share the IOPS and provisioned throughput of one file system across all pods saves us money; not all pods "spike" at the same time.

So in our use case a better model would be for each PVC to just get its own subfolder (yes, I know an access point provides more features around file ownership and access constraints than a plain subfolder does). There is actually an old, and unfortunately unmaintained, provisioner out there that does exactly this: provision a subfolder in a shared EFS per PVC, https://github.com/goto-opensource/aws-efs-csi-pv-provisioner

We have been running this provisioner in a large-scale cluster for years, and it's working great. Most of the "hard work" (mounting/unmounting EFS, etc.) is handled by the official EFS CSI driver anyway.

@jgoeres

jgoeres commented Apr 7, 2022

@paalkr Your post pretty much exactly matches our experience with the old provisioner, and the trouble we've been having with the AP limitation since we had to make the switch. A mode that just creates subdirectories is what we need.

@thesuperzapper

@paalkr @jgoeres it seems like @jonathanrainer is proposing a directory-based approach (not using access points) in PR #732; you may want to check it out!

But in the meantime, the official NFS CSI Driver already uses a directory-based approach (so it has no 120 limit), and it works with arbitrary NFS servers (including EFS).
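
A rough sketch of what that could look like with csi-driver-nfs, pointing its StorageClass at an EFS mount target (the file system ID and region below are placeholders, not from this issue):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-via-nfs
provisioner: nfs.csi.k8s.io
parameters:
  # DNS name of the EFS file system's mount target -- placeholder values
  server: fs-0123456789abcdef0.efs.us-east-1.amazonaws.com
  share: /
reclaimPolicy: Delete
volumeBindingMode: Immediate
mountOptions:
  - nfsvers=4.1                # EFS supports NFSv4.x only
```

Each PVC then gets its own subdirectory under the share, so there is no per-file-system access point limit to hit.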
