
Kerberos ticket #512

Open
kandolfp opened this issue Jul 4, 2022 · 16 comments · Fixed by #606

Comments


kandolfp commented Jul 4, 2022

Would it be possible to use a Kerberos ticket for the mount instead of a username and password?

It would be the equivalent of a mount on the host as:
//server/share /mnt cifs multiuser,sec=krb5,user 0 0

Users will have access to it after running kinit to obtain a Kerberos ticket. Equivalently, the pod would need a Kerberos ticket, but that could be provided by an init script.
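For illustration, a host-side sketch of the setup described above (server, share, and mount point are placeholders):

```
# /etc/fstab: mount the share once, with per-user Kerberos access.
# multiuser = each accessing user presents their own credentials;
# sec=krb5  = authenticate via Kerberos instead of username/password.
//server/share  /mnt  cifs  multiuser,sec=krb5,user  0  0
```

Each user then runs kinit to obtain a ticket before accessing /mnt; in a pod, an init container could perform the equivalent kinit step.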

@andyzhangx
Member

So what are the mount options in this case?


kandolfp commented Jul 5, 2022

The options above are multiuser,sec=krb5,user. This way the mount uses Kerberos and access is separate per user. So if the drive is mounted but your user has no Kerberos ticket, you will not be able to access it.

Get a ticket with kinit and you can access it.

It just replaces username and password with a Kerberos ticket. The ticket would be generated in the pod.


avishefi commented Sep 8, 2022

Working with Kerberos also solves operating under FIPS, since NTLM-based SMB requires HMAC-MD5.

@shaunrampersad

@kandolfp did you manage to get this to work with Kerberos? If so, how did you do it?

My issue is as @avishefi states: the cluster is installed with FIPS enabled, so NTLM won't work.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 16, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 15, 2023
@andyzhangx andyzhangx removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jan 17, 2023

yrro commented Jan 31, 2023

> the options above are multiuser,sec=krb5,user this way the mount uses kerberos and it separate per user. So if the drive is mounted but your users has no kerberos ticket you will not be able to access it.

Not sure what user means here. Isn't that the option used to specify the username? It's probably redundant when multiuser is specified.

In this mode, when a process tries to access a share, the kernel makes a cifs.spnego upcall to request encryption keys.

It's not clear how this can work when the process triggering the upcall is running in a container. I found a linux-security-module mailing list thread proposing changes to make it work smoothly but I don't know if it was adopted.
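For reference, on a plain host the upcall is typically routed to cifs.upcall(8) through request-key(8) configuration along these lines (the exact path varies by distro; this is a sketch of standard host configuration, not csi-driver-smb configuration):

```
# /etc/request-key.d/cifs.spnego.conf
# When the cifs kernel module needs SPNEGO/Kerberos keys, run cifs.upcall,
# which reads the calling user's credential cache to obtain a service ticket.
create  cifs.spnego  *  *  /usr/sbin/cifs.upcall %k
```

The open question above is which credential cache cifs.upcall finds when the process that triggered the upcall is running inside a container.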

@andyzhangx
Member

For Kerberos ticket support, the agent node needs to be domain-joined first; here is one example:
https://learn.microsoft.com/en-us/azure/azure-netapp-files/configure-nfs-clients#ubuntu-configuration


andyzhangx commented Jan 10, 2024

Since this feature is already implemented by this driver: when the Kerberos ticket expires, how does the driver handle it? Do we need to unmount the PV and mount it again?


yrro commented Jan 10, 2024

I think the existing Kerberos support is a little bit incomplete. As a user I would like to put a keytab into a Secret and have that be used to authenticate the mount; I don't want to have to manage a directory of credential cache files on every node...


yrro commented Jan 10, 2024

"To pass a ticket through secret, it needs to be acquired" - there's the problem. The tickets in a credential cache expire within a few hours.

What I'd like as a user is to have csi-driver-smb handle obtaining the ticket automatically. To do this, a long-term secret (in the form of the user's Kerberos keys stored in a keytab file) is required. I'd like to provide the keytab file to csi-driver-smb via a Secret object. The mount performed by csi-driver-smb would use the sec=krb5i option.

The difficulty comes when the kernel makes the cifs.spnego upcall in order to obtain Kerberos keys. csi-driver-smb would have to service this upcall, and at that point it would need to use the keytab to obtain a TGT, then use the TGT to obtain a service ticket and return it to the kernel (I'm handwaving here, but hopefully that makes sense to someone who knows how Kerberos-authenticated SMB mounts are implemented on Linux).
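The desired flow, sketched as the equivalent host-side commands (keytab path, principal, uid, and share are hypothetical; this is roughly what the driver would have to automate):

```shell
# Obtain a TGT non-interactively from the long-term keys in the keytab
kinit -kt /etc/smb.keytab user@EXAMPLE.COM

# Mount with Kerberos and signing; cifs.upcall later uses the credential
# cache of the uid given in cruid= to fetch the service ticket
mount -t cifs //server/share /mnt -o sec=krb5i,cruid=1000
```

Because the TGT itself expires, the kinit step would need to be repeated periodically (or on each upcall) rather than run once at mount time.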

@andyzhangx andyzhangx reopened this Jan 11, 2024
@cccsss01

I'd like this feature as well

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 14, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 14, 2024

yrro commented May 15, 2024

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 15, 2024