Describe the bug:
When vault-configurer is down, other pods in the cluster cannot pull secrets.
Here is the log from the vault pod while other pods are trying to pull secrets and the configurer is not up:
login unauthorized due to: lookup failed: service account unauthorized; this could mean it has been deleted or recreated with a new token
This happens because vault-configurer configures the k8s_auth method with its own token_reviewer_jwt (if we don't set one explicitly), which prevents the local Vault pod token from being used as the reviewer JWT, as described here: https://github.com/banzaicloud/bank-vaults/blob/52cf23528eb30c90c6612d94c046697dcb3f06aa/internal/vault/auth_methods.go#L217-L222
So when the configurer is down, Kubernetes deletes the token that was used to configure the k8s_auth method, and when Vault's Kubernetes auth then tries to use that token against the API, it reports that the token has expired.
The only way to overcome this is to set the kubernetes_host key and avoid the default config, as you can see here: https://github.com/banzaicloud/bank-vaults/blob/52cf23528eb30c90c6612d94c046697dcb3f06aa/internal/vault/auth_methods.go#L64-L66
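As a rough illustration of the workaround, setting kubernetes_host explicitly lives in the auth method's config block of the Vault custom resource's externalConfig. This is a minimal sketch only: the host URL, role name, service account names, and policy name below are placeholder assumptions, not values taken from this issue.

```yaml
# Sketch of the relevant externalConfig section; all concrete values here
# (host URL, role, service accounts, policy) are placeholder assumptions.
externalConfig:
  auth:
    - type: kubernetes
      config:
        # Setting kubernetes_host explicitly avoids the default config path,
        # so logins no longer depend on the configurer's own reviewer JWT.
        kubernetes_host: https://kubernetes.default.svc
      roles:
        - name: default
          bound_service_account_names: ["default"]
          bound_service_account_namespaces: ["default"]
          policies: ["allow_secrets"]
```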
Expected behaviour:
Pods should be able to pull secrets when configurer is down as well.
We should not need to set kubernetes_host when running Vault in the same cluster.
Steps to reproduce the bug:
1) Omit kubernetes_host and token_reviewer_jwt from your Kubernetes auth config.
2) Sync vault configurer
3) Scale vault configurer replicas to 0
4) Try to pull secrets with k8s_auth
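The steps above can be sketched as shell commands. This assumes the configurer runs as a Deployment named "vault-configurer" in a "vault" namespace and that a Kubernetes auth role named "default" exists; all of these names are assumptions, not taken from the issue.

```shell
# Sketch only: deployment name, namespace, and role are assumptions.

# 2) Let the configurer sync, then 3) scale it down:
kubectl -n vault scale deployment vault-configurer --replicas=0

# 4) From a pod in the cluster, try to log in via the k8s auth method:
JWT=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
vault write auth/kubernetes/login role=default jwt="$JWT"
# With the configurer down, the vault pod logs:
#   login unauthorized due to: lookup failed: service account unauthorized
```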
Environment details:
Kubernetes version: v1.21.5
Cloud-provider/provisioner: EKS
bank-vaults version: 1.15.2
/kind bug