Failure to run on AKS cluster #1602

Open · muandane opened this issue Feb 12, 2024 · 13 comments
Labels: bug (Something isn't working)

@muandane commented Feb 12, 2024
Description

Environment

OS: it's a container, so whatever the Dockerfile is set to; the AKS node pool OS is Ubuntu 22.04
Version: v3.0.3

Steps To Reproduce

Expected behavior

It should work; I tested Kubescape on a k3s node with my ARMO account ID and secret, and it works without issues.

Actual Behavior

kubescape

Additional context

muandane added the bug (Something isn't working) label on Feb 12, 2024
@muandane (Author)

@EtienneDeneuve

@matthyx (Contributor) commented Feb 12, 2024

@muandane we have just released a new chart to address this issue; can you try again?

@matthyx (Contributor) commented Feb 12, 2024

1.18.3 released 2 minutes ago

@muandane (Author)

Thanks @matthyx, I don't get that error anymore, but I'm getting this instead. Is it because Kubescape isn't fully compatible with AKS?

[screenshot of the new error]

@muandane (Author) commented Feb 12, 2024

Does it have to do with one kubevuln pod being down due to an ephemeral-storage limitation?
`Pod ephemeral local storage usage exceeds the total limit of containers 4Gi`
[screenshot of the pod status]

@matthyx (Contributor) commented Feb 12, 2024

> Thanks @matthyx, I don't get that error anymore, but I'm getting this instead. Is it because Kubescape isn't fully compatible with AKS?

It means we didn't detect you're running on AKS... you can try setting `cloudProviderMetadata.cloudProviderEngine=aks`.
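For reference, that suggestion translates to the following Helm values. This is a minimal sketch spelled exactly as suggested above; note that a later comment in this thread says the key has since been deprecated, so treat it as historical rather than current advice:

```yaml
# Helm values sketch of the suggestion above.
# A later comment in this thread notes this key was deprecated.
cloudProviderMetadata:
  cloudProviderEngine: aks
```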

@matthyx (Contributor) commented Feb 12, 2024

> Does it have to do with one kubevuln pod being down due to an ephemeral-storage limitation? `Pod ephemeral local storage usage exceeds the total limit of containers 4Gi`

kubevuln has nothing to do with the compliance scan; it's the component that does vulnerability scans... however, it would be nice to have it running :)
Does your cluster not have PV support?
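As an aside, the quoted eviction message means the pod exceeded its containers' combined ephemeral-storage limit of 4Gi. A hedged sketch of raising that limit through Helm values follows, assuming the chart exposes per-component resource settings; the key path and numbers are assumptions, so verify them against the chart's values.yaml:

```yaml
# Sketch only: the kubevuln key path and the default limit are assumptions;
# check the kubescape-operator chart's values.yaml for the real names.
kubevuln:
  resources:
    requests:
      ephemeral-storage: 1Gi
    limits:
      ephemeral-storage: 10Gi   # raised above the 4Gi seen in the event
```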

@muandane (Author)

I'm installing Kubescape using Flux; this is the values structure I'm giving Flux:

[screenshot of the Flux values]

PS:

  • I have also added the secrets to the Kubescape deployment, but in the form of a Secret referenced via envFrom.secretRef instead of passing them as strings (see the sketch after this list).
  • For the PV: yes, my AKS cluster supports it; as you can see, I added the required storage class.
  • For `cloudProviderMetadata.cloudProviderEngine=aks`: I checked the chart locally and it's not used anywhere; I think that's the issue.
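Since the screenshot of the values isn't reproduced here, below is a hypothetical skeleton of such a Flux HelmRelease. The Secret name, cluster name, and repository wiring are illustrative rather than the poster's real manifest; `clusterName` and `account` are documented kubescape-operator chart values, and `valuesFrom` is Flux's standard way to pull a value from a Secret instead of inlining it:

```yaml
# Hypothetical Flux HelmRelease for the kubescape-operator chart.
# Names marked "illustrative" are assumptions, not from the thread.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kubescape-operator
  namespace: kubescape
spec:
  interval: 10m
  chart:
    spec:
      chart: kubescape-operator
      version: "1.18.3"            # the chart version released earlier in this thread
      sourceRef:
        kind: HelmRepository
        name: kubescape            # illustrative repository name
  values:
    clusterName: my-aks-cluster    # illustrative
  # Credentials pulled from a Secret rather than inlined as strings,
  # mirroring the envFrom.secretRef approach described above.
  valuesFrom:
    - kind: Secret
      name: kubescape-credentials  # hypothetical Secret name
      valuesKey: account           # key inside the Secret
      targetPath: account          # maps onto the chart's `account` value
```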

@matthyx (Contributor) commented Feb 12, 2024

Right, this parameter was deprecated looooong ago... sorry.
Maybe you could try setting some of the AKS-specific values (`cloudProviderMetadata.aks*`) from https://github.com/kubescape/helm-charts/tree/main/charts/kubescape-operator?
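The exact `aks*` key names live in the linked README; the names below are placeholders inferred from the prefix, not confirmed chart values, so verify them before use:

```yaml
# Placeholder key names: confirm the real aks* values in the chart README at
# https://github.com/kubescape/helm-charts/tree/main/charts/kubescape-operator
cloudProviderMetadata:
  aksSubscriptionID: "<subscription-id>"   # assumed key name
  aksResourceGroup: "<resource-group>"     # assumed key name
```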

@muandane (Author)

Yes, I added all of them as a Secret:

[screenshots taken 2024-02-12 at 5:12 PM and 5:14 PM]

@matthyx (Contributor) commented Feb 12, 2024

hmm, @dwertent any idea?

@dwertent (Contributor) commented May 2, 2024

I think this is because we don't get the cloud provider from the nodes.
I will change this behavior.

@dwertent (Contributor) commented May 2, 2024

I revisited the issue and found that Kubescape identifies the cloud provider.
Have you reviewed the documentation here? It mentions both User-Assigned Managed Identities and System-Assigned Managed Identities. Ensure that you have defined the correct one.
