
Azure Kubernetes not supported? #124

Open
0xElessar opened this issue Oct 5, 2023 · 8 comments

Comments

@0xElessar

Hello,

Thanks for the amazing tool. Unfortunately, I cannot use it in my current assessment. My kubeconfig works fine and I can get all the information I want from kubectl, but when I run './kubehound.sh run', I get this error:

INFO[0000] Starting KubeHound (run_id: 6d31917e-4ebe-4797-8169-2b50b18e35a8)  component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Initializing launch options                   component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loading application configuration from file config.yaml  component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Initializing application telemetry            component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loading cache provider                        component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loaded MemCacheProvider cache provider        component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loading store database provider               component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loaded MongoProvider store provider           component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loading graph database provider               component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0000] Loaded JanusGraphProvider graph provider      component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0001] Starting Kubernetes raw data ingest           component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
INFO[0001] Loading Kubernetes data collector client      component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound
Error: raw data ingest: collector client creation: getting kubernetes config: no Auth Provider found for name "azure"
Usage:
  kubehound-local [flags]

Flags:
  -c, --config string   application config file
  -h, --help            help for kubehound-local

FATA[0001] raw data ingest: collector client creation: getting kubernetes config: no Auth Provider found for name "azure"  component=kubehound run_id=6d31917e-4ebe-4797-8169-2b50b18e35a8 service=kubehound


Any suggestions, please?

thanks

@d0g0x01
Contributor

d0g0x01 commented Oct 5, 2023

I will take a proper look next week, but it seems like a manifestation of this issue: gruntwork-io/terratest#976 (comment)

It could just be a case of including:

import _ "k8s.io/client-go/plugin/pkg/client/auth"

in https://github.com/DataDog/KubeHound/blob/main/pkg/collector/k8s_api.go
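For illustration, a minimal, self-contained sketch of what that change does; this is not the actual contents of k8s_api.go and the helper function name is hypothetical. The blank import registers client-go's auth provider plugins as a side effect, so a kubeconfig context that references one of them can be resolved when the config is loaded:

package collector

import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"

    // Side-effect import: registers the available client-go auth provider
    // plugins so kubeconfig contexts that reference them can be loaded.
    _ "k8s.io/client-go/plugin/pkg/client/auth"
)

// newClientFromKubeconfig is a hypothetical helper showing where the
// registration matters: BuildConfigFromFlags fails with "no Auth Provider
// found" if the referenced plugin was never registered.
func newClientFromKubeconfig(kubeconfigPath string) (*kubernetes.Clientset, error) {
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
    if err != nil {
        return nil, err
    }
    return kubernetes.NewForConfig(cfg)
}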

@0xElessar
Author

> I will take a proper look next week, but it seems like a manifestation of this issue: gruntwork-io/terratest#976 (comment)
>
> It could just be a case of including:
>
> import _ "k8s.io/client-go/plugin/pkg/client/auth"
>
> in https://github.com/DataDog/KubeHound/blob/main/pkg/collector/k8s_api.go

Thank you, @d0g0x01.

This helped a bit, but another error came up that I have no clue how to deal with :(

FATA[0001] raw data ingest: collector client creation: getting kubernetes config: The azure auth plugin has been removed.
Please use the https://github.com/Azure/kubelogin kubectl/client-go credential plugin instead.
See https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins for further details  component=kubehound run_id=8b0ab0e7-341a-4b7e-bf1a-e2588df908dc service=kubehound

thank you

@0xElessar
Author

Apologies for the reminder, @d0g0x01, but is there any quick fix? I will lose access to the current cluster in 2-3 days (my assessment time is ending), and I am really interested to see your tool in action in this environment.

thank you

@d0g0x01
Contributor

d0g0x01 commented Oct 6, 2023

I don't think so via the API collector :( BUT if you have kubectl access, you could use the offline mode:

  1. collect the data using the https://github.com/DataDog/KubeHound/blob/main/scripts/collectors/collect.sh script (or similar)
  2. configure kubehound to use the file collector (https://github.com/DataDog/KubeHound/blob/main/configs/etc/kubehound-reference.yaml#L21); see the config sketch after this list
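For illustration only, the collector section of the config could look roughly like this when switching to the file collector. The type string and key names below are assumptions, so verify them against kubehound-reference.yaml before use:

collector:
  # Assumed type name for the file-based collector (check the reference config)
  type: file-collector
  file:
    # Example path: directory containing the JSON files produced by collect.sh
    directory: /opt/kubehound/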

@0xElessar
Author

Much appreciated. Thank you for that, @d0g0x01!

@0xElessar
Author

Hello @d0g0x01,

that almost worked!

Unfortunately, it seems the collector is aware of namespaces and creates the whole folder structure for them, but the ingestor tool is not aware of it: it looks for the roles* files in the main folder instead of in the namespace subfolders. There is also an error about "could not write in bulk to mongo: context canceled", which I am not sure how to deal with either.

./kubehound.sh run
INFO[0001] Creating file collector from directory /opt/kubehound/  component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
INFO[0001] Loaded local-file-collector collector client  component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
[...]
INFO[0001] Running ingest k8s-cluster-role-ingest        component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
INFO[0001] Running ingest k8s-role-ingest                component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
ERRO[0001] k8s-role-ingest run: file collector stream roles: read file /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: open /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: no such file or directory  component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
ERRO[0001] k8s-cluster-role-ingest run: 1 error occurred:
        * could not write in bulk to mongo: context canceled
  component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
ERRO[0001] ingestor sequence core-pipeline run: group k8s-role-group ingest: file collector stream roles: read file /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: open /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: no such file or directory  component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound
Error: raw data ingest: ingest: group k8s-role-group ingest: file collector stream roles: read file /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: open /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: no such file or directory
Usage:
  kubehound-local [flags]

Flags:
  -c, --config string   application config file
  -h, --help            help for kubehound-local

FATA[0001] raw data ingest: ingest: group k8s-role-group ingest: file collector stream roles: read file /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: open /opt/kubehound/test-cluster/roles.rbac.authorization.k8s.io.json: no such file or directory  component=kubehound run_id=a6deb72e-a125-485f-9ac1-b89bef17c5bf service=kubehound

Anyway, it was really close. Thank you for all the assistance.

@d0g0x01
Contributor

d0g0x01 commented Oct 6, 2023

Sorry - it's not a fully supported feature yet and we mainly use it for debugging/dev. However, if you move/rename the files to match the structure defined here, you should be good:

https://github.com/DataDog/KubeHound/blob/main/pkg/collector/file.go#L24

@0xElessar
Author

I fully understand, don't worry. Offline ingestion is another great feature to have, though.

Thank you for the quick responses and help, @d0g0x01!!!
