I already have a working installation of Kube Stack Prometheus (includes Prometheus, Grafana, Alertmanager, lots of exporters as one bundle). I recently migrated my UniFi Network Application (controller) to Kubernetes and wanted a nice dashboard and found this project. Within a few hours I was able to convert the Docker instructions to Kubernetes manifest files.
My use case is just Unpoller running with the Prometheus plugin enabled; everything else is disabled. The only thing I had to add was a PodMonitor, which makes the Unpoller exporter discoverable to Prometheus. Hopefully the steps below help someone.
First item is a Secret that holds the UniFi ID and password credentials (values here must be base64 encoded). These will be exposed as environment variables in the Unpoller container.
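A sketch of that Secret, assuming Unpoller's environment-variable overrides for the default controller credentials; the name, namespace, and credential values are examples, so substitute your own:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: unpoller-secrets
  namespace: unifi
type: Opaque
data:
  # base64-encoded UniFi credentials; these example values decode
  # to "unpoller" and "changeme"
  UP_UNIFI_DEFAULT_USER: dW5wb2xsZXI=
  UP_UNIFI_DEFAULT_PASS: Y2hhbmdlbWU=
```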
Next is a ConfigMap that holds the contents of the up.conf file and is mounted in the container at /etc/unifi-poller. The url is the internal Kubernetes DNS name of my UniFi Controller container; adjust it to whatever you named yours.
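A minimal sketch of that ConfigMap, assuming a Service named unifi-controller in the unifi namespace (adjust the url to your own setup) and the Prometheus plugin's default listen port:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: unpoller-config
  namespace: unifi
data:
  up.conf: |
    [prometheus]
      disable = false
      http_listen = "0.0.0.0:9130"

    # everything but the Prometheus plugin is disabled
    [influxdb]
      disable = true

    [unifi.defaults]
      # internal cluster DNS name of the UniFi Controller; adjust as needed
      url = "https://unifi-controller.unifi.svc.cluster.local:8443"
      verify_ssl = false
```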
Next is a Deployment, which ties everything together: it instructs where to mount the Secret and ConfigMap, defines the container image and version, which port to expose, how many copies of Unpoller to run, and so on. Since I'm only using the Unpoller Prometheus plugin, I defined just that one port and named it metrics.
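A sketch of that Deployment, wired to the Secret and ConfigMap above and including the podAffinity rule described next; the image tag is a placeholder, so pin whatever version you actually run:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: unpoller
  namespace: unifi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: unpoller
  template:
    metadata:
      labels:
        app: unpoller
    spec:
      affinity:
        podAffinity:
          # schedule on the same node as the pod labeled app=unifi-controller
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - unifi-controller
              topologyKey: kubernetes.io/hostname
      containers:
        - name: unpoller
          image: ghcr.io/unpoller/unpoller:latest  # pin a specific version
          envFrom:
            - secretRef:
                name: unpoller-secrets  # UniFi credentials as env vars
          ports:
            - name: metrics
              containerPort: 9130
          volumeMounts:
            - name: config
              mountPath: /etc/unifi-poller  # up.conf lands here
      volumes:
        - name: config
          configMap:
            name: unpoller-config
```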
I also added a podAffinity section above instructing Kubernetes to run this container on the same node (hostname) where the UniFi Controller software is running; wherever that goes, this will follow. If the controller is not started, this pod won't be scheduled. The affinity looks for a label app with value unifi-controller, so if you name yours something else, adjust as needed. Using pod affinity to keep the two applications next to each other keeps all the polling chatter local and reduces network traffic.
Lastly is a PodMonitor, which is what Prometheus looks for. While I placed the Unpoller Deployment in the unifi namespace alongside the UniFi Controller, I put this PodMonitor in the monitoring namespace where Prometheus is located.
The installation of Prometheus I have automatically discovers any pod or service monitor with the label release: kube-stack-prometheus, so no further configuration is needed. This is not universal: a different Prometheus package likely looks for a different label, so adjust as needed if Unpoller metrics are not seen within Prometheus after a few seconds. This PodMonitor scrapes the metrics port named in the Deployment and uses the URI path /metrics.
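The PodMonitor described above could look like this; the release label is what my kube-stack-prometheus install discovers, so swap in whatever label your Prometheus is configured to match:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: unpoller-prometheus-podmonitor
  namespace: monitoring
  labels:
    # label that my Prometheus install auto-discovers; adjust as needed
    release: kube-stack-prometheus
spec:
  # the Unpoller pods live in a different namespace than this PodMonitor
  namespaceSelector:
    matchNames:
      - unifi
  selector:
    matchLabels:
      app: unpoller
  podMetricsEndpoints:
    - port: metrics   # the named port from the Deployment
      path: /metrics
```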
FYI - As to the number and content of labels, they were auto-generated by Kustomize. If you were to write labels manually, you probably would use more meaningful descriptions. But these work fine.
Ideally this would also have a service account defined and associated RBAC roles limiting what that service account could do. I haven't gotten around to that yet.
Unpoller and UniFi Controller running side by side in the same namespace:
$ kubectl get pods -n unifi -o wide
NAME READY STATUS RESTARTS AGE IP NODE
unifi-controller-76758cfcf-mlfct 1/1 Running 0 8d 10.42.0.252 k3s01
unpoller-5dbb8857f5-56zks 1/1 Running 0 16h 10.42.0.36 k3s01
And the PodMonitor in place:
$ kubectl get podmonitor -n monitoring
NAME AGE
unpoller-prometheus-podmonitor 16h
And if you want to verify that metrics are being pulled by Unpoller and made available, it is easy to test: just point curl at the IP address assigned to the Unpoller pod:
$ curl http://10.42.0.36:9130/metrics
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 4.7439e-05
go_gc_duration_seconds{quantile="0.25"} 6.4822e-05
go_gc_duration_seconds{quantile="0.5"} 7.5783e-05
go_gc_duration_seconds{quantile="0.75"} 0.00010628
go_gc_duration_seconds{quantile="1"} 0.000329951
go_gc_duration_seconds_sum 0.542228114
go_gc_duration_seconds_count 5420
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 12
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.16.3"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge
(Hundreds of lines removed from output for brevity).
All of the dashboards documented work just fine in Grafana.
If someone has defined reasonable alerts for Alertmanager based on these metrics that would be handy.
Hey! I already made a Helm chart for this. Should I open a PR so it lives upstream (your repo)? The only thing that's a bit "complicated" (not crazy) is the release process of the chart.
I also have integration with GrafanaOperator so it automatically imports the dashboards into the cluster.