Docs for Kubernetes Setup #410

Open
reefland opened this issue May 7, 2022 · 4 comments
Labels: docs (Documentation request or update), enhancement (New feature or request)

Comments

reefland commented May 7, 2022

I already have a working installation of Kube Stack Prometheus (one bundle that includes Prometheus, Grafana, Alertmanager, and lots of exporters). I recently migrated my UniFi Network Application (controller) to Kubernetes, wanted a nice dashboard, and found this project. Within a few hours I was able to convert the Docker instructions to Kubernetes manifest files.

My use case is just Unpoller running with the Prometheus plugin enabled and everything else disabled. The only thing I had to add was a PodMonitor, which makes the Unpoller exporter discoverable to Prometheus. Hopefully the steps below help someone.

The first item is a Secret which holds the UniFi username and password credentials (the values must be base64 encoded). These will be exposed as environment variables to the Unpoller container.

---
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: unpoller
    app.kubernetes.io/instance: unpoller
    app.kubernetes.io/name: unpoller
  name: unpoller-secret
type: Opaque
data:
  unifi-user: cmVkY2FjdGVk
  unifi-pass: YWxzby1yZWRjYWN0ZWQ=
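
The base64 values can be generated with a quick shell one-liner; the credentials below are just placeholders for your own. The -n flag matters so a trailing newline doesn't end up inside the encoded value:

$ echo -n 'my-unifi-user' | base64
$ echo -n 'my-unifi-password' | base64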

Next is a ConfigMap which holds the contents of the up.conf file and will be mounted in the container at /etc/unifi-poller. The url is the internal Kubernetes DNS name of my UniFi Controller container; adjust it to whatever you named yours.

---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: unpoller
    app.kubernetes.io/instance: unpoller
    app.kubernetes.io/name: unpoller
  name: unpoller-config-file
data:
  up.conf: |
    # See Github example page for more details.
    # https://github.com/unpoller/unpoller/blob/master/examples/up.conf.example

    [poller]
      debug = false
      quiet = false
      plugins = []

    [prometheus]
      disable = false
      http_listen = "0.0.0.0:9130"
    
    [influxdb]
      disable = true

    # Loki is disabled with no URL
    [loki]
      url = ""
    
    [datadog]
      disable = true

    [webserver]
      enable = false
      port   = 37288
      html_path     = "/usr/lib/unifi-poller/web"
    
    [unifi]
      dynamic = false

    [unifi.defaults]
      url =  "https://unifi-controller:8443"
      sites = ["all"]
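
One note on that url: the short service name only resolves if Unpoller runs in the same namespace as the controller (as it does here). If you put Unpoller somewhere else, the fully qualified in-cluster DNS form would be needed instead, assuming the default cluster.local cluster domain and a controller Service named unifi-controller in the unifi namespace:

    [unifi.defaults]
      # Fully qualified form, only needed across namespaces:
      url = "https://unifi-controller.unifi.svc.cluster.local:8443"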

Next is a Deployment, which ties everything together. It instructs where to mount the Secret and ConfigMap, defines the container image and version, which port to expose, how many copies of Unpoller to run, and so on. Since I'm only using the Unpoller Prometheus plugin, I defined only that port and named it metrics.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: unpoller
    app.kubernetes.io/instance: unpoller
    app.kubernetes.io/name: unpoller
  name: unpoller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: unpoller
      app.kubernetes.io/instance: unpoller
      app.kubernetes.io/name: unpoller
  template:
    metadata:
      labels:
        app: unpoller
        app.kubernetes.io/instance: unpoller
        app.kubernetes.io/name: unpoller
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - unifi-controller
            topologyKey: "kubernetes.io/hostname"

      containers:
      - name: unpoller
        image: golift/unifi-poller:2.1.3
        ports:
        - name: metrics
          containerPort: 9130
          protocol: TCP
        env:
        - name: UP_UNIFI_DEFAULT_USER
          valueFrom:
            secretKeyRef:
              name: unpoller-secret
              key: unifi-user
        - name: UP_UNIFI_DEFAULT_PASS
          valueFrom:
            secretKeyRef:
              name: unpoller-secret
              key: unifi-pass
        volumeMounts:
        - mountPath: /etc/unifi-poller
          name: unpoller-config
      volumes:
      - configMap:
          name: unpoller-config-file
        name: unpoller-config

I also added a podAffinity section above, instructing Kubernetes to run this container on the same node (hostname) where the UniFi Controller software is running; wherever that pod goes, this one follows. If the controller is not started, Unpoller won't be scheduled either. The rule looks for a label app with value unifi-controller, so adjust as needed if you named yours something else. Keeping the two applications next to each other keeps all the polling chatter local and reduces network traffic.
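
To double-check the label the affinity rule keys on, listing the controller pod with its labels (the namespace name here assumes unifi) should show app=unifi-controller:

$ kubectl get pods -n unifi --show-labels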

Last is a PodMonitor, which is what Prometheus will look for. While I placed the Unpoller deployment in the unifi namespace alongside the UniFi Controller, I put this PodMonitor in the monitoring namespace where Prometheus lives.

My installation of Prometheus automatically discovers any pod or service monitor carrying the label release: kube-stack-prometheus, so no further configuration is needed. This is not universal: a different Prometheus package is likely looking for a different label, so adjust as needed if Unpoller metrics don't appear in Prometheus after a short wait. This PodMonitor scrapes the metrics port named in the deployment and uses the URI path /metrics.
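
If you're not sure which label your Prometheus selects on, the Prometheus custom resource shows it. With the prometheus-operator CRDs installed, something along these lines (resource names will vary) prints the pod monitor selector:

$ kubectl get prometheus -n monitoring -o jsonpath='{.items[*].spec.podMonitorSelector}'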

---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  labels:
    app: unpoller
    app.kubernetes.io/instance: unpoller
    app.kubernetes.io/name: unpoller
    name: unpoller-prometheus-podmonitor
    release: kube-stack-prometheus
  name: unpoller-prometheus-podmonitor
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: unpoller
  namespaceSelector:
    matchNames:
    - unifi
  podMetricsEndpoints:
  - port: metrics
    path: /metrics
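
With the four manifests saved to files (the filenames below are just what I'd call them), applying everything and watching the rollout is:

$ kubectl apply -n unifi -f unpoller-secret.yaml -f unpoller-configmap.yaml -f unpoller-deployment.yaml
$ kubectl apply -f unpoller-podmonitor.yaml
$ kubectl rollout status deploy/unpoller -n unifi

The PodMonitor manifest already carries namespace: monitoring in its metadata, so it doesn't need a -n flag.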

FYI: the number and content of the labels were auto-generated by Kustomize. If you were writing them by hand you would probably use more meaningful values, but these work fine.

Ideally this would also have a service account defined and associated RBAC roles limiting what that service account could do. I haven't gotten around to that yet.
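
As a rough sketch of that direction (untested here), a dedicated service account with the API token mount disabled would look something like the following, plus serviceAccountName: unpoller in the Deployment's pod spec. Unpoller itself never talks to the Kubernetes API, so no Roles or RoleBindings should be needed beyond that:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: unpoller
  labels:
    app: unpoller
automountServiceAccountToken: false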

Unpoller and the UniFi Controller running side by side in the same namespace:

$ kubectl get pods -n unifi -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP            NODE
unifi-controller-76758cfcf-mlfct   1/1     Running   0          8d    10.42.0.252   k3s01
unpoller-5dbb8857f5-56zks          1/1     Running   0          16h   10.42.0.36    k3s01

And the PodMonitor in place:

$ kubectl get podmonitor -n monitoring
NAME                             AGE
unpoller-prometheus-podmonitor   16h

And if you want to test whether metrics are being collected by Unpoller and exposed, just point curl at the IP address assigned to the Unpoller pod:

$ curl  http://10.42.0.36:9130/metrics

# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 4.7439e-05
go_gc_duration_seconds{quantile="0.25"} 6.4822e-05
go_gc_duration_seconds{quantile="0.5"} 7.5783e-05
go_gc_duration_seconds{quantile="0.75"} 0.00010628
go_gc_duration_seconds{quantile="1"} 0.000329951
go_gc_duration_seconds_sum 0.542228114
go_gc_duration_seconds_count 5420
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 12
# HELP go_info Information about the Go environment.
# TYPE go_info gauge
go_info{version="go1.16.3"} 1
# HELP go_memstats_alloc_bytes Number of bytes allocated and still in use.
# TYPE go_memstats_alloc_bytes gauge

(Hundreds of lines removed from output for brevity).
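
If the pod IP isn't reachable from wherever you're running curl (typical when testing from outside the cluster), a port-forward works just as well:

$ kubectl port-forward -n unifi deploy/unpoller 9130:9130
$ curl http://localhost:9130/metrics   # from a second terminal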

All of the documented dashboards work just fine in Grafana.

If someone has defined reasonable alerts for Alertmanager based on these metrics, that would be handy.
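
As a bare-minimum starting point (just a sketch, assuming your Prometheus rule selector also keys on release: kube-stack-prometheus), a PrometheusRule that fires when the Unpoller target stops being scraped could look like this; the labels on the up series should match the PodMonitor setup above:

---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: unpoller-alerts
  namespace: monitoring
  labels:
    release: kube-stack-prometheus
spec:
  groups:
  - name: unpoller
    rules:
    - alert: UnpollerTargetDown
      expr: up{namespace="unifi", pod=~"unpoller-.*"} == 0
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: Unpoller metrics endpoint has not been scraped successfully for 5 minutes.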

platinummonkey added the enhancement (New feature or request) label Dec 4, 2022
platinummonkey added the docs (Documentation request or update) label Dec 21, 2022
ndlanier commented Jan 8, 2023

Why not just make a helm chart instead of setting up manifests?

jlpedrosa commented May 4, 2024

Hey! I already made a Helm chart for this. Should I open a PR so it lives upstream (in your repo)? The only slightly "complicated" (not crazy) part is the release process of the chart.

I also have integration with GrafanaOperator so it automatically imports the charts into the cluster.

cc: @platinummonkey

platinummonkey (Contributor) commented
Yeah feel free to make a PR!

jlpedrosa commented May 5, 2024

@platinummonkey here is the PR: #708. BTW, I was wrong: I had a Kustomize application, not a Helm chart, so I did the chart from scratch.
