Prometheus to Opentsdb exporter


Description

Prometheus-to-Opentsdb is a tool that executes queries on Prometheus and stores the results in Opentsdb.

It is not a remote storage; the typical use case is metrology.

Here is an example: consider a big Kubernetes cluster with a lot of pods running on it. A lot of metrics are generated, usually consumed by Prometheus. Prometheus is not really designed to store metrics over long periods; it is rather used for day-to-day supervision. Only a subset of these metrics (possibly downsampled) is relevant for drawing trends: this is where Prometheus-to-Opentsdb comes in.

It executes user-defined queries on Prometheus, maps the tags, and stores the results in a long-term time-series storage system: Opentsdb.

Usage

Configuration

Prometheus-to-Opentsdb's configuration is divided into three parts.

The first part, the exporter configuration file, defines the "where": where are the backends?

{
  "PrometheusURL" : "http://127.0.0.1:9090",
  "OpentsdbURL" : "http://127.0.0.1:4242",
  "LoggingLevel" : "debug",
  "BulkSize" : 20,
  "TheadCount" : 2,
  "PushTimeout" : "500ms"
}
  • PrometheusURL defines the Prometheus URL - required
  • OpentsdbURL defines the Opentsdb URL - required
  • LoggingLevel defines the logging level (possible values: debug, info, warn, error, fatal, panic) - default value: info
  • BulkSize defines the size of the bulk pushed to Opentsdb - default value: 50
  • ThreadCount defines how many goroutines will push data to Opentsdb - default value: 1
  • PushTimeout defines the timeout when pushing data to Opentsdb - default value: 1 minute, format: [quantity][unit] (valid units: ms, s, m or h), examples: 10s, 500ms (see the sketch below)
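
The duration syntax for PushTimeout looks like Go's standard duration format; assuming that is the case, a value can be checked with time.ParseDuration from the standard library. This is a minimal illustrative sketch, not code taken from the project:

package main

import (
	"fmt"
	"time"
)

// Minimal sketch: values such as "500ms" or "10s" follow the duration
// syntax described above, so time.ParseDuration can validate them.
// This is an illustration only, not the project's actual code.
func main() {
	for _, v := range []string{"500ms", "10s", "1m", "2h"} {
		d, err := time.ParseDuration(v)
		if err != nil {
			fmt.Printf("%s: invalid duration: %v\n", v, err)
			continue
		}
		fmt.Printf("%s parses to %v\n", v, d)
	}
}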

The second part, the query description file, defines the "what": what's my query and how do I map the results?

{
    "MetricName" : "myMetric",
    "Query" : "prometheus_http_requests_total{code!=\"302\"}",
    "Step" : "30s",
    "AddTags" : {
      "aTagNameIWantToAdd" : "aTagValueIWantToAdd"
    },
    "RemoveTags" : [ "aTagNameIDoNotWantToKeepFromPrometheus" ],
    "RenameTags" : {
      "aTagNameIWantToRename" : "aNewTagName"
    }
}
  • MetricName defines the metric name for the gathered data - required
  • Query defines the Prometheus query that has to be executed - required
  • Step defines the step for the Prometheus query - required
  • Tags are automatically mapped from Prometheus to Opentsdb, but the mapping can be tuned (see the sketch after this list):
    • AddTags defines the tags that have to be added to the metrics
    • RemoveTags defines the tag names that have to be removed from the metrics
    • RenameTags defines the tag names that have to be renamed
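
To make the tag tuning concrete, here is a minimal sketch (in Go) of how the sample configuration above would transform a label set returned by Prometheus. The function and variable names are illustrative and do not come from the project's source:

package main

import "fmt"

// applyTagMapping is an illustrative sketch of the AddTags / RemoveTags /
// RenameTags behaviour described above; it is not the project's actual code.
func applyTagMapping(tags, add map[string]string, remove []string, rename map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range tags {
		out[k] = v
	}
	for _, k := range remove { // drop unwanted tags
		delete(out, k)
	}
	for from, to := range rename { // rename tags, keeping their values
		if v, ok := out[from]; ok {
			delete(out, from)
			out[to] = v
		}
	}
	for k, v := range add { // add static tags
		out[k] = v
	}
	return out
}

func main() {
	prometheusLabels := map[string]string{
		"aTagNameIDoNotWantToKeepFromPrometheus": "x",
		"aTagNameIWantToRename":                  "y",
		"code":                                   "200",
	}
	fmt.Println(applyTagMapping(
		prometheusLabels,
		map[string]string{"aTagNameIWantToAdd": "aTagValueIWantToAdd"},
		[]string{"aTagNameIDoNotWantToKeepFromPrometheus"},
		map[string]string{"aTagNameIWantToRename": "aNewTagName"},
	))
	// Resulting tags: aNewTagName=y, code=200, aTagNameIWantToAdd=aTagValueIWantToAdd
}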

The third part defines the command-line parameters for a specific execution:

  • -f and -t (both required) define the date range for the execution. They use the RFC3339 date format (see the sketch after this list).
    • YYYY-MM-DDThh:mm:ss.lllZ where YYYY is the year, MM the month, DD the day, hh the hours, mm the minutes, ss the seconds, lll the milliseconds and Z means UTC+0. Sample: 2019-07-31T17:03:00.000Z.
    • YYYY-MM-DDThh:mm:ss.lll+09:00 or YYYY-MM-DDThh:mm:ss.lll-04:00 where the time zone is given as an offset (+xx:00 or -yy:00).
  • -s activates simulation mode: data is gathered from Prometheus and mapped as it would be for Opentsdb, but it is only printed, not sent. Simulation mode is disabled by default.
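
Both timestamp forms are plain RFC3339 values; assuming the standard library is used to parse them, a value can be checked like this (an illustrative sketch, not the project's code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Both the UTC ("Z") form and the offset form are valid RFC3339 timestamps.
	for _, v := range []string{"2019-07-31T17:03:00.000Z", "2019-07-31T17:03:00.000+09:00"} {
		t, err := time.Parse(time.RFC3339, v)
		if err != nil {
			fmt.Printf("%s: invalid: %v\n", v, err)
			continue
		}
		fmt.Printf("%s -> %s\n", v, t.UTC().Format(time.RFC3339Nano))
	}
}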

Why such a configuration mechanism? The objective is:

  • to define the "where" only once: you will probably generate more than one metric from Prometheus, so this configuration file will be reused.
  • to define each metric definition ("what") only once: it will certainly be executed more than once, so this configuration file will also be reused.
  • an execution combines an existing "where" with an existing "what" and defines the date range.

Execution

Usage of Exporter:
  -e string
    	Exporter configuration file (where ?)
  -q string
    	Query description file (what ?)
  -f string
    	From / start date (when ?)
  -t string
    	To / end date (when ?)
  -s	Simulation mode (don't push to Opentsdb)

Samples:

  • ./main -q ~/conf/query.json -e ~/conf/exporter.conf -f 2019-07-23T00:00:00.000Z -t 2019-07-23T23:59:59.999Z : effective execution
  • ./main -q ~/conf/query.json -e ~/conf/exporter.conf -f 2019-07-23T00:00:00.000Z -t 2019-07-23T23:59:59.999Z -s : simulation

Return codes:

  • 0: everything was fine
  • 1: configuration problem
  • 2: execution problem

Docker

Get from Docker Hub

Docker images are available on Docker Hub.

docker pull barasher/prometheus_to_opentsdb:1.3

Build

docker build -t barasher/prometheus_to_opentsdb:latest .

How to use

Inside the container, the configuration files are located at:

  • /etc/p2o/exporter.json for the exporter configuration
  • /etc/p2o/query.json for the query configuration

The files can be injected as volumes when executing the container.

It also uses two environment variables:

  • P2O_FROM that defines the start date
  • P2O_TO that defines the end date

Sample:

docker run \
  -v /home/barasher/conf/exporter.conf:/etc/p2o/exporter.json \
  -v /home/barasher/conf/query.json:/etc/p2o/query.json \
  --env P2O_FROM='2019-08-11T13:00:00.000Z' \
  --env P2O_TO='2019-08-11T13:32:00.000Z' \
  --rm \
  barasher/prometheus_to_opentsdb:latest

The idea is that:

  • you can provide your clients with a "base" Docker image that contains the exporter configuration.
  • your clients can build their own Docker image containing the query configuration if they want to (or they can simply provide the configuration as a volume at each execution).
  • your clients execute a Docker image (theirs or yours), specifying the date range for the query.

Metrics mapping

Metric names, tag names and tag values are normalized to fit Opentsdb constraints: any character that is not [a-z], [A-Z], [0-9], ., -, / or _ is replaced by _.
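
As an illustration of that rule (a sketch only, not the project's actual implementation), the normalization can be expressed as a regular-expression replacement:

package main

import (
	"fmt"
	"regexp"
)

// invalidChars matches every character that is not allowed by Opentsdb in
// metric names, tag names and tag values, as described above.
var invalidChars = regexp.MustCompile(`[^a-zA-Z0-9./_-]`)

// normalize is an illustrative sketch of the normalization rule, not the
// project's actual code.
func normalize(s string) string {
	return invalidChars.ReplaceAllString(s, "_")
}

func main() {
	fmt.Println(normalize("nginx:1.19"))   // nginx_1.19 (the dot is kept, the colon is replaced)
	fmt.Println(normalize("my tag value")) // my_tag_value
}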

Changelog