Skip to content

Example: CustomStats

Lorenzo Mangani edited this page Mar 12, 2016 · 36 revisions

HOMER 5

splitter

Custom JSON-HEP Stats

HOMER 5 implements a basic HTTP Push API powered by Kamailio and the xhttp and jansson modules. The new API supports parsing and indexing external custom data sources (in form of JSON objects) received via HTTP socket and using user customizable structures as illustrated in this example featuring RTP statistics produced by rtpsniff-json and statstrmr

Example JSON Report

The first thing is to analyze (or define) our delivery JSON format - in this example we will use rtp statistics. The objects are identified by a "type" field - the order of inner fields doesn't matter as the document will get parsed, but remember only string and int types are supported (ie: use doubles as int*100).

Let's look at a sample rtp_stat type JSON object from our set:

{ "timestamp": 1457782570, "interval": 10, "streams": 2, "packets": 937, "lost": 0, "late": 0, "lost_perc": 0, "late_perc": 0, "out_of_seq": 0, "delay_min": 19172, "delay_max": 40974, "delay_avg": 30073, "jitter": 3577, "mos": 439, "type": "rtp_stat" }

Kamailio Configuration

Knowing the fields and formats to target, we will extend the HOMER/Kamailio configuration to spawn an xHTTP socket for our API configured to handle our type-specific JSON object and use it to generate new statistics using a fantastic Kamailio contraption - Here's an Example:


#!substdef "!HOMER_STATS_SERVER!tcp:MY_IP_ADDR:8888!g"

listen=HOMER_STATS_SERVER
....
loadmodule "xhttp.so"
loadmodule "jansson.so"
loadmodule "avpops.so"
...
modparam("htable", "htable", "d=>size=8;autoexpire=400")
modparam("xhttp", "url_match", "^/api/v1/stat")

...

route[CHECK_STATS] {
....

#Generic stats
    sht_iterator_start("i1", "d");
    while(sht_iterator_next("i1")) {
                $var(tag) = $(shtitkey(i1){s.select,2,:});
                $var(key) = $(shtitkey(i1){s.select,4,:});
                sql_query("cb", "INSERT INTO stats_generic (from_date, to_date, type, tag, total) VALUES($var(f_date), $var(t_date), '$var(key)', '$var(tag)', $shtitval(i1)) ON DUPLICATE KEY UPDATE total=(total+$shtitval(i1))/2");
    }
    sht_iterator_end("i1");
    sht_rm_name_re("d=>.*");
...
}

event_route[xhttp:request] {
        set_reply_close();
        set_reply_no_connect();
        xlog("L_WARN", "HTTP request received on $Rp, $hu\n");
        if($hu =~ "/api/v1/stats/push") {
                #Json is our body
                $var(json) = $rb;
                jansson_get("type", $var(json), "$var(n)");
                xlog("L_WARN","Type is $var(n)");
                jansson_get("tag", $var(json), "$var(tag)");
                xlog("L_WARN","Tag is $var(tag)");
                if($var(n) == "rtp_stat") {
                         $var(i) = 0;
                         $(avp(x)[0]) = 'interval';
                         $(avp(x)[1]) = 'streams';
                         $(avp(x)[2]) = 'packets';
                         $(avp(x)[3]) = 'lost';
                         $(avp(x)[4]) = 'late';
                         $(avp(x)[5]) = 'lost_perc';
                         $(avp(x)[6]) = 'late_perc';
                         $(avp(x)[7]) = 'out_of_seq';
                         $(avp(x)[8]) = 'delay_min';
                         $(avp(x)[9]) = 'delay_max';
                         $(avp(x)[10]) = 'jitter';
                         $(avp(x)[11]) = 'mos';

                }
                else if($var(n) == "host_stat") {
                         $var(i) = 0;
                         $(avp(x)[0]) = 'cpu_idle';
                         $(avp(x)[1]) = 'cpu_system';
                         $(avp(x)[2]) = 'cpu_io';
                }
                else {
                        xhttp_reply("404", "Stats push method not found", "", "");
                        exit;
                }

               while(is_avp_set("$(avp(x)[$var(i)])")) {
                        xlog("L_WARN", "Array value [$var(i)]: $(avp(x)[$var(i)])\n");
                        if(jansson_get("$(avp(x)[$var(i)])", $var(json), "$var(d)"))
                        {
                                $var(n) = $(var(d){s.int});
                                xlog("L_WARN","VALUE is $var(n)");
                                if($sht(d=>generic::$var(tag)::$(avp(x)[$var(i)])) == $null) $sht(d=>generic::$var(tag)::$(avp(x)[$var(i)])) = $var(n);
                                else $sht(d=>generic::$var(tag)::$(avp(x)[$var(i)])) = ($sht(d=>generic::$var(tag)::$(avp(x)[$var(i)])) + $var(n))/2;
                        }
                        $var(i) = $var(i) + 1;
                }

                xhttp_reply("200", "Ok", "done", "");
                exit;
        }

        xhttp_reply("403", "Forbidden", "", "");
        exit;
}

Example Datasource (sipcapture charts)

In order for the data to be consumed, we need to make the widgets aware of it - this is achieved by extending the datasource.js file located inside the ui/widgets folder. An example follows:

{
            "name": "Generic",
            "type": "JSON",
            "settings": {
                "path": "statistic\/generic",
                        "query": "{\n   \"timestamp\": {\n          \"from\": \"@from_ts\",\n          \"to\":  \"@to_ts\"\n   },\n  \"param\": {\n        \"filter\": [ \n             \"@filters\"\n       ],\n       \"limit\": \"@limit\",\n       \"total\": \"@total\"\n   }\n}",
                "method": "GET",
                "limit": 200,
                "total": false,
                "eval": {
                    incoming: {
                        name: "test incoming",
                        value: "var object = @incoming; return object"
                    }
                },
                "timefields" : [
                    { "field": "from_ts", "desc": "From Timestamp" },
                    { "field": "to_ts", "desc": "To Timestamp" }
                ],
                "fieldvalues": [
                    { "field": "total", "desc": "All Packets" }
                ],
                "filters": [
		    { "type": "type", "desc": "Data Statistics", options: [
	                    { "value": "delay_max", "desc": "delay_max" },
	                    { "value": "interval", "desc": "interval" },
	                    { "value": "lost_perc", "desc": "lost * 100" },
	                    { "value": "late", "desc": "late count" },
	                    { "value": "streams", "desc": "number of streams" },
	                    { "value": "out_of_seq", "desc": "out-of-seq packets" },
	                    { "value": "late_perc", "desc": "late * 100" },
	                    { "value": "lost", "desc": "lost count" },
	                    { "value": "packets", "desc": "packets" },
	                    { "value": "delay_min", "desc": "delay_min" },
	                    { "value": "delay_max", "desc": "delay_max" },
	                    { "value": "jitter", "desc": "jitter" },
	                    { "value": "mos", "desc": "mos * 100" }
			] 
		    }
                ]
            }
        }

Once the datasource is configured, the new set will become available in connected widgets:

Ship your data!

We're ready to start sending some data! In this example we are using RTP statistics produced by rtpsniff-json and sent to HOMER API using statstrmr to produce general RTP usage reports on large ranges.

rtpsniff (example)
make MOD_OUT=out_json
./bin/rtpsniff -i eth0 -b 100 -f 'udp and portrange 10000-30000' -t 10 -v | tee /var/log/rtpstat.log 

To avoid the logs filling up your disk, add them to your logrotate.conf settings:

/var/log/rtpstat.log {
    missingok
    copytruncate
    daily
    rotate 1
    compress
}

statstrmr (example)
var config = {
        API_SERVER: 'http://127.0.0.1:8888',
        API_PATH: '/api/v1/stats/push',
        LOGS: [
                {
                  tag : 'rtp_stat',
                  host : 'rtp-probe-01',
                  pattern: 'rtp_stat', // report type
                  path : '/var/log/rtpstat.log' // logfile path (rotate!)
                }
              ]
};
npm start

And voila'! Your personalized chart using custom data is now ready!

Clone this wiki locally