
ELK-based dashboards for F5 WAFs

This is a community-supported repo providing ELK-based dashboards for F5 WAFs.

How does it work?

ELK stands for Elasticsearch, Logstash, and Kibana. Logstash receives logs from the F5 WAF, normalizes them, and stores them in an Elasticsearch index. Kibana lets you visualize and navigate the logs using purpose-built dashboards.

Requirements

The provided Kibana dashboards require Kibana version 7.4.2 or later. If you are using the provided docker-compose.yaml file, this requirement is met.

Installation Overview

It is assumed you will be running ELK using the Quick Start directions below. The template in "logstash/conf.d" creates a new Logstash pipeline that ingests WAF logs and stores them in Elasticsearch. If you use the supplied docker-compose.yaml, this template is copied into the Docker container instance for you. Once the WAF logs are being ingested into the index, import the files from the kibana folder to create all necessary objects, including the index pattern, visualizations, and dashboards.
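
For orientation, below is a minimal sketch of what such a pipeline configuration looks like. The template shipped in logstash/conf.d is the authoritative version; the filter contents, index name, and Elasticsearch host shown here are illustrative assumptions.

input {
  tcp {
    port => 5144                          # port the WAF sends syslog messages to
  }
}
filter {
  # the real template parses and normalizes the WAF log fields here
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]       # assumed service name from docker-compose
    index => "waf-logs-%{+YYYY.MM.dd}"    # illustrative index name
  }
}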

Quick Start

Deploying ELK Stack

Use docker-compose to deploy your own ELK stack.

$ docker-compose -f docker-compose.yaml up -d
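
To confirm the stack came up, you can check container status and query the Kibana status API. The localhost URL and plain HTTP are assumptions based on a default local deployment; adjust for your environment.

$ docker-compose -f docker-compose.yaml ps
$ curl -s http://localhost:5601/api/status | jq '.status.overall.state'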

NOTE

The ELK stack docker container will likely exceed the docker host's default virtual memory limits. Use these directions to increase the limit on the docker host machine; if you do not, the ELK container will continually restart itself and never fully initialize.
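
On most Linux docker hosts, the limit in question is the vm.max_map_count kernel setting, which Elasticsearch requires to be at least 262144. A typical (non-persistent) fix looks like this; see the linked directions for making it survive reboots:

$ sudo sysctl -w vm.max_map_count=262144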


Dashboards Installation

Import the dashboards into Kibana through the UI (Kibana -> Management -> Saved Objects) or use the API calls below.

KIBANA_URL=https://your.kibana:5601
jq -s . kibana/overview-dashboard.ndjson | jq '{"objects": . }' | \
curl -k --location --request POST "$KIBANA_URL/api/kibana/dashboards/import" \
    --header 'kbn-xsrf: true' \
    --header 'Content-Type: text/plain' -d @- \
    | jq

jq -s . kibana/false-positives-dashboards.ndjson | jq '{"objects": . }' | \
curl -k --location --request POST "$KIBANA_URL/api/kibana/dashboards/import" \
    --header 'kbn-xsrf: true' \
    --header 'Content-Type: text/plain' -d @- \
    | jq
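
To verify the import succeeded, you can list the dashboard titles through Kibana's saved objects API, reusing the KIBANA_URL variable from above:

curl -k -s "$KIBANA_URL/api/saved_objects/_find?type=dashboard" \
    --header 'kbn-xsrf: true' | jq '.saved_objects[].attributes.title'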

NGINX App Protect Configuration

The Logstash log ingestion pipeline in this solution assumes that you have configured NGINX App Protect to use the default log format, which is essentially a comma-delimited scheme. If you are using a custom logging profile JSON file, be sure the default format is being used. Also, ensure that the logging destination in the app_protect_security_log directive in your nginx.conf file is configured with the hostname or IP address of the Logstash instance and the correct TCP port (the default in this solution is 5144). Take a look at the official docs for examples.
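
As a sketch, the relevant nginx.conf lines might look like the following. The log profile path reflects a typical NGINX App Protect install, and the Logstash hostname is a placeholder; substitute your own values.

app_protect_enable on;
app_protect_security_log_enable on;
# send security events to the logstash instance on TCP port 5144
app_protect_security_log "/opt/app_protect/share/defaults/log_default.json" syslog:server=logstash.example.com:5144;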

NOTE The Logstash listener in this solution is configured to listen for TCP syslog messages on a custom port (5144). If you have deployed NGINX App Protect on an SELinux-protected system (such as Red Hat or CentOS), you will need to configure SELinux to allow remote syslog messages on that custom port. See the configuration instructions for an example of how to accomplish this.
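
One common way to allow this, assuming the semanage utility (from the policycoreutils packages) is available, is to label the custom port as a syslog port:

$ sudo semanage port -a -t syslogd_port_t -p tcp 5144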

BIG-IP Configuration

The BIG-IP logging profile must be configured to use the "splunk" logging format.

# tmsh list security log profile LOG_TO_ELK

security log profile LOG_TO_ELK {
    application {
        LOG_TO_ELK {
            ...omitted...
            remote-storage splunk
            servers {
                logstash.domain:logstash-port { }
            }
        }
    }
}
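
For reference, a profile along these lines can also be created from the command line. The hostname and port below are placeholders, and exact tmsh syntax can vary across TMOS versions, so treat this as a sketch rather than a verified one-liner:

# tmsh create security log profile LOG_TO_ELK application replace-all-with { LOG_TO_ELK { remote-storage splunk servers add { logstash.example.com:5144 } } }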

Supported WAFs

  • NGINX App Protect
  • BIG-IP ASM, Advanced WAF

Screenshots

Overview Dashboard

[three screenshots]

False Positives Dashboard

[three screenshots]