
Service Discovery

The observers are responsible for discovering monitoring targets. Examples of targets include an open TCP port on a container, a Kubernetes Node, or a listening TCP socket on localhost. For a discovered target to result in a new monitor instance that monitors that target, you must apply discovery rules to your monitor configuration. Every monitor that supports monitoring specific services (i.e. not a static monitor like the cpu monitor) can be configured with a discoveryRule config option that specifies a rule using a mini rule language.

For example, to monitor a Redis instance that has been discovered by a container-based observer, you could use the following configuration:

  monitors:
    - type: collectd/redis
      discoveryRule: container_image =~ "redis" && port == 6379

Target Types

There are currently several target types that can be discovered by the various observers in the agent. You can match these target types explicitly in your discovery rules with the expression target == <type>, where <type> is one of:

  • pod: A Kubernetes pod as a whole. The host field will be populated with the Pod IP address, but no specific port.

  • hostport: A host and port combination, i.e. a network endpoint. This endpoint could belong to any type of runtime, e.g. a container, a remote host, or a process running on the same host with no container. This type of target will always have the host, port, and port_type fields set.

  • container: A container, e.g. a Docker container. The host field will be populated with the container IP address, but no port will be specified.

  • k8s-node: A Kubernetes Node -- the host field will be populated with the Node's Internal DNS name or IP address.

You don't have to specify the target in your discovery rules, but it can help to prevent ambiguity.
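For example, the following rule matches only network endpoints, so it will not also match the enclosing container target for the same service (the image name and port here are illustrative; collectd/nginx is used as an example monitor):

  monitors:
    - type: collectd/nginx
      discoveryRule: target == "hostport" && container_image =~ "nginx" && port == 80

Without the target == "hostport" clause, a container-based observer could produce both a container target and a hostport target that match the rest of the rule.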

Rule DSL

A rule is an expression that is matched against each discovered target to determine if a monitor should be active for a particular target. The basic operators are:

  Operator   Description
  --------   -----------------------
  ==         Equals
  !=         Not equals
  <          Less than
  <=         Less than or equal
  >          Greater than
  >=         Greater than or equal
  =~         Regex matches
  !~         Regex does not match
  &&         And
  ||         Or

For all available operators, see the expr language definition, which is what the agent uses under the covers. The agent includes a shim that lets you use the =~ operator even though it is not actually part of the expr language, mainly to preserve backwards compatibility with agent releases that predate the use of expr.

The variables available in the expression are dependent on which observer you are using and the type of target(s) it is producing. See the individual observer docs for details on the variables available.

For a list of observers and the discovery rule variables they provide, see Observers.
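As a sketch, several of these operators can be combined in one rule. The variable names below assume a container-based observer such as the docker observer, and the image name, port, and container name are illustrative:

  monitors:
    - type: collectd/memcached
      discoveryRule: container_image =~ "memcached" && port == 11211 && container_name != "memcached-canary"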

Additional Functions

In addition, these extra functions are provided:

  • Sprintf(format, args...): This is your typical printf style function that can be used to compose strings from a complex set of disparately typed variables. Underneath this is the Golang fmt.Sprintf function.

  • Getenv(envvar): Gets an environment variable set on the agent process, or a blank string if the specified envvar is not set.
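For instance, a rule could compare a discovered value against an environment variable set on the agent process. The MONITORED_IMAGE envvar below is hypothetical; any envvar visible to the agent process would work:

  monitors:
    - type: collectd/redis
      # Getenv returns a string, which =~ then treats as a regex pattern
      discoveryRule: container_image =~ Getenv("MONITORED_IMAGE") && port == 6379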

There are no implicit rules built into the agent, so each rule must be specified manually in the config file, in conjunction with the monitor that should monitor the discovered target.

Endpoint Config Mapping

Sometimes it might be useful to use certain attributes of a discovered target. These discovered targets are created by observers and will usually contain a full set of metadata that the observer obtains incidentally while doing discovery (e.g. container labels). This metadata can be mapped directly to monitor configuration for the monitor that is instantiated for that target.

To do this, you can set the configEndpointMappings option on a monitor config block (endpoint was the old name for target). For example, the collectd/kafka monitor has the clusterName config option, which is an arbitrary value used to group together broker instances. You could derive this from the cluster container label on the kafka container instances like this:

monitors:
 - type: collectd/kafka
   discoveryRule: 'container_image =~ "kafka" && port == 9999'
   configEndpointMappings:
     clusterName: 'Get(container_labels, "cluster")'

Troubleshooting

The simplest way to see what services an instance of the agent has discovered is to run the command signalfx-agent status endpoints. The fields shown will be the same values that can be used in discovery rules.

Manually Defined Services

While service discovery is useful, sometimes it is just easier to manually define services to monitor. This can be done by setting the host and port options in a monitor's config to the host and port that you need to monitor. These two values are the core of what the auto-discovery mechanism often provides for you automatically.

For example (making use of YAML references to reduce repetition):

  - &es
    type: elasticsearch
    username: admin
    password: s3cr3t
    host: es
    port: 9200

  - <<: *es
    host: es2
    port: 9300

This would monitor two Elasticsearch instances at the given hosts and ports, using the same username and password given in the top-level monitor config to connect to both of them. If you needed different configuration for the two ES hosts, you could simply define two separate monitor configurations, each with one service endpoint.

It is invalid to have both manually defined service endpoints and a discovery rule on a single monitor configuration.