Grains: Files
The file monitoring grain is based on the kprobe interface and traces file system reads and writes that go through the VFS subsystem.
There are a number of limitations to this approach.
First off, only file systems that go through VFS are visible. The most interesting exceptions to this are probably tmpfs and OverlayFS.
Therefore, at the moment it is impossible to use this grain to monitor file activity inside Docker containers, unless it happens on a passthrough volume mount, e.g. `docker volume create vol; docker run -v vol:/data ...` or `docker run -v $(pwd):/data ...`. This assumes `ingraind` is running outside of a container.
Second, only events from the mount namespace of `ingraind` are picked up. In other words, if `ingraind` is running inside a Docker container, all IO goes through an OverlayFS layer, so no events will be picked up by the grain. Similarly, if `ingraind` is running inside a chroot, only events generated within the chroot will be picked up. If `ingraind` is running on the host, it will have full visibility into the chroots.
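One way to check whether two processes share a mount namespace (and would therefore see each other's VFS events) is to compare their namespace links under `/proc`. A minimal sketch:

```shell
# Each process's mount namespace is exposed as a symlink of the form
# mnt:[<inode>]. Two processes with the same inode share a namespace.
readlink /proc/self/ns/mnt

# Compare against another PID (e.g. PID 1); differing inodes mean
# ingraind running as one of them would not see the other's mounts
# the same way.
readlink /proc/1/ns/mnt 2>/dev/null || echo "not permitted"
```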
Third, because BPF probes don't support unbounded iteration, have a 512-byte stack size limit, and cap a single program at 4096 instructions, the maximum resolved depth of a path is 8 segments. This may change in the future depending on how much space we can golf into it. Similarly, the length of any path segment is limited to `DNAME_INLINE_LEN`, which is defined as 32 in the 4.17 kernel release. You might be wondering why these don't add up to 42. I certainly am.
An example payload may look like so:
```
{'kind': 9,
 'measurement': 981,
 'name': 'file.read_byte',
 'tags': {'ino': '1234568',
          'path': '/tmp/file',
          'process': 'emacs',
          'task_id': '5368709121266'},
 'timestamp': 1532538630372793284}
```
In addition, write volume information is available through the `file.write_byte` metric.
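As a downstream example, payloads shaped like the one above are easy to aggregate. The following is a hypothetical helper (field names taken from the example payload, not from any ingraind API) that sums read volume per process:

```python
from collections import defaultdict

def bytes_per_process(events):
    """Sum file.read_byte measurements grouped by the process tag."""
    totals = defaultdict(int)
    for e in events:
        if e["name"] == "file.read_byte":
            totals[e["tags"]["process"]] += e["measurement"]
    return dict(totals)

# Two illustrative events in the payload shape shown above.
events = [
    {"kind": 9, "measurement": 981, "name": "file.read_byte",
     "tags": {"ino": "1234568", "path": "/tmp/file",
              "process": "emacs", "task_id": "5368709121266"},
     "timestamp": 1532538630372793284},
    {"kind": 9, "measurement": 19, "name": "file.read_byte",
     "tags": {"ino": "42", "path": "/etc/hosts",
              "process": "emacs", "task_id": "5368709121266"},
     "timestamp": 1532538630372793999},
]

print(bytes_per_process(events))  # {'emacs': 1000}
```

The same grouping works for `file.write_byte` by changing the `name` filter.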