add logger configuration for json output #2222

Open
SaschaJohn opened this issue Apr 25, 2024 · 3 comments

Comments

@SaschaJohn

SaschaJohn commented Apr 25, 2024

Hello,

we're running strongswan in a container on Kubernetes.

We log to stderr and ingest those logs to a centralized system. According to our log configuration, the lines currently look like

"2024-04-25T06:14:35 06[IKE1] <az|3> received DELETE for ESP CHILD_SA with SPI 6ec3571d"

My personal perception is that this format is not very parser-friendly, although we managed to create a regex that fits the majority of messages.

Would it be possible to add a more verbose JSON-format output, configurable from the logger configuration?
E.g.
charon.filelog..json_output = no | yes

that results in log messages like:

{time:"2024-04-25T06:14:35", thread:"06", system:"IKE", loglevel:1, ikename:"az", ikeid:3, msg:"received DELETE for ESP CHILD_SA with SPI 6ec3571d"}

Thanks for taking this into consideration.

@tobiasbrunner
Member

My personal perception is that this format is not very parser-friendly, although we managed to create a regex that fits the majority of messages.

Not sure what issue you had with parsing these messages with a regex, as it seems pretty straightforward to me (even with the IKE name/ID being optional, it seems quite easy to parse).
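For reference, a rough, untested sketch of such a regex (Go/RE2 named-group syntax as used by Loki, derived only from the example line above):

(?P<time>\S+) (?P<thread>\d+)\[(?P<group>[A-Z]+)(?P<level>\d)\]( <(?P<ikename>[^|]+)\|(?P<ikeid>\d+)>)? (?P<msg>.*)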

Note that depending on your use case, there are already structured log messages available via vici protocol. And charon-systemd's journal logger also logs these elements separately.

What's your use case anyway?

Would it be possible to add a more verbose JSON-format output, configurable from the logger configuration?

I guess it would be possible to write the messages in a more structured way to files. Note that your example isn't JSON at all, though. That would look more like this (changed some of the properties to what we use in the vici logger):

{"time": "2024-04-25T06:14:35", "thread": 6, "group": "IKE", "level": 1, "ikesa-name": "az", "ikesa-uniqueid": 3, "msg": "received DELETE for ESP CHILD_SA with SPI 6ec3571d"}

However, there are some potential issues. One is that JSON doesn't support multi-line strings, so newlines in log messages would either have to be escaped as \n, or used as separators to export the lines as an array (i.e. something like "msg": [ "line1", "line2" ]). Similarly, " would have to be escaped as \".

Another possible issue is that logging each log message as a JSON object wouldn't result in a valid JSON file. We could theoretically start the file with [ and add a , between the objects, but that would also never be valid (the array is never closed, unless that's done manually before processing the file) and it would cause problems when appending messages to existing files. So what we'd basically produce is a "stream" of JSON objects and it would depend on the parser that processes it whether that's acceptable or not. For instance, jq accepts such input just fine (it actually won't accept it if the objects are comma-separated or as an incomplete array), and with the --slurp option it would even automatically convert the object "stream" into an array of objects in order to apply filters on it.
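To illustrate the last point, a rough sketch of such an object "stream" and how jq handles it (the file name and the second message are made up):

$ cat charon.log
{"time": "2024-04-25T06:14:35", "thread": 6, "group": "IKE", "level": 1, "msg": "received DELETE for ESP CHILD_SA with SPI 6ec3571d"}
{"time": "2024-04-25T06:14:36", "thread": 7, "group": "IKE", "level": 1, "msg": "closing CHILD_SA az{3}"}

# jq applies a filter to each object in turn
$ jq '.msg' charon.log

# --slurp first collects the stream into an array, so array filters apply
$ jq --slurp 'map(select(.level <= 1)) | length' charon.log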

tobiasbrunner removed the new label Apr 25, 2024
@SaschaJohn
Author

SaschaJohn commented Apr 25, 2024

Hello @tobiasbrunner, thanks for answering and also for starting to think about this feature request.
We're initiating a VPN connection from a Kubernetes container.

This means that we have nothing running except charon-systemd and a sleep to keep the container running.
A journal logger or vici consumer would be additional workload within the container.

Therefore, logging to stderr fits perfectly for ingesting all log messages into our central Grafana Loki instance.

In Loki we then split the message into labels according to a regex, and as you said, this was pretty much straightforward.
We then use the parts to create a Grafana log view that shows them in table columns, with each log line represented as a row.

The only drawback is that the regex syntax for splitting the message into labels ends up looking as "beautiful" in Loki as this:

| regexp "(?P<method>\\w+) (?P<path>[\\w|/]+) \\((?P<status>\\d+?)\\) (?P<duration>.*)"

You might get the point ;) Whereas the JSON would be split into labels automatically. We might now see this as an argument to reconsider our logging environment.

I also know the ELK stack a bit for the same purpose, where JSON is also a better fit for Kibana visualization.

And the system can automatically derive labels from the JSON keys. The same applies to Loki.
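For comparison, a rough sketch of the Loki query with a JSON parser stage instead of the regexp (the stream selector label is just an assumption, and the field names follow the example above):

{container="strongswan"} | json | level <= 1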

I don't know how often you currently use " and multi-line strings in logging, but I agree this is a point to take into consideration as well. This might complicate the implementation.

The idea is not to have a valid JSON file as output, but to log each line/message as a JSON object.

@tobiasbrunner
Member

This means that we have nothing running except charon-systemd

What about systemd? Or are you running that daemon without systemd?

A journal logger or vici consumer would be additional workload within the container.

Maybe you could bind mount the journal socket (e.g. /run/systemd/journal/socket) from the host into the container so the daemon could log to the journald running on the host. Promtail seems to be able to get logs from there into Loki.
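A rough sketch of what such a bind mount could look like in a pod spec (the image name is a placeholder, and it assumes journald on the node actually exposes that socket):

apiVersion: v1
kind: Pod
metadata:
  name: strongswan
spec:
  containers:
  - name: strongswan
    image: my-strongswan-image   # placeholder
    volumeMounts:
    - name: journal-socket
      mountPath: /run/systemd/journal/socket
  volumes:
  - name: journal-socket
    hostPath:
      path: /run/systemd/journal/socket
      type: Socket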

I don't know how often you currently use " and multi-line strings in logging

" is used in some outputs (e.g. subject DNs of certificates) but we more commonly use ' for such stuff as the former has to be escaped in C strings as well. Multi-line strings are only used on higher log levels e.g. to dump message data (see the not directly related #2066 for examples). We'd also have to escape \, by the way, as that can be part of the ASCII view of dumped binary data.
