Provide a way to implement conditional routing of events to specific outputs #3120
Comments
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Still a common ask, reopening for PM visibility.
Pinging @elastic/agent (Team:Agent)
Ping @nimarezainia, is this something you have on your radar?
Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)
I think with the v2 architecture of Elastic Agent this will become possible. Each input can select its output. I would not directly call it conditional routing, but if you have 2 logfile inputs, each could use its own output.
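A minimal sketch of what this per-input output selection could look like in a standalone Elastic Agent policy, using the `use_output` setting; the output names, hosts, and paths below are placeholder assumptions, not from this thread:

```yaml
# Hypothetical elastic-agent.yml fragment: two outputs, each selected
# by a different logfile input via `use_output`.
outputs:
  default:
    type: elasticsearch
    hosts: ["https://es.example.com:9200"]   # placeholder host
  secondary:
    type: logstash
    hosts: ["logstash.example.com:5044"]     # placeholder host

inputs:
  - type: logfile
    id: app-logs
    use_output: default        # events from this input go to Elasticsearch
    streams:
      - paths: ["/var/log/app/*.log"]
  - type: logfile
    id: audit-logs
    use_output: secondary      # events from this input go to Logstash
    streams:
      - paths: ["/var/log/audit/*.log"]
```

This is routing at the input level rather than per-event conditional routing: the whole stream from an input goes to the output it names.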
@ruflin @cmacknz, to address our Beats users who need this feature, I would say migrating to standalone is a viable option. However, we do want to provide this functionality for Fleet-managed agents as well, where the output is configured at the data stream level (discussion point: I believe it can't be at the integration level, since users would need more granularity, e.g. logs vs. metrics). Perhaps the short-term first step here would be to apply the config in an advanced YAML box with guidance on how it should be done. The proper solution, in my opinion, is a UI control under each data stream to choose the output for that data stream. I'd be curious to hear whether we need the V2 shipper to be complete for this?
I personally think this is overkill. I would rather set it on the level of the integration. If a user wants to ship logs and metrics to two different outputs, the integration has to be configured twice, once with logs and once with metrics enabled for example. |
I have a use case where we have different types of logs and modules on a single server. Ideally I'd like to configure the module and log file A to be shipped to Elasticsearch, with the module using the respective ingest pipeline, while log file B is shipped to Kafka, where a Logstash cluster reads from it for further processing and enrichment of the data.

While it's absolutely doable to set up 2 or even 3 Filebeat instances, from a maintenance-overhead point of view it just seems like overkill. The configuration and automation effort is really high. The Filebeat RPM comes with all the right directories, config files, and the systemd unit. To set up an additional beat, we'd have to change all of this substantially: a dedicated state directory (/var/lib), a dedicated config directory (/etc/filebeat, which on every upgrade we'd probably have to manually check and compare against the previous version's files), and the systemd unit file, which we'd also need to duplicate, or change the existing one to handle instances (%i). Generally speaking, it's just really not convenient to run multiple instances of Filebeat. (Which, to be fair, isn't a problem specific to Filebeat, but a general "problem" for software that comes pre-packaged for running a single instance, when you want or need to run multiple instances.)

Maybe an alternative to supporting multiple outputs in Filebeat could be to support the use case where you have one or more regular inputs shipped to a non-Elasticsearch output, while also having modules enabled which ship to Elasticsearch. While technically this would be a 2-output setup, it's actually only the modules which use the ES output. Or, simply make it easier to run a multi-instance setup with the pre-built RPM provided by Elastic (an instance-based systemd unit file, maybe also scripts that prepare the necessary directories and default config files on first run, etc.).

Curious to hear your thoughts on this.
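For reference, the instance-based unit file mentioned above could be sketched as a systemd template unit, with `%i` expanded per instance. This is an illustrative sketch, not a shipped unit file; the per-instance directory layout is an assumption:

```ini
# Hypothetical /etc/systemd/system/filebeat@.service template unit.
# Each instance (e.g. filebeat@kafka) gets its own config, data, and log paths.
[Unit]
Description=Filebeat instance %i
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/share/filebeat/bin/filebeat \
    -c /etc/filebeat/%i/filebeat.yml \
    --path.home /usr/share/filebeat \
    --path.config /etc/filebeat/%i \
    --path.data /var/lib/filebeat-%i \
    --path.logs /var/log/filebeat-%i
Restart=always

[Install]
WantedBy=multi-user.target
```

One instance per output would then be started with e.g. `systemctl enable --now filebeat@kafka`, each with its own config directory, which is exactly the duplication overhead the comment describes.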
Hi! We're labeling this issue as |
This is still relevant. |
My use case would be for the reroute processor in Filebeat.
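For context, the reroute processor referenced here exists today as an Elasticsearch ingest processor that redirects documents to another data stream; the commenter is asking for equivalent behavior inside Filebeat itself. A minimal sketch of the Elasticsearch-side pipeline (the condition and dataset name are placeholder assumptions):

```json
{
  "processors": [
    {
      "reroute": {
        "if": "ctx.service?.name == 'nginx'",
        "dataset": "nginx.access",
        "namespace": "default"
      }
    }
  ]
}
```

Doing this in the Beat would route events before they ever reach an output, which is what this issue asks for.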
Hi! We're labeling this issue as |
Still relevant |
It would be nice for Beats to provide a feature that allows conditional routing of events to outputs, e.g. type A log events routed to Logstash instance A and type B log events routed to Logstash instance B. Perhaps the generic filtering capability could be extended to provide this kind of conditional routing. Currently, to implement this, the user has to configure multiple Beats instances, or send the events to a Logstash instance and perform the routing from there.
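The Logstash-side workaround mentioned above can be sketched as a single pipeline that fans events out by field; the hosts, topic, and `log_type` field are placeholder assumptions for illustration:

```conf
# Hypothetical Logstash pipeline: one Beats input, conditional outputs.
input {
  beats { port => 5044 }
}

output {
  if [log_type] == "A" {
    elasticsearch { hosts => ["https://es-a.example.com:9200"] }
  } else if [log_type] == "B" {
    kafka {
      bootstrap_servers => "kafka.example.com:9092"
      topic_id => "logs-b"
    }
  } else {
    elasticsearch { hosts => ["https://es-default.example.com:9200"] }
  }
}
```

This works, but it requires every Beat to ship to the Logstash tier first, which is the extra hop this issue asks to avoid.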