
bgp_agent_map - map multiple bgp_id to 0.0.0.0/0 (all sflow agent IDs). Is this supported? #551

Open
floatingstatic opened this issue Nov 30, 2021 · 4 comments

Comments

@floatingstatic
Contributor

Description
Currently I have a bgp_agent_map that looks something like this:

bgp_ip=192.0.2.1 ip=0.0.0.0/0
bgp_ip=192.0.2.2 ip=0.0.0.0/0

The bgp_ip addresses are route collectors/servers that carry all the routes I expect to see in my flow data. I want to map them to any sFlow agent from which we receive flow data. In normal operation BGP RIB enrichment works just fine: the RIB from 192.0.2.1 is used to map flows to prefixes. However, if the BGP session for 192.0.2.1 is down, prefix information is no longer populated in the aggregated flows (Kafka), even though the BGP session for 192.0.2.2 is up and is a valid source for that data. It seems the bgp_agent_map operates on a first-match basis and sticks with that match as long as the map is not reloaded.

Short of some external script that monitors logs and re-orders/hot-reloads (SIGUSR2) the bgp_agent_map, is there a supported way to provide a failover mechanism for BGP RIB enrichment of sFlow agents?
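For what it's worth, the external-script workaround could be sketched roughly as below. This is purely hypothetical and not part of pmacct: it assumes you already have some probe (BMP, SNMP, log scraping, ...) that tells you a BGP peer's session is down, and it assumes the map path and sfacctd PID shown. Since the map is matched first-entry-first, demoting the dead peer's entries below the healthy peer's catch-all entry and then hot-reloading the map (SIGUSR2) should make sfacctd fail over to the healthy RIB.

```python
#!/usr/bin/env python3
"""Hypothetical failover helper for sfacctd's bgp_agent_map (a sketch)."""
import os
import signal

MAP_PATH = "/etc/pmacct/bgp_agent.map"  # assumed location


def reorder_map(lines, dead_bgp_ip):
    """Return the map lines with entries for a down peer moved to the end,
    preserving the relative order of everything else."""
    token = f"bgp_ip={dead_bgp_ip} "  # trailing space: don't match 192.0.2.10
    alive = [line for line in lines if not line.startswith(token)]
    dead = [line for line in lines if line.startswith(token)]
    return alive + dead


def demote_and_reload(dead_bgp_ip, sfacctd_pid):
    """Rewrite the map on disk and ask sfacctd to re-read it (SIGUSR2)."""
    with open(MAP_PATH) as f:
        lines = f.read().splitlines()
    new_lines = reorder_map(lines, dead_bgp_ip)
    if new_lines != lines:
        with open(MAP_PATH, "w") as f:
            f.write("\n".join(new_lines) + "\n")
        os.kill(sfacctd_pid, signal.SIGUSR2)  # hot-reload maps
```

How the down-peer detection is done is left open; the helper only handles the re-order-and-reload step.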

Version

sFlow Accounting Daemon, sfacctd 1.7.7-git [20211107-0 (ef37a415)]

Appreciation
Please consider starring this project to boost our reach on github!
DONE!

@floatingstatic
Contributor Author

floatingstatic commented Dec 2, 2021

Relatedly, I see this verbiage in the docs, which makes me think a primary/backup setup is possible (at least for pmacctd/uacctd), but I may be wrong (or this may not apply to sfacctd):

    pmacctd and uacctd daemons are required to use this map with at most two "catch-all"
    entries working in a primary/backup fashion (see for more info bgp_agent.map.example
    in the examples section): this is because these daemons do not have a NetFlow/sFlow
    source address to match to.

In short, a primary/backup BGP RIB to sFlow agent mapping is what I am looking to achieve here.
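Based on the quoted docs and on bgp_agent.map.example, my understanding is that the pmacctd/uacctd primary/backup arrangement would look something like the fragment below (hedged: the primary/backup semantics shown in the comments are what the docs describe for those daemons, not behavior I have verified for sfacctd):

```
! Primary: used while its BGP session is up
bgp_ip=192.0.2.1 ip=0.0.0.0/0
! Backup: meant to take over if the primary's session is down
bgp_ip=192.0.2.2 ip=0.0.0.0/0
```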

@paololucente
Member

Dear Jeremiah ( @floatingstatic ),

Thanks for asking this. Effectively, the way this is supported today, as you said, is with the help of something external that removes the map entry when the BGP session is down, or pushes it down so that another entry takes priority. The verbiage leads one to think that things are more auto-magical than they are.

Despite the status quo, I see how this feature would be useful, at least for the 0.0.0.0/0 (catch-all) use case. I propose to keep this on the radar: leave this issue open and mark it as an enhancement.

Paolo

@floatingstatic
Contributor Author

Thanks @paololucente. I was thinking more about our use case the other day, and another solution (again, for our use case) would be a feature flag that enables a global RIB into which all BGP peers' routes are added, instead of keeping a RIB per peer. A quick look at the source code suggests this is not a trivial change, however. I appreciate the response; I will look into managing this externally for now.

@paololucente
Member

Indeed, @floatingstatic, I see how that could work. Unfortunately, I confirm your assessment: it is far from a quick win.
