DstAS/SrcAS values always equal to 0 for the Global Routing Table while correct for VRFs #777

Open
doup123 opened this issue Apr 24, 2024 · 0 comments

doup123 commented Apr 24, 2024

I have configured pmacct to receive NetFlow v9 messages (including ingress and egress VRF ID packet fields) from a Cisco router, and have also established iBGP peering between them. The router sends both IPv4 and VPNv4 routes to pmacct, which are correctly received.

I have also configured:

  • flow_to_rd_map: to associate interfaces with RDs
  • bgp_peer_src_as_map: to specify the peer src_as for specific interfaces
  • pre_tag_map: to enrich the flows with some selected data passed as labels (encoded as a map)

Below you may find the corresponding config:

bgp_daemon: true
bgp_daemon_ip: 0.0.0.0
bgp_daemon_max_peers: 100
bgp_daemon_as: XXXXX
nfacctd_as: bgp
nfacctd_net: bgp


#bgp_table_dump_file: /var/log/pmacct/bgp-$peer_src_ip-%H%M.log
bgp_table_dump_refresh_time: 120
bgp_table_dump_kafka_broker_host: XXXXX
bgp_table_dump_kafka_topic: pmacct-bgp-dump

# https://github.com/pmacct/pmacct/blob/master/CONFIG-KEYS#L2833 - needed to define where the source peer AS should be taken from
bgp_peer_src_as_type: map

nfacctd_port: 2055
! Set the plugin buffers and timeouts for performance tuning
aggregate: src_host, dst_host, peer_src_ip, peer_dst_ip, in_iface, timestamp_start, timestamp_end, src_as, dst_as, peer_src_as, peer_dst_as, label
plugins: kafka
plugin_buffer_size: 204800
plugin_pipe_size: 20480000
nfacctd_pipe_size: 20480000

! Configure the Kafka plugin
kafka_output: json
kafka_broker_host: XXXXX
kafka_topic: pmacct-enriched2
kafka_refresh_time: 60
kafka_history: 5m
kafka_history_roundoff: m

! MAPS DEFINITION
maps_entries: 2000000
!bgp_table_per_peer_buckets: 12
!aggregate_primitives: /etc/pmacct/primitives.lst
sampling_map: /etc/pmacct/sampling.map
pre_tag_map: pretag.map
pre_tag_label_encode_as_map: true
flow_to_rd_map: flow_to_rd.map
bgp_peer_src_as_map: peers.map
logfile: /var/log/pmacct1.log
daemonize: false

pmacct version

nfacctd -V
NetFlow Accounting Daemon, nfacctd 1.7.10-git [20240405-1 (6362a2c9)]

Arguments:
 'CFLAGS=-fcommon' '--enable-kafka' '--enable-jansson' '--enable-l2' '--enable-traffic-bins' '--enable-bgp-bins' '--enable-bmp-bins' '--enable-st-bins'

Libs:
cdada 0.5.0
libpcap version 1.10.3 (with TPACKET_V3)
rdkafka 2.0.2
jansson 2.14

Plugins:
memory
print
nfprobe
sfprobe
tee
kafka

System:
Linux 5.4.0-155-generic #172-Ubuntu SMP Fri Jul 7 16:10:02 UTC 2023 x86_64

Compiler:
gcc 12.2.0

I have, however, bumped into a very strange problem:
The dst_as for flows related to VPNv4 routes is correctly identified and injected into the aggregated result, but the dst_as for flows related to IPv4 routes is set to 0.

In both cases the dst_as in the original NetFlow packets (pcap) is 0, but pmacct substitutes the value only in the VPNv4 case.

Shouldn't routes that do not correspond to any RD (i.e. IPv4 routes in the global table) be used to enrich all flows that do not match the flow_to_rd_map criteria?

This is how I have constructed the flow_to_rd_map:

id=0:AS:1234	ip=1.2.3.4 in=111
id=0:AS:1235	ip=1.2.3.4 in=112
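
For reference, this is the general shape of such a map as I understand it from the pmacct examples; the RD, exporter address and ifIndex values below are placeholders rather than my real ones:

! flow_to_rd.map sketch (placeholder values)
! id is the route distinguisher in <type>:<administrator>:<assigned-value> form,
! ip is the exporter address, in is the input ifIndex taken from the flow record
id=0:65001:10	ip=192.0.2.1	in=111
id=0:65001:20	ip=192.0.2.1	in=112
! flows from interfaces not listed here match no entry and therefore carry no RD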

Am I missing anything?
P.S.
The rest of the maps (pretag.map and bgp_peer_src_as_map) work as expected, enriching the flows appropriately.
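
For reference, these follow the documented map formats; a minimal sketch with placeholder ASN, exporter and ifIndex values (not my real ones):

! peers.map (bgp_peer_src_as_map) sketch: id is the ASN to use as peer_src_as
id=65001	ip=192.0.2.1	in=111
id=65002	ip=192.0.2.1	in=112

! pretag.map sketch: set_label attaches a label to matching flows
! (my real map encodes key/value labels, as per pre_tag_label_encode_as_map)
set_label=sitename	ip=192.0.2.1	in=111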

Originally posted by @doup123 in #768 (comment)
