Performance of open-nic-dpdk #2
Hi @aneesullah, can you say more about your test setup? The QDMA performance report was characterized separately by a different team, without using OpenNIC; however, performance should be similar since both use the QDMA. Depending on machine capabilities, the performance using pktgen-dpdk should be something along the lines of the following:
Hi @cneely-amd,

Architecture: x86_64

More details about hardware (H/W path / Device / Class / Description):

From `numactl --hardware`: so, NUMA is not enabled in the BIOS; is it required?

3) The server is connected to a U280 card, both QSFPs are connected through a loopback cable, and pktgen is run with the command:

Following is the output:

```
\ Ports 0-1 of 2                         Copyright(c) <2010-2020>, Intel Corporation
  Flags:Port        : -------Single     :0  -------Single     :1
Link State          :                                              ---Total Rate---
Pkts/s Max/Rx       :   12936320/12730810    12936288/12728434    25872608/25459244
       Max/Tx       :   12936704/12728481    12936321/12730847    25873025/25459328
MBits/s Rx/Tx       :            8555/2443            8553/2444           17108/4888
Broadcast           :                    0                    0
Multicast           :                    0                    0
Sizes 64            :           1501510360           1501000821
      65-127        :                    0                    0
      128-255       :                    0                    0
      256-511       :                    0                    0
      512-1023      :                    0                    0
      1024-1518     :                    0                    0
Runts/Jumbos        :                  0/0                  0/0
ARP/ICMP Pkts       :                  0/0                  0/0
Errors Rx/Tx        :                  0/0                  0/0
Total Rx Pkts       :           1497474002           1496964629
      Tx Pkts       :           1496985888           1497477119
      Rx MBs        :              1006302              1005960
      Tx MBs        :               287421               287515
Pkt Size/Tx Burst   :              64 / 32              64 / 32
Pattern Type        :              abcd...              abcd...
Tx Count/% Rate     :        Forever /100%        Forever /100%
Pkt Size/Tx Burst   :              64 / 32              64 / 32
TTL/Port Src/Dest   :        4/ 1234/ 5678        4/ 1234/ 5678
Pkt Type:VLAN ID    :      IPv4 / TCP:0001      IPv4 / TCP:0001
802.1p CoS/DSCP/IPP :              0/ 0/ 0              0/ 0/ 0
VxLAN Flg/Grp/vid   :           0000/ 0/ 0           0000/ 0/ 0
IP Destination      :          192.168.1.1          192.168.0.1
   Source           :       192.168.0.1/24       192.168.1.1/24
MAC Destination     :    15:16:17:18:19:1a    15:16:17:18:19:1a
-- Pktgen 20.11.3     (D: 10ee:903f/43:00.0   10ee:913f/43:00.1)
```

It seems from the pktgen output that only 64-byte packets are generated. How can I generate larger packets?
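(Not part of the original comment.) As a sanity check on the pktgen counters above, the reported `MBits/s Rx` is consistent with the Rx packet rate once you account for Ethernet's 20 bytes of per-frame wire overhead (preamble + SFD + inter-frame gap). A small Python sketch:

```python
# Sanity check on the pktgen display: "MBits/s Rx" should equal the Rx
# packet rate times the on-wire size of a 64-byte frame.  Every Ethernet
# frame costs an extra 20 bytes on the wire (7B preamble + 1B SFD + 12B
# inter-frame gap).
WIRE_OVERHEAD = 20  # bytes per frame

def wire_mbps(pps: int, frame_bytes: int) -> float:
    """On-wire throughput in Mbit/s for a given packet rate and frame size."""
    return pps * (frame_bytes + WIRE_OVERHEAD) * 8 / 1e6

# Port 0 Rx rate from the display: 12,730,810 pkts/s at 64B frames
print(round(wire_mbps(12_730_810, 64)))   # 8555, matching "MBits/s Rx" for port 0

# Theoretical 64B packet-rate ceiling of a single 100GbE link:
max_pps = 100e9 / ((64 + WIRE_OVERHEAD) * 8)
print(round(max_pps / 1e6, 1))            # 148.8 Mpps
```

So at 64-byte frames the two ports together are moving roughly 17 Gbit/s on the wire; larger frames are needed to approach line rate in Gbit/s terms.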
Hi @aneesullah, in your testing with pktgen-dpdk, can you try something like the following:
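(The suggested commands did not survive in the archived thread; as a guess at the intent, pktgen-dpdk's runtime CLI lets you change the per-port packet size from the `Pktgen:/>` prompt, e.g.:)

```
set all size 1518    # largest standard (non-jumbo) Ethernet frame
set all rate 100
start all
```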
@aneesullah
Hi @cneely-amd,

Regards,
Another related question: |
@aneesullah ~70-80Gbps (fluctuating in that range):
~95Gbps (fairly constant):
I also have a Ryzen 5950 with 32GB for testing, but right now my GPU is using up most of the lanes, and I need to swap the order of my PCIe cards before I can test it. I'll try that as an experiment when I get a chance. Best regards,
Hi @aneesullah, I tried my Ryzen 5950X machine and I'm getting the following:
This is with (as before):
lscpu reports:
(P.S. note: my Ryzen machine might have some overclocking settings enabled due to the latest Radeon software driver issue in the news.)
Hi @cneely-amd,
Hi @aneesullah,
Hi @aneesullah and @cneely-amd, during packet transfer I get "Timeout on request to dma internal csr register", "Packet length mismatch error", and "Detected Fatal length mismatch" errors, which block any further transfers. Please let me know how to resolve this. Thanks in advance.
Hi,
How can I reproduce the results reported in "Xilinx Answer 71453 QDMA Performance Report" with pktgen? I am only getting 10 Gbps on a Threadripper Pro with a U280 card. Are these results only for the QDMA example design, or do they apply to OpenNIC as well?
Regards,
Anees
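(Not in the original issue; since NUMA locality and core placement come up later in the thread, a typical pktgen-dpdk invocation pins worker cores on the NUMA node local to the card. The PCI addresses `43:00.0`/`43:00.1` are taken from the pktgen banner above; the core numbers and binary path are placeholders for the actual system:)

```
# EAL options before "--", pktgen options after.
# -l : cores to use (pick cores on the card's NUMA node)
# -a : allow-list the two U280 QDMA physical functions
# -m : map [rx:tx] cores to each port
sudo ./pktgen -l 0-4 -n 4 -a 43:00.0 -a 43:00.1 -- -m "[1:2].0,[3:4].1"
```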