We have two Intel X710 10G NICs on the host, but we ran into trouble when we tried to set up bonding with the config below, following https://wiki.fd.io/view/VPP/Command-line_Arguments#.22dpdk.22_parameters:
dpdk {
  uio-driver uio_pci_generic
  socket-mem 1024
  dev 0000:0e:00.0
  dev 0000:0e:00.1
  vdev eth_bond0,mode=2,slave=0000:0e:00.0,slave=0000:0e:00.1,xmit_policy=l34
}
The bond can be brought up successfully; however, as soon as any packet goes through it, the interface shows as carrier down:
root@mhv2.ngena2.pv ~$ vppctl show error
   Count                    Node                  Reason
     144         BondEthernet0-output        interface is down
root@mhv2.ngena2.pv ~$ vppctl show hardware
              Name                Idx   Link  Hardware
BondEthernet0                      1    down  Slave-Idx: 2 3
  Ethernet address 3c:fd:fe:a3:45:20
  Ethernet Bonding
    carrier down
    rx queues 1, rx desc 512, tx queues 1, tx desc 512
    cpu socket 0
    tx bytes ok                                     7040
    rx frames ok                                     746
    rx bytes ok                                   126518
    extended stats:
      rx good packets                                746
      rx good bytes                               126518
      tx good bytes                                 7040
TenGigabitEthernete/0/0            2   slave  TenGigabitEthernete/0/0
  Ethernet address 3c:fd:fe:a3:45:20
  Intel X710/XL710 Family
    carrier down
    rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
    cpu socket 0
    tx bytes ok                                     3520
    rx frames ok                                     397
    rx bytes ok                                    64470
    extended stats:
      rx good packets                                397
      rx good bytes                                64470
      tx good bytes                                 3520
      rx multicast packets                           437
      rx broadcast packets                            49
      rx unknown protocol packets                    486
      tx multicast packets                            32
      mac local errors                                 2
      mac remote errors                                2
      rx size 64 packets                              49
      rx size 65 to 127 packets                      376
      rx size 256 to 511 packets                      61
      tx size 65 to 127 packets                       32
TenGigabitEthernete/0/1            3   slave  TenGigabitEthernete/0/1
  Ethernet address 3c:fd:fe:a3:45:20
  Intel X710/XL710 Family
    carrier down
    rx queues 1, rx desc 1024, tx queues 1, tx desc 1024
    cpu socket 0
    tx bytes ok                                     3520
    rx frames ok                                     349
    rx bytes ok                                    62048
    extended stats:
      rx good packets                                349
      rx good bytes                                62048
      tx good bytes                                 3520
      rx multicast packets                           440
      rx unknown protocol packets                    440
      tx multicast packets                            32
      mac local errors                                 2
      mac remote errors                                2
      rx size 65 to 127 packets                      378
      rx size 256 to 511 packets                      62
      tx size 65 to 127 packets                       32
If these two cards are not controlled by VPP, they can be bonded in the kernel and pass traffic without any problem.
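For comparison, a working kernel-side bond with matching semantics looks roughly like the sketch below (DPDK bonding mode=2 corresponds to the kernel's balance-xor mode, and xmit_policy=l34 to the layer3+4 hash policy). This systemd-networkd snippet is illustrative, not our exact setup:

```ini
# /etc/systemd/network/25-bond0.netdev -- illustrative sketch only
[NetDev]
Name=bond0
Kind=bond

[Bond]
# balance-xor is the kernel equivalent of DPDK bonding mode=2
Mode=balance-xor
# layer3+4 is the kernel equivalent of DPDK xmit_policy=l34
TransmitHashPolicy=layer3+4
```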
Any suggestions would be appreciated.