- Bug
- Resolution: Unresolved
- Medium
- None
- 23.06, 23.10, 24.02
- None
server 1 <-> server 2 <-> server 3

Sample config (server 1, can be easily adapted to the other nodes):

VPP:
lcp default netns dataplane
lcp lcp-sync on
lcp lcp-auto-subint on
create loopback interface instance 0
lcp create loop0 host-if loop0
set interface state loop0 up
set interface ip address loop0 100.100.1.1/32
lcp create GigabitEthernet10/0/0 host-if e0
set interface state GigabitEthernet10/0/0 up
set interface ip address GigabitEthernet10/0/0 2001:db8:fffa::/127
Linux:
ip -n dataplane route add 100.100.1.2/32 via inet6 2001:db8:fffa::1 src 100.100.1.1
ip -n dataplane route add 100.100.1.3/32 via inet6 2001:db8:fffa::1 src 100.100.1.1
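Not part of the original report, but a quick way to sanity-check the config above from both sides, assuming the linux-cp plugin created the tap for GigabitEthernet10/0/0 as e0 inside the dataplane netns:

# VPP side: interface addresses and the FIB
root@vpp-vm1:~# vppctl show interface addr
root@vpp-vm1:~# vppctl show ip fib
# Linux side: the tap pair and routes inside the dataplane netns
root@vpp-vm1:~# ip -n dataplane addr show dev e0
root@vpp-vm1:~# ip -n dataplane route show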
From the vpp-dev mailing list:
I've been looking into reducing the number of IPv4 addresses in our backbone and wanted to route IPv4 traffic with an IPv6 next-hop (RFC 5549). Unfortunately, it looks like the traffic does not get forwarded to the tap device.
Tcpdump on the underlying interfaces shows the traffic being forwarded within VPP (the nodes are virtual machines with virtio-net; traffic was captured on every link while trying to connect from vpp-vm1 to vpp-vm3). However, traceroute does not work (VPP does not seem to generate ICMP time-exceeded messages correctly when an IPv6 next-hop is used for an IPv4 route), and the loop0 IPv4 address of the directly connected VPP node is not reachable either.
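A VPP packet trace (my addition, not in the original mail) would show how far these packets get through the forwarding graph; this assumes the VM NICs are driven by the virtio-input node, otherwise substitute the appropriate input node:

root@vpp-vm1:~# vppctl trace add virtio-input 50
root@vpp-vm1:~# ping 100.100.1.3 -c 1
root@vpp-vm1:~# vppctl show trace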
Checking the FIB on Linux, it looks okay:
root@vpp-vm1:~# ip r
100.100.1.2 via inet6 2001:db8:fffa::1 dev e0 proto bird src 100.100.1.1 metric 32
100.100.1.3 via inet6 2001:db8:fffa::1 dev e0 proto bird src 100.100.1.1 metric 32
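As an extra sanity check (not in the original report), ip route get shows the source address the kernel itself would select for these destinations; if RTA_PREFSRC is honored, the reply should include src 100.100.1.1. If the shell is not already inside the dataplane netns, prefix the commands with ip -n dataplane accordingly:

root@vpp-vm1:~# ip route get 100.100.1.2
root@vpp-vm1:~# ip route get 100.100.1.3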
But in VPP it looks like the netlink RTA_PREFSRC attribute is not used for the FIB entry:
root@vpp-vm1:~# vppctl
vpp# show ip fib 100.100.1.2
ipv4-VRF:0, fib_index:0, flow hash:[src dst sport dport proto flowlabel ] epoch:0 flags:none locks:[default-route:1, lcp-rt:1, ]
100.100.1.2/32 fib:0 index:29 locks:2
  lcp-rt-dynamic refs:1 src-flags:added,contributing,active,
    path-list:[47] locks:12 flags:shared, uPRF-list:31 len:1 itfs:[1, ]
      path:[63] pl-index:47 ip6 weight=1 pref=32 attached-nexthop: oper-flags:resolved,
        2001:db8:fffa::1 GigabitEthernet10/0/0
      [@0]: ipv6 via 2001:db8:fffa::1 GigabitEthernet10/0/0: mtu:9000 next:6 flags:[] 52540013110052540013100086dd
 forwarding: unicast-ip4-chain
  [@0]: dpo-load-balance: [proto:ip4 index:31 buckets:1 uRPF:31 to:[0:0]]
    [0] [@5]: ipv4 via 2001:db8:fffa::1 GigabitEthernet10/0/0: mtu:9000 next:6 flags:[] 5254001311005254001310000800
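One way to narrow down where the preferred source gets lost (my suggestion, not from the original report) is to watch the netlink messages themselves while re-installing a route; ip monitor prints each RTM_NEWROUTE including its preferred source, so if src 100.100.1.1 shows up here but not in VPP's FIB, linux-cp is dropping the RTA_PREFSRC attribute:

# leave a monitor running in the dataplane netns ...
root@vpp-vm1:~# ip -n dataplane monitor route &
# ... and re-install one of the routes to trigger an RTM_NEWROUTE
root@vpp-vm1:~# ip -n dataplane route replace 100.100.1.3/32 via inet6 2001:db8:fffa::1 src 100.100.1.1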
When I try to reach the IPv4 addresses from the linux-cp side, it breaks:
root@vpp-vm1:~# ping 100.100.1.3 -c3
PING 100.100.1.3 (100.100.1.3) 56(84) bytes of data.
--- 100.100.1.3 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2031ms
root@vpp-vm1:~# ping 100.100.1.2 -c3
PING 100.100.1.2 (100.100.1.2) 56(84) bytes of data.
--- 100.100.1.2 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2050ms
root@vpp-vm1:~# nc 100.100.1.3 179
asdfmovie
**nothing; also no traffic on vm3, where I'd expect to see packets on the incoming tap interface**
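To localize the drop (editor's suggestion, not part of the original report), one can capture on vm3 both at the VPP side and on the tap; the pcap trace syntax below is assumed per recent VPP releases:

# does the TCP SYN arrive on the wire-facing interface inside VPP?
root@vpp-vm3:~# vppctl pcap trace rx max 100 intfc GigabitEthernet10/0/0 file rx.pcap
# ... and does anything get punted to the tap in the dataplane netns?
root@vpp-vm3:~# ip netns exec dataplane tcpdump -ni e0 port 179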