- Bug
- Resolution: Unresolved
- Undefined
- None
- 4.18.0
- Critical
- None
- Proposed
- False
Description of problem:
After the ovnkube pod is recreated, the UDN interface of a primary Layer3 UDN cannot be accessed from another node.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Create a namespace and a UserDefinedNetwork CR:
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: l3-primary
spec:
  topology: Layer3
  layer3:
    role: Primary
    joinSubnets:
    - 100.100.100.0/16
    - fd91::/64
    mtu: 1300
    subnets:
    - cidr: "20.100.0.0/16"
      hostSubnet: 24
    - cidr: "2010:100:0::0/48"
      hostSubnet: 64
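For context, the Layer3 topology with hostSubnet: 24 carves a /24 per node out of the 20.100.0.0/16 cluster CIDR, which is why the two pods below land in 20.100.3.0/24 and 20.100.2.0/24. A quick illustrative sketch with Python's ipaddress module (the node-to-subnet ordering is an assumption; OVN-Kubernetes assigns subnets as nodes register):

```python
import ipaddress

# Cluster subnet from the UDN spec above: cidr 20.100.0.0/16, hostSubnet 24.
cluster = ipaddress.ip_network("20.100.0.0/16")

# Layer3 topology hands each node its own /24 out of the /16.
node_subnets = list(cluster.subnets(new_prefix=24))
print(len(node_subnets))  # 256 possible per-node subnets

# The pod addresses seen in the reproduction below fall into
# distinct per-node subnets, confirming the pods sit on different nodes.
for addr in ("20.100.3.4", "20.100.2.4"):
    ip = ipaddress.ip_address(addr)
    owner = next(s for s in node_subnets if ip in s)
    print(addr, "->", owner)
```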
2. Create two test pods on different nodes:
$ oc get pod -n z1
NAME READY STATUS RESTARTS AGE
test-rc-dq2pb 1/1 Running 0 9s
test-rc-l8pwt 1/1 Running 0 9s
$ oc rsh -n z1 test-rc-dq2pb ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
link/ether 0a:58:0a:81:02:0e brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.129.2.14/23 brd 10.129.3.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fd01:0:0:5::e/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::858:aff:fe81:20e/64 scope link
valid_lft forever preferred_lft forever
3: ovn-udn1@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc noqueue state UP group default
link/ether 0a:58:14:64:03:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 20.100.3.4/24 brd 20.100.3.255 scope global ovn-udn1
valid_lft forever preferred_lft forever
inet6 2010:100:0:4::4/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::858:14ff:fe64:304/64 scope link
valid_lft forever preferred_lft forever
$ oc rsh -n z1 test-rc-l8pwt ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0@if27: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default
link/ether 0a:58:0a:80:02:12 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.128.2.18/23 brd 10.128.3.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fd01:0:0:4::12/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::858:aff:fe80:212/64 scope link
valid_lft forever preferred_lft forever
3: ovn-udn1@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc noqueue state UP group default
link/ether 0a:58:14:64:02:04 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 20.100.2.4/24 brd 20.100.2.255 scope global ovn-udn1
valid_lft forever preferred_lft forever
inet6 2010:100:0:3::4/64 scope global
valid_lft forever preferred_lft forever
inet6 fe80::858:14ff:fe64:204/64 scope link
valid_lft forever preferred_lft forever
3. Test access between the two pods over the UDN interface; it works at this point:
$ oc rsh -n z1 test-rc-l8pwt curl 20.100.3.4:8080
Hello OpenShift!
4. Delete the ovnkube pod on the worker node hosting one of the test pods so that it is recreated.
5. Repeat step 3:
$ oc rsh -n z1 test-rc-l8pwt curl 20.100.3.4:8080
^Ccommand terminated with exit code 130
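The curl in step 5 hangs until interrupted (exit code 130). The same reachability check can be scripted with a bounded timeout so the regression fails fast instead of blocking; a minimal sketch (the pod IP and port are the ones from this reproduction and would differ per cluster, and the check must run from a peer pod's network namespace, e.g. via `oc rsh`):

```python
import socket

def udn_reachable(ip: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to ip:port succeeds within timeout."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Before the ovnkube pod restart this returns True from the peer pod;
# after the restart it returns False instead of hanging indefinitely.
print(udn_reachable("20.100.3.4", 8080))
```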
Actual results:
The pod cannot be accessed over its UDN interface after the ovnkube pod is recreated.
Expected results:
The pod should remain reachable over its UDN interface after the ovnkube pod is recreated.
Additional info: