- Bug
- Resolution: Unresolved
- Normal
- None
- 4.18
- Important
- No
- Rejected
- False
Description of problem:
MultiNetworkPolicy is not enforced for a UDN with topology/layer3 and role/Secondary
Version-Release number of selected component (if applicable):
4.18.0-0.test-2024-12-16-132411-ci-ln-308h4ht-latest built from build 4.18.0-0.nightly,openshift/api#1997
How reproducible:
Always
Steps to Reproduce:
1. Create a UserDefinedNetwork CR with topology/layer3 and role/Secondary
2. Enable useMultiNetworkPolicy in the cluster network operator configuration
3. Create a MultiNetworkPolicy that allows ingress traffic only from one pod's secondary-network IP
4. Create three test pods attached to the UDN (see the sketch after these steps)
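For reference, a minimal sketch of steps 2 and 4. The exact command and pod manifest used in the test are not included in this report, so the pod name, image, and annotation below are illustrative assumptions:

$ oc patch networks.operator.openshift.io cluster --type=merge \
    -p '{"spec":{"useMultiNetworkPolicy":true}}'

# Illustrative test pod attached to the l3-secondary network through the
# k8s.v1.cni.cncf.io/networks annotation (image is a placeholder)
apiVersion: v1
kind: Pod
metadata:
  generateName: multihoming-pod-1-
  namespace: e2e-test-networking-udn-7kzvb
  annotations:
    k8s.v1.cni.cncf.io/networks: l3-secondary
spec:
  containers:
  - name: test
    image: registry.access.redhat.com/ubi9/ubi-minimal:latest
    command: ["sleep", "infinity"]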
Actual results:
All traffic is allowed between the three test pods; the policy is not enforced.
$ oc get UserDefinedNetwork -o yaml
apiVersion: v1
items:
- apiVersion: k8s.ovn.org/v1
  kind: UserDefinedNetwork
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"k8s.ovn.org/v1","kind":"UserDefinedNetwork","metadata":{"annotations":{},"name":"l3-secondary","namespace":"e2e-test-networking-udn-7kzvb"},"spec":{"layer3":{"mtu":9000,"role":"Secondary","subnets":[{"cidr":"20.200.0.0/16","hostSubnet":24}]},"topology":"Layer3"}}
    creationTimestamp: "2024-12-16T18:38:00Z"
    finalizers:
    - k8s.ovn.org/user-defined-network-protection
    generation: 1
    name: l3-secondary
    namespace: e2e-test-networking-udn-7kzvb
    resourceVersion: "135846"
    uid: c25b2eb2-1272-40b7-ab9e-246c9831c8e9
  spec:
    layer3:
      mtu: 9000
      role: Secondary
      subnets:
      - cidr: 20.200.0.0/16
        hostSubnet: 24
    topology: Layer3
  status:
    conditions:
    - lastTransitionTime: "2024-12-16T18:38:00Z"
      message: Network allocation succeeded for all synced nodes.
      reason: NetworkAllocationSucceeded
      status: "True"
      type: NetworkAllocationSucceeded
    - lastTransitionTime: "2024-12-16T18:38:00Z"
      message: NetworkAttachmentDefinition has been created
      reason: NetworkAttachmentDefinitionReady
      status: "True"
      type: NetworkReady
kind: List
metadata:
  resourceVersion: ""

$ oc get networks.operator.openshift.io cluster -o jsonpath={.spec.useMultiNetworkPolicy}
true

$ oc get net-attach-def l3-secondary -o yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  creationTimestamp: "2024-12-16T18:38:00Z"
  finalizers:
  - k8s.ovn.org/user-defined-network-protection
  generation: 1
  labels:
    k8s.ovn.org/user-defined-network: ""
  name: l3-secondary
  namespace: e2e-test-networking-udn-7kzvb
  ownerReferences:
  - apiVersion: k8s.ovn.org/v1
    blockOwnerDeletion: true
    controller: true
    kind: UserDefinedNetwork
    name: l3-secondary
    uid: c25b2eb2-1272-40b7-ab9e-246c9831c8e9
  resourceVersion: "135844"
  uid: f228c845-5a58-49a5-8976-668ffa9c65b0
spec:
  config: '{"cniVersion":"1.0.0","mtu":9000,"name":"e2e-test-networking-udn-7kzvb.l3-secondary","netAttachDefName":"e2e-test-networking-udn-7kzvb/l3-secondary","role":"secondary","subnets":"20.200.0.0/16/24","topology":"layer3","type":"ovn-k8s-cni-overlay"}'

$ oc get multi-networkpolicy multinetworkipblock-dual-cidrs-ingress -o yaml
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  annotations:
    k8s.v1.cni.cncf.io/policy-for: e2e-test-networking-udn-7kzvb/ipblockingress77656
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"k8s.cni.cncf.io/v1beta1","kind":"MultiNetworkPolicy","metadata":{"annotations":{"k8s.v1.cni.cncf.io/policy-for":"e2e-test-networking-udn-7kzvb/ipblockingress77656"},"name":"multinetworkipblock-dual-cidrs-ingress","namespace":"e2e-test-networking-udn-7kzvb"},"spec":{"ingress":[{"from":[{"ipBlock":{"cidr":"20.200.5.4/32"}},{"ipBlock":{"cidr":""}}]}],"podSelector":{},"policyTypes":["Ingress"]}}
  creationTimestamp: "2024-12-16T18:39:14Z"
  generation: 1
  name: multinetworkipblock-dual-cidrs-ingress
  namespace: e2e-test-networking-udn-7kzvb
  resourceVersion: "136394"
  uid: 12e528b3-a583-4604-962f-15b8e34ff465
spec:
  ingress:
  - from:
    - ipBlock:
        cidr: 20.200.5.4/32
    - ipBlock:
        cidr: ""
  podSelector: {}
  policyTypes:
  - Ingress

$ oc get pod
NAME                      READY   STATUS    RESTARTS   AGE
multihoming-pod-1-cm8rp   1/1     Running   0          5m16s
multihoming-pod-2-7p68p   1/1     Running   0          5m2s
multihoming-pod-3-2j9nw   1/1     Running   0          4m48s

$ for podname in `oc get pod -o wide | grep pod | grep Running | awk '{print $1}'`; do echo $podname; oc exec $podname -- ip a | grep 20.200; done
multihoming-pod-1-cm8rp
    inet 20.200.3.4/24 brd 20.200.3.255 scope global net1
multihoming-pod-2-7p68p
    inet 20.200.4.4/24 brd 20.200.4.255 scope global net1
multihoming-pod-3-2j9nw
    inet 20.200.5.4/24 brd 20.200.5.255 scope global net1

$ ./ping-pods-net1-ipv4.sh
Retrieved net1 IPv4 for multihoming-pod-1-cm8rp: 20.200.3.4
Retrieved net1 IPv4 for multihoming-pod-2-7p68p: 20.200.4.4
Retrieved net1 IPv4 for multihoming-pod-3-2j9nw: 20.200.5.4
Pinging from multihoming-pod-1-cm8rp:
 -> Pinging multihoming-pod-2-7p68p (20.200.4.4) from multihoming-pod-1-cm8rp...
PING 20.200.4.4 (20.200.4.4) 56(84) bytes of data.
64 bytes from 20.200.4.4: icmp_seq=1 ttl=62 time=2.48 ms
64 bytes from 20.200.4.4: icmp_seq=2 ttl=62 time=0.998 ms
64 bytes from 20.200.4.4: icmp_seq=3 ttl=62 time=0.905 ms
64 bytes from 20.200.4.4: icmp_seq=4 ttl=62 time=0.988 ms

--- 20.200.4.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.905/1.342/2.477/0.656 ms

 -> Pinging multihoming-pod-3-2j9nw (20.200.5.4) from multihoming-pod-1-cm8rp...
PING 20.200.5.4 (20.200.5.4) 56(84) bytes of data.
64 bytes from 20.200.5.4: icmp_seq=1 ttl=62 time=2.55 ms
64 bytes from 20.200.5.4: icmp_seq=2 ttl=62 time=1.01 ms
64 bytes from 20.200.5.4: icmp_seq=3 ttl=62 time=0.965 ms
64 bytes from 20.200.5.4: icmp_seq=4 ttl=62 time=0.944 ms

--- 20.200.5.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.944/1.366/2.547/0.682 ms

Pinging from multihoming-pod-2-7p68p:
 -> Pinging multihoming-pod-1-cm8rp (20.200.3.4) from multihoming-pod-2-7p68p...
PING 20.200.3.4 (20.200.3.4) 56(84) bytes of data.
64 bytes from 20.200.3.4: icmp_seq=1 ttl=62 time=1.93 ms
64 bytes from 20.200.3.4: icmp_seq=2 ttl=62 time=0.987 ms
64 bytes from 20.200.3.4: icmp_seq=3 ttl=62 time=0.994 ms
64 bytes from 20.200.3.4: icmp_seq=4 ttl=62 time=0.868 ms

--- 20.200.3.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3003ms
rtt min/avg/max/mdev = 0.868/1.195/1.934/0.429 ms

 -> Pinging multihoming-pod-3-2j9nw (20.200.5.4) from multihoming-pod-2-7p68p...
PING 20.200.5.4 (20.200.5.4) 56(84) bytes of data.
64 bytes from 20.200.5.4: icmp_seq=1 ttl=62 time=1.82 ms
64 bytes from 20.200.5.4: icmp_seq=2 ttl=62 time=0.344 ms
64 bytes from 20.200.5.4: icmp_seq=3 ttl=62 time=0.257 ms
64 bytes from 20.200.5.4: icmp_seq=4 ttl=62 time=0.403 ms

--- 20.200.5.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3051ms
rtt min/avg/max/mdev = 0.257/0.706/1.823/0.646 ms

Pinging from multihoming-pod-3-2j9nw:
 -> Pinging multihoming-pod-1-cm8rp (20.200.3.4) from multihoming-pod-3-2j9nw...
PING 20.200.3.4 (20.200.3.4) 56(84) bytes of data.
64 bytes from 20.200.3.4: icmp_seq=1 ttl=62 time=1.78 ms
64 bytes from 20.200.3.4: icmp_seq=2 ttl=62 time=0.971 ms
64 bytes from 20.200.3.4: icmp_seq=3 ttl=62 time=1.01 ms
64 bytes from 20.200.3.4: icmp_seq=4 ttl=62 time=1.06 ms

--- 20.200.3.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3004ms
rtt min/avg/max/mdev = 0.971/1.205/1.781/0.334 ms

 -> Pinging multihoming-pod-2-7p68p (20.200.4.4) from multihoming-pod-3-2j9nw...
PING 20.200.4.4 (20.200.4.4) 56(84) bytes of data.
64 bytes from 20.200.4.4: icmp_seq=1 ttl=62 time=1.11 ms
64 bytes from 20.200.4.4: icmp_seq=2 ttl=62 time=0.334 ms
64 bytes from 20.200.4.4: icmp_seq=3 ttl=62 time=0.341 ms
64 bytes from 20.200.4.4: icmp_seq=4 ttl=62 time=0.354 ms

--- 20.200.4.4 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3036ms
rtt min/avg/max/mdev = 0.334/0.535/1.114/0.333 ms
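Illustrative follow-up (not part of the original report): one way to check whether OVN-Kubernetes programmed any ACLs for this policy is to list the ACL table in the northbound database from an ovnkube-node pod and grep for the policy name. The pod label, container name, and grep target below are assumptions about the standard openshift-ovn-kubernetes layout:

$ NODE_POD=$(oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node -o name | head -1)
$ oc -n openshift-ovn-kubernetes exec $NODE_POD -c nbdb -- ovn-nbctl --no-leader-only list ACL | grep -i multinetworkipblock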
Expected results:
The MultiNetworkPolicy should be enforced so that ingress traffic on the secondary network is allowed only from 20.200.5.4/32; traffic sourced from the other pods should be dropped.
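An illustrative spot check of the expected behavior (commands assumed, not from the report): with the policy enforced, pings sourced from a pod other than 20.200.5.4 should time out, while pings sourced from multihoming-pod-3-2j9nw (20.200.5.4) should still receive replies.

$ oc exec multihoming-pod-1-cm8rp -- ping -c 4 -W 2 20.200.4.4   # expected: 100% packet loss
$ oc exec multihoming-pod-3-2j9nw -- ping -c 4 -W 2 20.200.4.4   # expected: replies, since source 20.200.5.4 matches the allowed ipBlock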
Additional info:
The same MultiNetworkPolicy works correctly for UDN with topology/layer2 and role/Secondary.
The failure occurs only when using the MultiNetworkPolicy for UDN with topology/layer3 and role/Secondary.
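For comparison, a minimal sketch of the Layer2/Secondary UDN that behaves correctly. The exact CR from that test is not included in this report, so the name and subnet below are assumptions:

apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: l2-secondary
  namespace: e2e-test-networking-udn-7kzvb
spec:
  topology: Layer2
  layer2:
    role: Secondary
    mtu: 9000
    subnets:
    - 20.200.0.0/16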