Type: Bug
Resolution: Unresolved
Priority: Undefined
Affects Version: 4.18.0
Sprint: SDN Sprint 265, SDN Sprint 266
Description of problem:
A network policy with an empty namespaceSelector ({}) does not allow traffic from all of the namespaces sharing the same user-defined network; ingress from a peer namespace on the UDN is still dropped.
Version-Release number of selected component (if applicable):
build 4.18.0-0.nightly, openshift/api#2127, openshift/ovn-kubernetes#2413
4.18.0
How reproducible:
Always
Steps to Reproduce:
1. Create two namespaces, a1 and a2.
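For completeness, the namespaces can be created with plain oc commands, e.g.:
oc create namespace a1
oc create namespace a2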
2. Create a user-defined network shared across the two namespaces by creating a NetworkAttachmentDefinition (NAD) in each namespace that references the same network name:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network-nad
  namespace: a1
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.152.0.0/16",
      "mtu": 1300,
      "netAttachDefName": "a1/l2-network-nad",
      "role": "primary"
    }
NAD in the second namespace:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network-nad
  namespace: a2
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.152.0.0/16",
      "mtu": 1300,
      "netAttachDefName": "a2/l2-network-nad",
      "role": "primary"
    }
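Assuming the two NAD manifests above are saved as nad-a1.yaml and nad-a2.yaml (placeholder filenames), they can be applied with:
oc apply -f nad-a1.yaml
oc apply -f nad-a2.yaml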
3. Create pods in both namespaces (each pod runs an HTTP server on port 8080, which is used by the connectivity test below).
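The exact pod spec is not included in this report; a minimal sketch that matches the observed pod names (test-rc-*) and the HTTP test on port 8080 could look like the following. The image and its args are assumptions; any HTTP server listening on port 8080 will do.
apiVersion: v1
kind: ReplicationController
metadata:
  name: test-rc
  namespace: a1   # repeat with namespace: a2 for the second namespace
spec:
  replicas: 2
  selector:
    name: test-rc
  template:
    metadata:
      labels:
        name: test-rc
    spec:
      containers:
      - name: test-rc
        image: registry.k8s.io/e2e-test-images/agnhost:2.45   # assumed generic HTTP test image
        args: ["netexec", "--http-port=8080"]                 # serves HTTP on 8080
        ports:
        - containerPort: 8080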
oc -n a1 get pods
NAME            READY   STATUS    RESTARTS   AGE
test-rc-vdrbg   1/1     Running   0          12s
test-rc-wc5tz   1/1     Running   0          12s
oc -n a1 exec -it test-rc-vdrbg -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if66: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1360 qdisc noqueue state UP group default
    link/ether 0a:58:0a:80:02:3a brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.128.2.58/23 brd 10.128.3.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::858:aff:fe80:23a/64 scope link
       valid_lft forever preferred_lft forever
3: ovn-udn1@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc noqueue state UP group default
    link/ether 0a:58:0a:98:00:06 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.152.0.6/16 brd 10.152.255.255 scope global ovn-udn1
       valid_lft forever preferred_lft forever
    inet6 fe80::858:aff:fe98:6/64 scope link
       valid_lft forever preferred_lft forever
oc -n a2 get pods
NAME            READY   STATUS    RESTARTS   AGE
test-rc-c2qqh   1/1     Running   0          33s
test-rc-sp8pg   1/1     Running   0          33s
oc -n a2 exec -it test-rc-c2qqh -- ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if28: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1360 qdisc noqueue state UP group default
    link/ether 0a:58:0a:81:02:13 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.129.2.19/23 brd 10.129.3.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::858:aff:fe81:213/64 scope link
       valid_lft forever preferred_lft forever
3: ovn-udn1@if29: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1300 qdisc noqueue state UP group default
    link/ether 0a:58:0a:98:00:0b brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.152.0.11/16 brd 10.152.255.255 scope global ovn-udn1
       valid_lft forever preferred_lft forever
    inet6 fe80::858:aff:fe98:b/64 scope link
       valid_lft forever preferred_lft forever
4. Test traffic before creating network policies in namespace a1, from a pod in a2 to the UDN IP (10.152.0.6) of a pod in a1:
oc -n a2 exec -it test-rc-c2qqh -- curl -I 10.152.0.6:8080 --connect-timeout 5
HTTP/1.1 200 OK
X-Request-Port: 8080
Date: Mon, 20 Jan 2025 14:39:03 GMT
Content-Length: 17
Content-Type: text/plain; charset=utf-8
5. Create network policies in a1
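Assuming the two policy manifests shown below are saved as default-deny-ingress.yaml and allow-from-all-namespaces.yaml (placeholder filenames), they can be applied with:
oc -n a1 apply -f default-deny-ingress.yaml
oc -n a1 apply -f allow-from-all-namespaces.yaml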
oc -n a1 get networkpolicy
NAME                        POD-SELECTOR   AGE
allow-from-all-namespaces   <none>         8m52s
default-deny-ingress        <none>         10m
oc -n a1 get networkpolicy default-deny-ingress -oyaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-ingress
namespace: a1
spec:
podSelector: {}
policyTypes:
- Ingress
oc -n a1 get networkpolicy allow-from-all-namespaces -oyaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-all-namespaces
  namespace: a1
spec:
  ingress:
  - from:
    - namespaceSelector: {}
    ports:
    - port: 8080
      protocol: TCP
  podSelector: {}
  policyTypes:
  - Ingress
Actual results:
The incoming request from a pod in a2 to a pod in a1 over the UDN fails:
oc -n a2 exec -it test-rc-c2qqh -- curl -I 10.152.0.6:8080 --connect-timeout 5
curl: (28) Connection timeout after 5001 ms
command terminated with exit code 28
Expected results:
The curl request should succeed, since the allow-from-all-namespaces policy allows ingress on TCP port 8080 from all namespaces via namespaceSelector: {}.
Additional info:
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
- customer issue / SD
- internal RedHat testing failure
If it is an internal RedHat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen on other platforms (e.g. aws, azure, gcp, baremetal, etc.)? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue,
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
- Please provide the UTC timestamp of the networking outage window from the must-gather
- Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
- For guidance on using this template please see
OCPBUGS Template Training for Networking components