- Bug
- Resolution: Duplicate
- Major
- None
- 4.16.z
- None
- Critical
- None
- False
Description of problem:
This issue is a clone of OCPBUGS-36951 (https://issues.redhat.com/browse/OCPBUGS-36951). The reported problem is that the ExternalIP is not accessible on a 4.16 OVN-Kubernetes cluster.
Version-Release number of selected component (if applicable): v4.16
How reproducible: Always
Steps to Reproduce:
The configuration was done in two namespaces, policy-test-a and policy-test-b. The two namespaces are identical except for the httpd pod running in policy-test-a.
root@localhost:~# oc project policy-test-b
Now using project "policy-test-b" on server "https://api.minilab3.plovdiv.eu:6443".
root@localhost:~# oc get pods
No resources found in policy-test-b namespace.
root@localhost:~# oc project policy-test-a
Now using project "policy-test-a" on server "https://api.minilab3.plovdiv.eu:6443".
root@localhost:~# oc get pods
NAME                       READY   STATUS    RESTARTS   AGE
httpd-24-68ddf7749-5mpwn   1/1     Running   0          6d21h
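The Deployment manifest itself is not included in this report; the following is a minimal sketch that should produce an equivalent pod. The name and the app=httpd-24 label are inferred from the pod name above and the Service selector shown later; the image is an assumption (any httpd image listening on 8080 should do).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-24
  namespace: policy-test-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd-24
  template:
    metadata:
      labels:
        app: httpd-24
    spec:
      containers:
      - name: httpd-24
        image: registry.access.redhat.com/ubi8/httpd-24   # assumed image; serves on 8080
        ports:
        - containerPort: 8080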
Both namespaces have the same network policies applied:
root@localhost:~# oc project policy-test-a
Already on project "policy-test-a" on server "https://api.minilab3.plovdiv.eu:6443".
root@localhost:~# oc get networkpolicies
NAME                                 POD-SELECTOR   AGE
allow-from-kube-apiserver-operator   <none>         19d
allow-from-openshift-ingress         <none>         19d
allow-from-openshift-monitoring      <none>         19d
allow-same-namespace                 <none>         19d
root@localhost:~# oc project policy-test-b
Now using project "policy-test-b" on server "https://api.minilab3.plovdiv.eu:6443".
root@localhost:~# oc get networkpolicies
NAME                                 POD-SELECTOR   AGE
allow-from-kube-apiserver-operator   <none>         21h
allow-from-openshift-ingress         <none>         21h
allow-from-openshift-monitoring      <none>         21h
allow-same-namespace                 <none>         21h
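For reference, a sketch of two of the four policies as they appear in the multitenant network policy documentation (step 1 below); the monitoring and kube-apiserver-operator policies follow the same namespaceSelector pattern. This is reproduced from the documentation page linked below, so verify the exact labels against it.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-openshift-ingress
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          network.openshift.io/policy-group: ingress
  policyTypes:
  - Ingress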
MetalLB in L2 mode is used to implement the ExternalIP.
root@localhost:~# oc project metallb-system
Now using project "metallb-system" on server "https://api.minilab3.plovdiv.eu:6443".
root@localhost:~# oc get l2advertisement
NAME             IPADDRESSPOOLS   IPADDRESSPOOL SELECTORS                                                     INTERFACES
l2-adv-sample1                    [{"matchExpressions":[{"key":"zone","operator":"In","values":["all"]}]}]
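The IPAddressPool is not shown in the report. A plausible sketch follows; the pool name and the zone=all label are taken from the Service annotations and the selector above, while the address range is an assumption (it only needs to contain the allocated IP).

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ip-addresspool-sample1
  namespace: metallb-system
  labels:
    zone: all
spec:
  addresses:
  - 192.168.88.81-192.168.88.90   # assumed range; contains the observed ExternalIP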
The following are the YAML files/configuration used for testing:
1. Copied and applied the policy YAMLs directly from the documentation (see the sketch above): https://docs.openshift.com/container-platform/4.12/networking/network_policy/multitenant-network-policy.html#multitenant-network-policy
2. The L2Advertisement:
root@localhost:~# oc describe l2advertisement
Name:         l2-adv-sample1
Namespace:    metallb-system
Labels:       <none>
Annotations:  <none>
API Version:  metallb.io/v1beta1
Kind:         L2Advertisement
Metadata:
  Creation Timestamp:  2024-10-25T08:25:40Z
  Generation:          1
  Managed Fields:
    API Version:  metallb.io/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:ipAddressPoolSelectors:
    Manager:         Mozilla
    Operation:       Update
    Time:            2024-10-25T08:25:40Z
  Resource Version:  11128666
  UID:               a0f45a3c-f927-428a-87b7-e9a72cacab80
Spec:
  Ip Address Pool Selectors:
    Match Expressions:
      Key:       zone
      Operator:  In
      Values:
        all
Events:  <none>
3. The service YAML (the NodePort is irrelevant); a reconstructed manifest sketch follows the describe output below:
root@localhost:~# oc project policy-test-a
Now using project "policy-test-a" on server "https://api.minilab3.plovdiv.eu:6443".
root@localhost:~# oc describe svc/httpd-24
Name:                     httpd-24
Namespace:                policy-test-a
Labels:                   zone=all
Annotations:              metallb.universe.tf/address-pool: ip-addresspool-sample1
                          metallb.universe.tf/ip-allocated-from-pool: ip-addresspool-sample1
Selector:                 app=httpd-24
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       172.30.221.94
IPs:                      172.30.221.94
IP:                       192.168.88.81
LoadBalancer Ingress:     192.168.88.81
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30707/TCP
Endpoints:                10.131.0.7:8080
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
root@localhost:~# oc get services
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
httpd-24   LoadBalancer   172.30.221.94   192.168.88.81   8080:30707/TCP   19d
root@localhost:~# oc get endpoints
NAME       ENDPOINTS         AGE
httpd-24   10.131.0.7:8080   19d
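Only the describe output is shown above; the following is a sketch of the Service manifest reconstructed from it (not the original YAML; all field values are taken from the output above).

apiVersion: v1
kind: Service
metadata:
  name: httpd-24
  namespace: policy-test-a
  labels:
    zone: all
  annotations:
    metallb.universe.tf/address-pool: ip-addresspool-sample1
spec:
  type: LoadBalancer
  selector:
    app: httpd-24
  ports:
  - port: 8080
    targetPort: 8080
  externalTrafficPolicy: Cluster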
Below are the test results:
Access the service from the policy-test-a namespace:
root@localhost:~# oc project policy-test-a
Already on project "policy-test-a" on server "https://api.minilab3.plovdiv.eu:6443".
root@localhost:~# oc debug -n policy-test-a -- nc -zvi 30 192.168.88.81 8080
W1112 09:29:03.027310  708696 warnings.go:70] would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "debug" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "debug" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "debug" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "debug" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/image-debug ...
Ncat: Version 7.92 ( https://nmap.org/ncat )
Ncat: Connected to 192.168.88.81:8080.
Ncat: 0 bytes sent, 0 bytes received in 0.03 seconds.
Removing debug pod ...
Access the service from the policy-test-b namespace:
root@localhost:~# oc project policy-test-b
Now using project "policy-test-b" on server "https://api.minilab3.plovdiv.eu:6443".
root@localhost:~# oc debug -n policy-test-b -- nc -zvi 30 192.168.88.81 8080
W1112 09:31:45.281199  708878 warnings.go:70] would violate PodSecurity "restricted:v1.24": allowPrivilegeEscalation != false (container "debug" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "debug" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "debug" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "debug" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Starting pod/image-debug ...
Ncat: Version 7.92 ( https://nmap.org/ncat )
Ncat: TIMEOUT.
Removing debug pod ...
Access the service from an external host:
root@localhost:~# nc -zvi 30 192.168.88.81 8080
Ncat: Version 7.92 ( https://nmap.org/ncat )
Ncat: Connected to 192.168.88.81:8080.
Ncat: 0 bytes sent, 0 bytes received in 0.53 seconds.
Actual results: The ExternalIP is not accessible from a pod in a different namespace on the 4.16 OVN-Kubernetes cluster (the connection times out), while the same IP is reachable from a pod in the service's own namespace and from an external host.
Expected results: The ExternalIP should be accessible on the 4.16 OVN-Kubernetes cluster, including from pods in other namespaces.
Additional info:
This issue is a clone of OCPBUGS-36951 (https://issues.redhat.com/browse/OCPBUGS-36951).
In that bug, the engineering team requested that the issue be reproduced on v4.16, and closed the bug with the following reason:
This bug may be legitimate, but it will not be addressed in 4.12. If it can be re-produced in a release in "Full" support according to https://access.redhat.com/support/policy/updates/openshift (currently 4.15 or 4.16) then we'll need a new bug for that release
TAM rhn-support-pbunev managed to reproduce the issue on a v4.16 test cluster; hence this bug was filed.