Bug
Resolution: Won't Do
Major
4.8.z
Quality / Stability / Reliability
Important
Rejected
SDN Sprint 234
Description of problem:
The customer is running pods that have an additional Multus bridge interface configured so the pods can establish connections to external networks.
They have implemented NodePort services (externalTrafficPolicy=Cluster) pointing to these pods, which allows the workloads to be reached from outside the cluster.
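For reference, a minimal sketch of how such a NodePort service might look; the service name, selector label, and port numbers below are assumptions for illustration, not values taken from the customer's environment:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend-nodeport          # hypothetical name
spec:
  type: NodePort
  externalTrafficPolicy: Cluster   # as used by the customer
  selector:
    app: frontend                  # hypothetical label matching the workload pods
  ports:
    - port: 8080                   # hypothetical service port
      targetPort: 8080             # hypothetical container port
      nodePort: 30080              # hypothetical node port
```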
The Multus annotation for one of the affected workloads, shared by the customer:
```yaml
metadata:
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "frontend-egress",
        "ips": ["140.223.56.203/24"],
        "default-route": ["140.223.56.1"]
      }
    ]'
```
The network attachment definition looks like this:
```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: frontend-egress
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "frontend-egress",
      "type": "bridge",
      "bridge": "cbr-egress",
      "capabilities": {"ips": true},
      "forceAddress": true,
      "isGateway": true,
      "isDefaultGateway": true,
      "ipam": {
        "type": "static"
      }
    }
```
Here are the addresses in the pod:
```
$ sudo ip netns exec $netns ip -br addr
lo UNKNOWN 127.0.0.1/8 ::1/128
eth0@if25 UP 172.19.10.5/26 fe80::858:acff:fe13:a05/64
net1@if27 UP 140.223.56.203/24 fe80::1c93:6eff:fec2:4ad6/64
```
Here is the routing table:
```
$ sudo ip netns exec $netns ip route
default via 140.223.56.1 dev net1
140.223.56.0/24 dev net1 proto kernel scope link src 140.223.56.203
172.19.0.0/20 via 172.19.10.1 dev eth0
172.19.10.0/26 dev eth0 proto kernel scope link src 172.19.10.5
192.168.48.0/20 via 172.19.10.1 dev eth0
```
Version-Release number of selected component (if applicable):
OCP 4.8 UPI, CNI = OVNKubernetes with gateway mode = Local
How reproducible:
Very reproducible. It was suggested to raise this as an RFE: https://bugzilla.redhat.com/show_bug.cgi?id=2117791
Steps to Reproduce:
1. Attach a secondary Multus bridge interface to a pod, with a static IP and a default route pointing out that interface (see the pod sketch below).
2. Expose the pod through a NodePort service with externalTrafficPolicy=Cluster.
3. Try to reach the workload through the NodePort from outside the cluster.
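A minimal pod sketch for reproduction, combining the Multus annotation shown above with a label the NodePort service can select; the pod name, label, image, and container port are assumptions for illustration only:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod                 # hypothetical name
  labels:
    app: frontend                    # hypothetical label targeted by the NodePort service
  annotations:
    k8s.v1.cni.cncf.io/networks: '[
      {
        "name": "frontend-egress",
        "ips": ["140.223.56.203/24"],
        "default-route": ["140.223.56.1"]
      }
    ]'
spec:
  containers:
    - name: frontend
      image: registry.example.com/frontend:latest   # hypothetical image
      ports:
        - containerPort: 8080                        # hypothetical port
```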
Actual results:
Incoming NodePort traffic fails to reach the pods because of routing issues in the pod: the default route points out of the Multus (net1) interface, so traffic arriving via eth0 is not answered on the same path.
Expected results:
NodePort traffic should reach the pods even when a secondary Multus interface with its own default route is attached.
Additional info:
NodePort access to these workloads worked fine prior to upgrading to 4.10.20; after the upgrade it stopped working.