Type: Bug
Resolution: Unresolved
Affects Version(s): 4.18.0, 4.19.0
Severity: Critical
Description of problem:
On a dual-stack cluster, the IPv6 BGP routes for the default network subnets are advertised to the external router, but no IPv6 BGP routes are installed on the workers themselves.
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Set up a dual-stack cluster and an external router.
2. Enable frr-k8s in the CNO (see the example patch below).
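For reference, enabling frr-k8s through the CNO should look roughly like the patch below; the additionalRoutingCapabilities and routeAdvertisements field names are taken from the network.operator API as of 4.18 and should be verified against the cluster's version:
$ oc patch network.operator cluster --type=merge -p \
  '{"spec":{"additionalRoutingCapabilities":{"providers":["FRR"]},
    "defaultNetwork":{"ovnKubernetesConfig":{"routeAdvertisements":"Enabled"}}}}'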
3. Create an FRRConfiguration as below:
apiVersion: frrk8s.metallb.io/v1beta1
kind: FRRConfiguration
metadata:
  name: receive-ipv6
  namespace: openshift-frr-k8s
spec:
  bgp:
    routers:
    - asn: 64512
      neighbors:
      - address: 192.168.111.1
        asn: 64512
        toReceive:
          allowed:
            mode: all
      - address: fd2e:6f44:5dd8:c956::1
        asn: 64512
        toReceive:
          allowed:
            mode: all
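Before checking routes, it is worth confirming that both the IPv4 and the IPv6 BGP sessions reach Established. One way is to query vtysh inside one of the frr-k8s pods; the app=frr-k8s label and the frr container name below are assumptions and may differ:
$ FRR_POD=$(oc -n openshift-frr-k8s get pods -l app=frr-k8s -o name | head -1)
$ oc -n openshift-frr-k8s exec "$FRR_POD" -c frr -- vtysh -c "show bgp summary"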
4. Create a RouteAdvertisements resource as below:
apiVersion: k8s.ovn.org/v1
kind: RouteAdvertisements
metadata:
  name: default
spec:
  networkSelector:
    matchLabels:
      k8s.ovn.org/default-network: ""
  advertisements:
  - "PodNetwork"
  - "EgressIP"
5. The IPv6 BGP routes for the pod subnets are visible on the external router:
$ ip -6 route | grep bgp
fd01::/64 nhid 1999 via fd2e:6f44:5dd8:c956::15 dev corebm proto bgp metric 20 pref medium
fd01:0:0:1::/64 nhid 1995 via fd2e:6f44:5dd8:c956::14 dev corebm proto bgp metric 20 pref medium
fd01:0:0:2::/64 nhid 2003 via fd2e:6f44:5dd8:c956::16 dev corebm proto bgp metric 20 pref medium
fd01:0:0:3::/64 nhid 2011 via fd2e:6f44:5dd8:c956::18 dev corebm proto bgp metric 20 pref medium
fd01:0:0:4::/64 nhid 2006 via fd2e:6f44:5dd8:c956::17 dev corebm proto bgp metric 20 pref medium
fd01:0:0:5::/64 nhid 1990 via fd2e:6f44:5dd8:c956::19 dev corebm proto bgp metric 20 pref medium
$ ip route | grep bgp
10.128.0.0/23 nhid 1998 via 192.168.111.21 dev corebm proto bgp metric 20
10.128.2.0/23 nhid 2007 via 192.168.111.23 dev corebm proto bgp metric 20
10.129.0.0/23 nhid 1994 via 192.168.111.20 dev corebm proto bgp metric 20
10.129.2.0/23 nhid 1989 via 192.168.111.25 dev corebm proto bgp metric 20
10.130.0.0/23 nhid 2002 via 192.168.111.22 dev corebm proto bgp metric 20
10.131.0.0/23 nhid 2010 via 192.168.111.24 dev corebm proto bgp metric 20
11.131.100.100 nhid 1989 via 192.168.111.25 dev corebm proto bgp metric 20
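If the external router runs FRR, the per-address-family BGP tables can additionally confirm that the fd01:: prefixes were learned over the IPv6 session (assuming vtysh is available on the router):
$ vtysh -c "show bgp ipv6 unicast summary"
$ vtysh -c "show bgp ipv6 unicast"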
6. But on each worker there are only IPv4 BGP routes; the per-node IPv6 pod-subnet routes (fd01:...) are missing:
$ ssh -i ~/.ssh/openshift-qe.pem -o StrictHostKeyChecking=no core@192.168.111.20 ip -6 route | grep bgp
2001:db8:: nhid 117 via fe80::5054:ff:febc:a094 dev br-ex proto bgp metric 20 pref medium
$ ssh -i ~/.ssh/openshift-qe.pem -o StrictHostKeyChecking=no core@192.168.111.20 ip route | grep bgp
10.128.0.0/23 nhid 3387 via 192.168.111.21 dev br-ex proto bgp metric 20
10.128.2.0/23 nhid 3391 via 192.168.111.23 dev br-ex proto bgp metric 20
10.129.2.0/23 nhid 3384 via 192.168.111.25 dev br-ex proto bgp metric 20
10.130.0.0/23 nhid 3389 via 192.168.111.22 dev br-ex proto bgp metric 20
10.131.0.0/23 nhid 3393 via 192.168.111.24 dev br-ex proto bgp metric 20
11.131.100.100 nhid 3384 via 192.168.111.25 dev br-ex proto bgp metric 20
192.168.1.0/24 nhid 77 via 192.168.111.1 dev br-ex proto bgp metric 20
192.169.1.1 nhid 77 via 192.168.111.1 dev br-ex proto bgp metric 20
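To narrow down whether the worker's FRR instance never receives the IPv6 prefixes, or receives them but fails to install them into the kernel via zebra, the frr-k8s pod on the affected node can be inspected. As above, the label, container name and <worker-node-name> placeholder are assumptions:
$ FRR_POD=$(oc -n openshift-frr-k8s get pods -l app=frr-k8s \
    --field-selector spec.nodeName=<worker-node-name> -o name | head -1)
$ oc -n openshift-frr-k8s exec "$FRR_POD" -c frr -- vtysh -c "show bgp ipv6 unicast"
$ oc -n openshift-frr-k8s exec "$FRR_POD" -c frr -- vtysh -c "show ipv6 route bgp"
If the prefixes show up in the BGP table but not in the kernel, the problem is on the install side; if they are absent there as well, the advertisements are not reaching the worker at all.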
Actual results:
Only IPv4 BGP routes for the other nodes' pod subnets are installed on the workers; no IPv6 BGP routes are learned.
Expected results:
IPv6 BGP routes for the other nodes' pod subnets should be installed on each worker, matching the IPv4 behavior.
Additional info:
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an:
- internal CI failure
- customer issue / SD
- internal Red Hat testing failure
If it is an internal Red Hat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen in other platforms (e.g. aws, azure, gcp, baremetal, etc.)? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue:
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn't need to read the entire case history.
- Don't presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc.).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2Node, etc.)
- Please provide the UTC timestamp of the networking outage window from the must-gather
- Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure, etc.) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with "sbr-triaged"
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with "sbr-untriaged"
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label "SDN-Jira-template"
- For guidance on using this template please see
OCPBUGS Template Training for Networking components