Bug
Resolution: Done
Major
4.18.0
Quality / Stability / Reliability
False
No
Rejected
Description of problem: When trying to enable BGP on a singlev6 (single-stack IPv6) cluster, the BGP peer was not established after applying the routeAdvertisements and frrConfiguration resources. The following error appears in the ovnkube-node log:
Failed to reconcile management port ovn-k8s-mp0: could not update nftables rule for management port: /dev/stdin:9:62-69: Error: conflicting protocols specified: ip6 vs. ip
add rule inet ovn-kubernetes mgmtport-snat meta nfproto ipv6 ip saddr fd01:0:0:1::2
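The error indicates the generated nftables rule mixes address families: "meta nfproto ipv6" and the IPv6 source address imply the ip6 family, while the "ip saddr" match selects the IPv4 header, which nft rejects as "conflicting protocols specified: ip6 vs. ip". A family-consistent form of the same match would presumably use "ip6 saddr"; the line below is an illustration built only from the fragment quoted in the log (the remainder of the generated rule is not shown there), not the confirmed fix:
add rule inet ovn-kubernetes mgmtport-snat meta nfproto ipv6 ip6 saddr fd01:0:0:1::2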
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. Install a singlev6 cluster, enable the feature gate, and patch the CNO to apply AdditionalRoutingCapabilities and routeAdvertisements (one possible form of these patches is sketched after these steps)
2. Run demo.sh
3. Check the BGP neighbor and BGP routes
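As referenced in step 1, a sketch of the feature-gate and CNO patches is shown below. The feature-set value and the Network operator fields (additionalRoutingCapabilities.providers, ovnKubernetesConfig.routeAdvertisements) are assumptions based on the BGP route advertisement feature in 4.18 and should be verified against the cluster's API; the exact commands used in this reproduction (including demo.sh) are not recorded here:
oc patch featuregate cluster --type=merge -p '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'
oc patch network.operator.openshift.io cluster --type=merge -p '{"spec":{"additionalRoutingCapabilities":{"providers":["FRR"]},"defaultNetwork":{"ovnKubernetesConfig":{"routeAdvertisements":"Enabled"}}}}'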
[root@sdn-09 ~]# oc rsh -c frr frr-k8s-56gct
sh-5.1# vtysh
Hello, this is FRRouting (version 8.5.3).
Copyright 1996-2005 Kunihiro Ishiguro, et al.
worker-1.offload.openshift-qe.sdn.com# show ipv6 bgp
% Unknown command: show ipv6 bgp
worker-1.offload.openshift-qe.sdn.com# show bgp ipv6
BGP table version is 1, local router ID is 0.0.0.0, vrf id 0
Default local pref 100, local AS 64512
Status codes: s suppressed, d damped, h history, * valid, > best, = multipath,
i internal, r RIB-failure, S Stale, R Removed
Nexthop codes: @NNN nexthop's vrf id, < announce-nh-self
Origin codes: i - IGP, e - EGP, ? - incomplete
RPKI validation codes: V valid, I invalid, N Not found
Network Next Hop Metric LocPrf Weight Path
*> fd01:0:0:4::/64 :: 0 32768 i
Displayed 1 routes and 1 total paths
worker-1.offload.openshift-qe.sdn.com#
worker-1.offload.openshift-qe.sdn.com# show bgp neighbor
BGP neighbor is fd2e:6f44:5dd8:c956::1, remote AS 64512, local AS 64512, internal link
Local Role: undefined
Remote Role: undefined
BGP version 4, remote router ID 0.0.0.0, local router ID 0.0.0.0
BGP state = Active
Last read 03:05:09, Last write never
Hold time is 180 seconds, keepalive interval is 60 seconds
Configured hold time is 180 seconds, keepalive interval is 60 seconds
Configured conditional advertisements interval is 60 seconds
Graceful restart information:
Local GR Mode: Helper*
Remote GR Mode: NotApplicable
R bit: False
N bit: False
Timers:
Configured Restart Time(sec): 120
Received Restart Time(sec): 0
Message statistics:
Inq depth is 0
Outq depth is 0
Sent Rcvd
Opens: 0 0
Notifications: 0 0
Updates: 0 0
Keepalives: 0 0
Route Refresh: 0 0
Capability: 0 0
Total: 0 0
Minimum time between advertisement runs is 0 seconds
For address family: IPv4 Unicast
Not part of any update group
Community attribute sent to this neighbor(all)
Inbound path policy configured
Outbound path policy configured
Route map for incoming advertisements is *fd2e:6f44:5dd8:c956::1-in
Route map for outgoing advertisements is *fd2e:6f44:5dd8:c956::1-out
0 accepted prefixes
For address family: IPv6 Unicast
Not part of any update group
Community attribute sent to this neighbor(all)
Inbound path policy configured
Outbound path policy configured
Route map for incoming advertisements is *fd2e:6f44:5dd8:c956::1-in
Route map for outgoing advertisements is *fd2e:6f44:5dd8:c956::1-out
0 accepted prefixes
Connections established 0; dropped 0
Last reset 03:05:09, Waiting for peer OPEN
Internal BGP neighbor may be up to 255 hops away.
BGP Connect Retry Timer in Seconds: 120
Next connect timer due in 55 seconds
Read thread: off Write thread: off FD used: -1
Actual results: BGP peer was not established.
Expected results: BGP peer should be established.
Additional info:
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
- customer issue / SD
- internal RedHat testing failure
If it is an internal RedHat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue,
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
- Please provide the UTC timestamp of the networking outage window from the must-gather
- Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
- For guidance on using this template please see
OCPBUGS Template Training for Networking components