Bug
Resolution: Cannot Reproduce
Critical
4.16, 4.16.0
Quality / Stability / Reliability
False
Description of problem: The SDN to OVN cluster migration failed and was rolled back. The OSD cluster was migrated following the OCP docs. The migration worked for 3 clusters and failed for 1. One of the master nodes is reporting:
Ready False Tue, 11 Mar 2025 15:10:34 +0100 Tue, 11 Mar 2025 14:24:22 +0100 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
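The same condition can also be read straight from the node object (a sketch; the node name is the SchedulingDisabled master listed further below):
$ oc --context app.ci get node ip-10-0-137-108.ec2.internal -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'
# expected to return the same "No CNI configuration file in /etc/kubernetes/cni/net.d/" message while the node is NotReady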
One of the SDN pods is crash-looping with:
F0311 14:31:58.446916 15502 cmd.go:119] Failed to start sdn: node SDN setup failed: error on port vxlan0: "could not add network device vxlan0 to ofproto (File exists)"
This leaves the master node in
Ready,SchedulingDisabled master
and the cluster version reports:
Error while reconciling 4.16.37: authentication, image-registry, machine-config, network, openshift-apiserver has an unknown error: ClusterOperatorsDegraded
$ oc --context app.ci get daemonset -n openshift-multus
NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
multus                          27        27        26      27           26          kubernetes.io/os=linux   4y329d
multus-additional-cni-plugins   27        27        27      27           27          kubernetes.io/os=linux   3y270d
network-metrics-daemon          27        27        26      27           26          kubernetes.io/os=linux   4y149d
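The 26/27 READY count suggests a single multus / network-metrics-daemon pod is not ready. A quick sketch (not taken from the must-gather) to find which pod and node it is:
$ oc --context app.ci -n openshift-multus get pods -o wide | grep -vE 'Running|Completed'
# or list only the pods scheduled on the affected master:
$ oc --context app.ci -n openshift-multus get pods -o wide --field-selector spec.nodeName=ip-10-0-137-108.ec2.internal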
Version-Release number of selected component (if applicable): version 4.16.37
How reproducible:
Steps to Reproduce:
1. Follow the doc https://docs.openshift.com/container-platform/4.16/networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.html#nw-ovn-kubernetes-live-migration-about_migrate-from-openshift-sdn
2. Apply the patch:
oc patch Network.config.openshift.io cluster --type='merge' --patch '{"metadata":{"annotations":{"network.openshift.io/network-type-migration":""}},"spec":{"networkType":"OVNKubernetes"}}'
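To follow the migration kicked off by this patch, the conditions on the Network config object and the machine config pools can be watched (a sketch mirroring the yq usage in step 5; see the linked migration doc for the authoritative commands):
$ oc --context app.ci get network.config.openshift.io cluster -o yaml | yq .status.conditions
$ oc --context app.ci get mcp   # both pools must finish rolling out before the migration can proceed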
3. The network cluster operator reports Degraded:
network 4.16.37 True False True 4y3d Failed to process SDN live migration (MCP is degraded, network type migration cannot proceed). Use 'oc edit network.config.openshift.io cluster' to fix.
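Since the Degraded message in step 3 points at the MCP, the next check is which pool is degraded and which nodes are holding it back (sketch):
$ oc --context app.ci get mcp
$ oc --context app.ci describe mcp master | grep -i -A5 degraded
$ oc --context app.ci get nodes -o wide | grep -E 'NotReady|SchedulingDisabled'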
4. The Network config CR is missing the value in the annotation:
apiVersion: config.openshift.io/v1
kind: Network
metadata:
  annotations:
    network.openshift.io/network-type-migration: ""
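The empty annotation value can be confirmed directly (a sketch using the same yq style as the status dump in step 5):
$ oc --context app.ci get network.config.openshift.io cluster -o yaml | yq '.metadata.annotations["network.openshift.io/network-type-migration"]'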
5. The Network operator status shows the migration marked as failed:
$ oc --context app.ci get Network.operator.openshift.io cluster -oyaml | yq .status
conditions:
  - lastTransitionTime: "2024-02-26T08:40:50Z"
    message: ""
    reason: ""
    status: "False"
    type: ManagementStateDegraded
  - lastTransitionTime: "2025-03-11T12:25:47Z"
    message: Failed to process SDN live migration (MCP is degraded, network type migration cannot proceed). Use 'oc edit network.config.openshift.io cluster' to fix.
    reason: NetworkTypeMigrationFailed
    status: "True"
    type: Degraded
  - lastTransitionTime: "2025-03-11T11:16:03Z"
    message: ""
    reason: ""
    status: "True"
    type: Upgradeable
  - lastTransitionTime: "2025-03-11T11:28:58Z"
    message: ""
    reason: ""
    status: "False"
    type: Progressing
  - lastTransitionTime: "2024-02-26T08:40:50Z"
    message: ""
    reason: ""
    status: "True"
    type: Available
readyReplicas: 0
version: 4.16.37
On the affected master, /etc/kubernetes/cni/net.d/ contains no CNI configuration file, matching the kubelet error above:
sh-5.1# find /etc/kubernetes/cni/net.d/
/etc/kubernetes/cni/net.d/
/etc/kubernetes/cni/net.d/multus.d
/etc/kubernetes/cni/net.d/multus.d/multus.kubeconfig
/etc/kubernetes/cni/net.d/whereabouts.d
/etc/kubernetes/cni/net.d/whereabouts.d/whereabouts.conf
/etc/kubernetes/cni/net.d/whereabouts.d/whereabouts.kubeconfig
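For comparison, the same directory on a healthy node should also contain the CNI config dropped by the network plugin (typically 80-openshift-network.conf for OpenShiftSDN). A sketch to check from outside the node; <healthy-node> is a placeholder:
$ oc --context app.ci debug node/<healthy-node> -- chroot /host ls -l /etc/kubernetes/cni/net.d/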
6. A rollback was performed because the migration failed, following https://docs.openshift.com/container-platform/4.16/networking/ovn_kubernetes_network_provider/rollback-to-openshift-sdn.html#nw-ovn-kubernetes-rollback_rollback-to-openshift-sdn
$ oc --context app.ci get Network.config cluster -o jsonpath='{.status.migration}'
{"networkType":"OpenShiftSDN"}
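The rollback in that doc is driven by a patch of the same annotation with networkType set back to OpenShiftSDN (sketch only; verify the exact command against the linked rollback doc before use):
oc patch Network.config.openshift.io cluster --type='merge' --patch '{"metadata":{"annotations":{"network.openshift.io/network-type-migration":""}},"spec":{"networkType":"OpenShiftSDN"}}'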
authentication   4.18.4   True   False   True   5h33m   OAuthServerConfigObservationDegraded: failed to apply IDP RedHat_Internal_SSO config: couldn't get https://idp.ci.openshift.org/.well-known/openid-configuration: unexpected response status 503
machine-config   4.18.4   True   False   True   209d    Failed to resync 4.18.4 because: error during syncRequiredMachineConfigPools: [context deadline exceeded, error MachineConfigPool worker is not ready, retrying. Status: (pool degraded: true total: 80, ready 61, updated: 61, unavailable: 18)]
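The machine-config message reports 18 unavailable workers. A sketch to see the degraded reason and the nodes blocking the worker pool:
$ oc --context app.ci get mcp worker -o jsonpath='{.status.conditions[?(@.type=="Degraded")].message}'
$ oc --context app.ci get nodes -l node-role.kubernetes.io/worker -o wide | grep -E 'NotReady|SchedulingDisabled'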
oc -n openshift-multus rollout status daemonset/multus
daemon set "multus" successfully rolled out
$ oc --context app.ci get nodes -A | grep Ready,SchedulingDisabled
ip-10-0-128-21.ec2.internal    Ready,SchedulingDisabled   infra,worker   399d     v1.29.14+7cf4c05
ip-10-0-137-108.ec2.internal   Ready,SchedulingDisabled   master         4y329d   v1.29.14+7cf4c05
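If these nodes stay cordoned after the network recovers, they can be uncordoned manually (a cautious sketch; normally the machine-config operator uncordons nodes itself once its update finishes, so only do this if it does not):
$ oc --context app.ci adm uncordon ip-10-0-137-108.ec2.internal
$ oc --context app.ci adm uncordon ip-10-0-128-21.ec2.internal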
The error below is from the SchedulingDisabled master:
Ready False Tue, 11 Mar 2025 15:10:34 +0100 Tue, 11 Mar 2025 14:24:22 +0100 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
$ oc --context app.ci -n openshift-sdn get pods
NAME                   READY   STATUS             RESTARTS         AGE
sdn-2wv98              2/2     Running            0                4m18s
sdn-42dfs              2/2     Running            4                121m
sdn-47rz9              2/2     Running            2                115m
sdn-4mdpd              2/2     Running            4                121m
sdn-56dv8              2/2     Running            4                121m
sdn-6bkwl              2/2     Running            4                121m
sdn-799ks              2/2     Running            4                121m
sdn-7j7pv              2/2     Running            4                121m
sdn-9d4f7              2/2     Running            4                121m
sdn-9nf8v              2/2     Running            4                121m
sdn-9q4fz              2/2     Running            2                115m
sdn-9tvrm              2/2     Running            2                115m
sdn-controller-5mxb7   2/2     Running            2                121m
sdn-controller-lwj6p   2/2     Running            4                121m
sdn-controller-n2tm2   2/2     Running            4                121m
sdn-d8m9d              2/2     Running            5                121m
sdn-gh8g7              2/2     Running            8                121m
sdn-h9nrm              2/2     Running            0                98m
sdn-hgv6w              2/2     Running            0                95m
sdn-j5qqz              2/2     Running            4                121m
sdn-jq8dp              2/2     Running            4                121m
sdn-mt6p4              1/2     CrashLoopBackOff   10 (2m13s ago)   28m
sdn-n8qn4              2/2     Running            4                121m
sdn-nzrnr              2/2     Running            4                121m
sdn-q42mz              2/2     Running            4                121m
sdn-rxxz2              2/2     Running            6                121m
sdn-s5fhn              2/2     Running            4                121m
sdn-v6gk7              2/2     Running            4                121m
sdn-vf2sp              2/2     Running            2                118m
sdn-zfnmg              2/2     Running            4                121m
sdn-zfqxc              2/2     Running            4                121m
Error from the CrashLoopBackOff SDN pod (sdn-mt6p4) above:
F0311 14:31:58.446916 15502 cmd.go:119] Failed to start sdn: node SDN setup failed: error on port vxlan0: "could not add network device vxlan0 to ofproto (File exists)"
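The "File exists" error suggests a stale vxlan0 port left on the SDN OVS bridge (br0) from an earlier run. A sketch to confirm it from the node, assuming the crashing pod runs on the affected master (inspection only; any cleanup such as deleting the port or rebooting should be agreed with Engineering first):
$ oc --context app.ci debug node/ip-10-0-137-108.ec2.internal -- chroot /host ovs-vsctl list-ports br0
# vxlan0 showing up here while openshift-sdn cannot re-create it would match the ofproto "File exists" failure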
Actual results:
The live migration to OVNKubernetes failed on one of the four clusters: one master stayed NotReady (no CNI configuration file in /etc/kubernetes/cni/net.d/), an sdn pod crash-looped on the vxlan0 "File exists" error, and the migration was rolled back to OpenShiftSDN. After the rollback, two nodes remain Ready,SchedulingDisabled and the authentication and machine-config cluster operators are degraded.
Expected results:
The SDN to OVN live migration completes without leaving nodes NotReady, and the rollback (if needed) returns the cluster to a fully healthy state.
Additional info:
https://issues.redhat.com/browse/OHSS-42050
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
If it is an internal RedHat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue,
- What is the srcNode, srcIP and srcNamespace and srcPodName?
- What is the dstNode, dstIP and dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
- Please provide the UTC timestamp networking outage window from must-gather
- Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
- For guidance on using this template please see
OCPBUGS Template Training for Networking components