Bug
Resolution: Unresolved
Priority: Major
Severity: Critical
Affects Version: 4.14.z
Category: Quality / Stability / Reliability
Description of problem:
During an offline migration from OpenShift SDN to OVN-Kubernetes on a large-scale cluster (240,000 EgressNetworkPolicy objects), the ansible migration playbook fails after the MachineConfig update with "Timeout waiting for Network Cluster Operator to reach PROGRESSING=True", and several cluster operators remain degraded afterwards with no recovery.
Version-Release number of selected component (if applicable):
OCP 4.14.53 (4.14.z)
How reproducible:
Steps to Reproduce:
1. Deploy OCP with the SDN network plugin and a large-scale workload that includes 240,000 EgressNetworkPolicy objects (see the sketch after these steps)
2. Execute the offline migration using the ansible tools:
export CURRENT_TIME=`date +%Y%m%d%H%M%S`
export ANSIBLE_LOG_PATH=./migration-${CURRENT_TIME}.log
nohup ansible-playbook -v playbooks/playbook-migration.yml -vvv &
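A minimal sketch of how such a workload can be generated, assuming one Deny-all EgressNetworkPolicy per test namespace (the namespace and object names here are illustrative, not the exact tooling used for this run):

    # Illustrative only: creates one Deny-all EgressNetworkPolicy per namespace
    # (OpenShift SDN allows at most one EgressNetworkPolicy per project)
    for i in $(seq 1 240000); do
      oc create namespace "enp-test-${i}"
      cat <<EOF | oc apply -f -
    apiVersion: network.openshift.io/v1
    kind: EgressNetworkPolicy
    metadata:
      name: default
      namespace: enp-test-${i}
    spec:
      egress:
      - type: Deny
        to:
          cidrSelector: 0.0.0.0/0
    EOF
    done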
Actual results:
The ansible tool fails after the MachineConfig update completes; the playbook aborts with the error below:
2025-07-30 07:10:48,275 p=3430287 u=ocpadmin n=ansible INFO| task path: /home/ocpadmin/.ansible/collections/ansible_collections/network/offline_migration_sdn_to_ovnk/roles/migration/tasks/main.yml:163
2025-07-30 07:10:48,291 p=3430287 u=ocpadmin n=ansible INFO| <127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: ocpadmin
2025-07-30 07:10:48,291 p=3430287 u=ocpadmin n=ansible INFO| <127.0.0.1> EXEC /bin/sh -c 'echo ~ocpadmin && sleep 0'
2025-07-30 07:10:48,295 p=3430287 u=ocpadmin n=ansible INFO| <127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ocpadmin/.ansible/tmp `"&& mkdir "` echo /home/ocpadmin/.ansible/tmp/ansible-tmp-1753873848.2952876-3798667-119523878415528 `" && echo ansible-tmp-1753873848.2952876-3798667-119523878415528="` echo /home/ocpadmin/.ansible/tmp/ansible-tmp-1753873848.2952876-3798667-119523878415528 `" ) && sleep 0'
2025-07-30 07:10:48,398 p=3430287 u=ocpadmin n=ansible INFO| Using module file /home/ocpadmin/.ansible/collections/ansible_collections/network/offline_migration_sdn_to_ovnk/plugins/modules/wait_for_network_co.py
2025-07-30 07:10:48,399 p=3430287 u=ocpadmin n=ansible INFO| <127.0.0.1> PUT /home/ocpadmin/.ansible/tmp/ansible-local-3430287cee3h2a9/tmprf_o5zoh TO /home/ocpadmin/.ansible/tmp/ansible-tmp-1753873848.2952876-3798667-119523878415528/AnsiballZ_wait_for_network_co.py
2025-07-30 07:10:48,400 p=3430287 u=ocpadmin n=ansible INFO| <127.0.0.1> EXEC /bin/sh -c 'chmod u+rwx /home/ocpadmin/.ansible/tmp/ansible-tmp-1753873848.2952876-3798667-119523878415528/ /home/ocpadmin/.ansible/tmp/ansible-tmp-1753873848.2952876-3798667-119523878415528/AnsiballZ_wait_for_network_co.py && sleep 0'
2025-07-30 07:10:48,405 p=3430287 u=ocpadmin n=ansible INFO| <127.0.0.1> EXEC /bin/sh -c '/usr/local/bin/python3.13 /home/ocpadmin/.ansible/tmp/ansible-tmp-1753873848.2952876-3798667-119523878415528/AnsiballZ_wait_for_network_co.py && sleep 0'
2025-07-30 07:21:19,585 p=3430287 u=ocpadmin n=ansible INFO| <127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/ocpadmin/.ansible/tmp/ansible-tmp-1753873848.2952876-3798667-119523878415528/ > /dev/null 2>&1 && sleep 0'
2025-07-30 07:21:19,593 p=3430287 u=ocpadmin n=ansible INFO| fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "timeout": 600
        }
    },
    "msg": "Timeout waiting for Network Cluster Operator to reach PROGRESSING=True."
}
2025-07-30 07:21:19,595 p=3430287 u=ocpadmin n=ansible INFO| PLAY RECAP *********************************************************************
2025-07-30 07:21:19,595 p=3430287 u=ocpadmin n=ansible INFO| localhost : ok=25 changed=4 unreachable=0 failed=1 skipped=10 rescued=0 ignored=1
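The failing task (wait_for_network_co.py) waits up to 600 seconds for the network clusteroperator to report Progressing=True. An equivalent check can be run by hand to confirm the behavior (a sketch using the standard client; the module's internal logic may differ):

    # Mirrors the module's 600s timeout; exits non-zero if the condition is never met
    oc wait clusteroperator/network --for=condition=Progressing=True --timeout=600s

    # Current conditions for context
    oc get clusteroperator network -o jsonpath='{range .status.conditions[*]}{.type}={.status} {end}'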
The cluster operators become degraded, with no recovery after 10 hours:
Every 2.0s: oc get co    openshift-qe-022.lab.eng.rdu2.redhat.com: Wed Jul 30 18:21:22 2025

NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.14.53   False       False         True       10h     OAuthServerRouteEndpointAccessibleControllerAvailable: Get "https://oauth-openshift.apps.liqcui-oc4sdn2ovn.perfscale.devcluster.openshift.com/healthz": EOF...
baremetal                                  4.14.53   True        False         False      20h
cloud-controller-manager                   4.14.53   True        False         False      20h
cloud-credential                           4.14.53   True        False         False      20h
cluster-autoscaler                         4.14.53   True        False         False      20h
config-operator                            4.14.53   True        False         False      20h
console                                    4.14.53   False       False         False      10h     RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.liqcui-oc4sdn2ovn.perfscale.devcluster.openshift.com): Get "https://console-openshift-console.apps.liqcui-oc4sdn2ovn.perfscale.devcluster.openshift.com": EOF
control-plane-machine-set                  4.14.53   True        False         False      20h
csi-snapshot-controller                    4.14.53   True        False         False      20h
dns                                        4.14.53   True        True          False      20h     DNS "default" reports Progressing=True: "Have 505 available DNS pods, want 507."
etcd                                       4.14.53   True        False         False      20h
image-registry                             4.14.53   True        False         False      20h
ingress                                    4.14.53   True        False         True       16h     The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)
insights                                   4.14.53   True        False         False      20h
kube-apiserver                             4.14.53   True        False         False      20h
kube-controller-manager                    4.14.53   True        False         False      20h
kube-scheduler                             4.14.53   True        False         False      20h
kube-storage-version-migrator              4.14.53   True        False         False      12h
machine-api                                4.14.53   True        False         False      20h
machine-approver                           4.14.53   True        False         False      20h
machine-config                             4.14.53   True        False         False      149m
marketplace                                4.14.53   True        False         False      20h
monitoring                                 4.14.53   True        False         False      14h
network                                    4.14.53   True        True          False      20h     DaemonSet "/openshift-network-diagnostics/network-check-target" is not available (awaiting 2 nodes)
node-tuning                                4.14.53   True        False         False      20h
openshift-apiserver                        4.14.53   True        False         False      10h
openshift-controller-manager               4.14.53   True        False         False      20h
openshift-samples                          4.14.53   True        False         False      20h
operator-lifecycle-manager                 4.14.53   True        False         False      20h
operator-lifecycle-manager-catalog         4.14.53   True        False         False      20h
operator-lifecycle-manager-packageserver   4.14.53   True        False         False      20h
service-ca                                 4.14.53   True        False         False      20h
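For triage, the operators stuck in Degraded/Progressing and the network operator itself can be inspected with standard commands (illustrative, not the exact commands run during this test):

    # Condition details for the operators that are not healthy
    oc describe clusteroperator authentication ingress network dns

    # Network operator logs around the failure window
    oc -n openshift-network-operator logs deployment/network-operator --since=24h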
Expected results:
The offline migration playbook completes successfully, and all cluster operators return to AVAILABLE=True, PROGRESSING=False, DEGRADED=False.
Additional info:
Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.
Affected Platforms:
Is it an
- internal CI failure
- customer issue / SD
- internal Red Hat testing failure
If it is an internal Red Hat testing failure:
- Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).
If it is a CI failure:
- Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
- Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
- Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
- When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
- If it's a connectivity issue,
- What is the srcNode, srcIP, srcNamespace and srcPodName?
- What is the dstNode, dstIP, dstNamespace and dstPodName?
- What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2node, etc.) A sketch for collecting these details follows this list.
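Where applicable, the src/dst details requested above can be collected as follows (an illustrative sketch; substitute the real pod and namespace names):

    # Node, IP and status for the source and destination pods
    oc get pod <srcPodName> -n <srcNamespace> -o wide
    oc get pod <dstPodName> -n <dstNamespace> -o wide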
If it is a customer / SD issue:
- Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
- Don’t presume that Engineering has access to Salesforce.
- Do presume that Engineering will access attachments through supportshell.
- Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
- Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
- If the issue is in a customer namespace then provide a namespace inspect.
- If it is a connectivity issue:
- What is the srcNode, srcNamespace, srcPodName and srcPodIP?
- What is the dstNode, dstNamespace, dstPodName and dstPodIP?
- What is the traffic path? (examples: pod2pod, pod2external, pod2svc, pod2node, etc.)
- Please provide the UTC timestamp of the networking outage window from the must-gather
- Please provide tcpdump pcaps taken during the outage, filtered on the src/dst IPs provided above (a capture sketch follows)
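A minimal capture sketch, assuming node debug access (the duration, output path, and placeholder names are illustrative):

    # Capture traffic between the two pod IPs from the source node
    oc debug node/<srcNode> -- chroot /host \
      timeout 300 tcpdump -nn -i any host <srcPodIP> and host <dstPodIP> -w /var/tmp/outage.pcap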
- If it is not a connectivity issue:
- Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen, based on the attached must-gather. Please attach snippets of relevant logs around the window when the problem happened, if any.
- When showing the results from commands, include the entire command in the output.
- For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
- For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
- Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
- Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
- For guidance on using this template please see
OCPBUGS Template Training for Networking components