OCPBUGS-60213

Rollback failure of offline SDN to OVN migration on OCP with 120 worker nodes


      Description of problem:

      Version-Release number of selected component (if applicable):

      How reproducible:

      Steps to Reproduce:

      1. Deploy OCP with 120 worker nodes and create a large-scale workload using kube-burner-ocp (see the example sketch after these steps).

      2. Execute the offline migration using the Ansible tool; after the migration is done, execute the rollback:

      1. Install the prerequisites and the collection:
      pip3 install ansible-core ansible-lint jmespath jq
      ansible-galaxy collection install network.offline_migration_sdn_to_ovnk

      2. Clone the code and run make install. After modifying the code, run make lint so the changes take effect.

      3. Execute the migration and then the rollback:
      export CURRENT_TIME=`date +%Y%m%d%H%M%S`
      export ANSIBLE_LOG_PATH=./migration-${CURRENT_TIME}-120.log
      nohup ansible-playbook -v playbooks/playbook-migration.yml -vvv&
      
      export CURRENT_TIME=`date +%Y%m%d%H%M%S`
      export ANSIBLE_LOG_PATH=./rollback-${CURRENT_TIME}-120.log
      nohup ansible-playbook -v playbooks/playbook-rollback.yml&
       

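      The exact kube-burner-ocp workload and scale used for this run are not recorded in this report; as a minimal illustration only, a large-scale workload on a 120-worker cluster could be created with the cluster-density-v2 workload that ships with kube-burner-ocp (the iteration count below is an assumed value, not taken from this report):

      # Assumed example, not the recorded reproduction command:
      # cluster-density-v2 sized for a 120-worker cluster.
      kube-burner-ocp cluster-density-v2 --iterations=480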

      Actual results:

      The rollback failed:

      2025-08-04 02:15:13,165 p=3309251 u=ocpadmin n=ansible INFO| TASK [network.offline_migration_sdn_to_ovnk.rollback : Wait for Multus pods to restart] ***
      2025-08-04 02:15:13,883 p=3309251 u=ocpadmin n=ansible INFO| ok: [localhost] => {"changed": false, "msg": "Multus pods restarted successfully."}
      2025-08-04 02:15:13,889 p=3309251 u=ocpadmin n=ansible INFO| PLAY [Reboot nodes] ************************************************************
      2025-08-04 02:15:13,892 p=3309251 u=ocpadmin n=ansible INFO| TASK [network.offline_migration_sdn_to_ovnk.reboot_nodes : Reboot master nodes] ***
      2025-08-04 03:27:36,886 p=3309251 u=ocpadmin n=ansible WARNING| [WARNING]: Retrying in 3 seconds due to error: timed out waiting for the
      condition on nodes/ip-10-0-10-110.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-10-41.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-11-185.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-11-30.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-12-123.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-13-100.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-13-174.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-13-98.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-14-161.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-15-163.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-15-51.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-15-67.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-17-71.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-18-145.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-18-159.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-2-84.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-20-140.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-22-72.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-23-150.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-23-77.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-25-6.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-25-65.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-25-77.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-26-246.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-27-207.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-28-64.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-29-127.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-29-23.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-3-131.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-30-214.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-31-148.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-32-192.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-32-37.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-34-244.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-36-141.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-38-238.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-39-120.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-39-139.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-4-181.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-40-133.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-41-44.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-44-84.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-44-85.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-45-104.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-46-196.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-46-63.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-46-65.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-46-7.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-48-202.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-48-215.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-49-19.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-49-191.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-49-220.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-5-33.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-50-36.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-50-83.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-51-51.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-51-66.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-51-74.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-52-175.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-52-8.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-53-167.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-53-199.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-53-203.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-54-149.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-54-191.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-55-250.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-57-148.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-57-32.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-58-40.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-59-229.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-59-47.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-6-241.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-61-170.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-62-107.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-62-194.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-64-177.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-65-125.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-65-201.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-66-245.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-67-190.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-67-60.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-68-140.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-68-164.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-69-247.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-70-237.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-70-6.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-72-83.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-73-102.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-74-34.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-74-39.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-75-122.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-76-137.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-76-160.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-76-8.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-77-216.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-77-222.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-77-74.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-78-129.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-79-52.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-8-245.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-8-84.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-80-108.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-80-55.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-81-195.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-83-104.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-84-252.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-85-167.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-85-86.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-86-0.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-87-138.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-87-172.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-89-196.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-9-20.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-90-9.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-92-144.us-east-2.compute.internal timed out waiting
      for the condition on nodes/ip-10-0-92-188.us-east-2.compute.internal timed out
      waiting for the condition on nodes/ip-10-0-92-60.us-east-2.compute.internal
      timed out waiting for the condition on nodes/ip-10-0-93-226.us-
      east-2.compute.internal timed out waiting for the condition on
      nodes/ip-10-0-93-8.us-east-2.compute.internal timed out waiting for the
      condition on nodes/ip-10-0-95-124.us-east-2.compute.internal
      2025-08-04 03:27:36,930 p=3309251 u=ocpadmin n=ansible INFO| fatal: [localhost]: FAILED! => {"changed": false, "msg": "❌ Nodes did not become ready within the timeout period."}
      2025-08-04 03:27:36,930 p=3309251 u=ocpadmin n=ansible INFO| PLAY RECAP *********************************************************************
      2025-08-04 03:27:36,930 p=3309251 u=ocpadmin n=ansible INFO| localhost                  : ok=22   changed=5    unreachable=0    failed=1    skipped=7    rescued=0    ignored=0 
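      The "timed out waiting for the condition on nodes/..." text above matches what oc wait prints when nodes fail to reach Ready before its timeout expires, which suggests the reboot task's wait ran out while many of the 120 workers were still returning to service. A minimal sketch for inspecting the cluster by hand after such a failure (the 30m timeout is an assumed value, not necessarily what the role uses):

      # List nodes that are not currently Ready (column 2 is STATUS).
      oc get nodes --no-headers | awk '$2 != "Ready"'

      # Check whether the machine config pools have finished rolling out.
      oc get mcp

      # Repeat the readiness wait manually; --timeout=30m is an assumption
      # for illustration, not the role's configured value.
      oc wait node --all --for=condition=Ready --timeout=30m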

      Expected results:

      The rollback should succeed.
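      A successful rollback should leave the cluster on OpenShiftSDN with all nodes Ready and all cluster operators healthy. A minimal verification sketch using standard oc commands (not taken from the playbook):

      # After a successful rollback the CNI should report OpenShiftSDN.
      oc get network.config/cluster -o jsonpath='{.status.networkType}'

      # All nodes should be Ready and no cluster operator Degraded.
      oc get nodes
      oc get co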

      Additional info:

      Please fill in the following template while reporting a bug and provide as much relevant information as possible. Doing so will give us the best chance to find a prompt resolution.

      Affected Platforms:

      Is it an

      1. internal CI failure
      2. customer issue / SD
      3. internal Red Hat testing failure

      If it is an internal Red Hat testing failure:

      • Please share a kubeconfig or creds to a live cluster for the assignee to debug/troubleshoot along with reproducer steps (especially if it's a telco use case like ICNI, secondary bridges or BM+kubevirt).

      If it is a CI failure:

      • Did it happen in different CI lanes? If so please provide links to multiple failures with the same error instance
      • Did it happen in both sdn and ovn jobs? If so please provide links to multiple failures with the same error instance
      • Did it happen in other platforms (e.g. aws, azure, gcp, baremetal etc) ? If so please provide links to multiple failures with the same error instance
      • When did the failure start happening? Please provide the UTC timestamp of the networking outage window from a sample failure run
      • If it's a connectivity issue,
      • What is the srcNode, srcIP and srcNamespace and srcPodName?
      • What is the dstNode, dstIP and dstNamespace and dstPodName?
      • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)

      If it is a customer / SD issue:

      • Provide enough information in the bug description that Engineering doesn’t need to read the entire case history.
      • Don’t presume that Engineering has access to Salesforce.
      • Do presume that Engineering will access attachments through supportshell.
      • Describe what each relevant attachment is intended to demonstrate (failed pods, log errors, OVS issues, etc).
      • Referring to the attached must-gather, sosreport or other attachment, please provide the following details:
        • If the issue is in a customer namespace then provide a namespace inspect.
        • If it is a connectivity issue:
          • What is the srcNode, srcNamespace, srcPodName and srcPodIP?
          • What is the dstNode, dstNamespace, dstPodName and dstPodIP?
          • What is the traffic path? (examples: pod2pod? pod2external?, pod2svc? pod2Node? etc)
          • Please provide the UTC timestamp networking outage window from must-gather
          • Please provide tcpdump pcaps taken during the outage filtered based on the above provided src/dst IPs
        • If it is not a connectivity issue:
          • Describe the steps taken so far to analyze the logs from networking components (cluster-network-operator, OVNK, SDN, openvswitch, ovs-configure etc) and the actual component where the issue was seen based on the attached must-gather. Please attach snippets of relevant logs around the window when problem has happened if any.
      • When showing the results from commands, include the entire command in the output.  
      • For OCPBUGS in which the issue has been identified, label with “sbr-triaged”
      • For OCPBUGS in which the issue has not been identified and needs Engineering help for root cause, label with “sbr-untriaged”
      • Do not set the priority, that is owned by Engineering and will be set when the bug is evaluated
      • Note: bugs that do not meet these minimum standards will be closed with label “SDN-Jira-template”
      • For guidance on using this template please see
        OCPBUGS Template Training for Networking components
