OpenShift Bugs / OCPBUGS-46605

[GCP] "destroy cluster" gets stuck when additional compute nodes are added without infra_id as their name prefix

    • Sprint: Installer (PB) Sprint 263, Installer (PB) Sprint 265
      Description of problem:

    This is the testing scenario of QE test case OCP-24405: after a successful IPI installation, add an additional compute/worker node whose name does not carry the infra_id prefix. The expectation is that "destroy cluster" deletes the additional compute/worker machine cleanly. In testing, however, "destroy cluster" appears to be unaware of the machine.

      Version-Release number of selected component (if applicable):

          4.18.0-0.nightly-multi-2024-12-17-192034

      How reproducible:

          Always

      Steps to Reproduce:

      1. install an IPI cluster on GCP and make sure it succeeds (see [1])
      2. add the additional compute/worker node, and ensure the node's name doesn't have the cluster infra ID (see [2])
      3. wait for the node ready and all cluster operators available
      4. (optional) scale ingress operator replica into 3 (see [3]), and wait for ingress operator finishing progressing
      5. check the new machine on GCP (see [4])
      6. "destroy cluster" (see [5])     
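
      For illustration, steps 5 and 6 roughly correspond to commands like the following; the infra ID and install directory are placeholders, not values from this report:

      ```
      # Step 5 (sketch): confirm the new machine is visible on GCP and carries the
      # cluster ownership label even though its name lacks the infra_id prefix.
      INFRA_ID=<infra_id>
      gcloud compute instances list \
        --filter="labels.kubernetes-io-cluster-${INFRA_ID}=owned"

      # Step 6 (sketch): destroy the cluster from the original install directory.
      openshift-install destroy cluster --dir <install_dir> --log-level debug
      ```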

      Actual results:

    The additional compute/worker node is not deleted, which in turn appears to leave the k8s firewall-rules / forwarding-rule / target-pool / http-health-check undeleted as well.
      

      Expected results:

          "destroy cluster" should be able to detect the additional compute/worker node by the label "kubernetes-io-cluster-<infra id>: owned" and delete it along with all resources of the cluster.
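
      The expected label-based discovery can be sketched as a simple filter. This is a minimal Python illustration, not the installer's actual code; the instance records and the infra ID `mycluster-abc12` are hypothetical stand-ins for what the GCP Compute API would return:

      ```python
      # Sketch of the ownership check "destroy cluster" is expected to perform:
      # select instances by the "kubernetes-io-cluster-<infra id>: owned" label,
      # regardless of whether the instance name starts with the infra ID.

      def owned_by_cluster(instance, infra_id):
          """Return True if the instance carries the cluster ownership label."""
          return instance.get("labels", {}).get(
              f"kubernetes-io-cluster-{infra_id}") == "owned"

      # Hypothetical API response: two owned workers (one without the infra_id
      # name prefix) and one unrelated VM.
      instances = [
          {"name": "mycluster-abc12-worker-a",
           "labels": {"kubernetes-io-cluster-mycluster-abc12": "owned"}},
          {"name": "extra-worker-0",
           "labels": {"kubernetes-io-cluster-mycluster-abc12": "owned"}},
          {"name": "unrelated-vm", "labels": {}},
      ]

      to_delete = [i["name"] for i in instances
                   if owned_by_cluster(i, "mycluster-abc12")]
      print(to_delete)  # includes "extra-worker-0" despite its name prefix
      ```

      The point of the sketch is that name-prefix matching alone would miss "extra-worker-0", while the label check catches it.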

      Additional info:

    Alternatively, we also tested creating the additional compute/worker machine via a MachineSet YAML (rather than a Machine YAML), and we hit the same issue in that case as well.
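
      For reference, a trimmed MachineSet manifest of that general kind might look like the following; the name, infra ID, project, and zone are placeholders, and the providerSpec is abbreviated:

      ```
      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        name: extra-worker          # deliberately lacks the <infra_id> prefix
        namespace: openshift-machine-api
        labels:
          machine.openshift.io/cluster-api-cluster: <infra_id>
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-machineset: extra-worker
        template:
          metadata:
            labels:
              machine.openshift.io/cluster-api-cluster: <infra_id>
              machine.openshift.io/cluster-api-machineset: extra-worker
          spec:
            providerSpec:
              value:
                apiVersion: machine.openshift.io/v1beta1
                kind: GCPMachineProviderSpec
                machineType: n2-standard-4
                region: us-central1
                zone: us-central1-a
                projectID: <gcp_project>
                # ... remaining GCP provider fields omitted
      ```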

              rh-ee-bbarbach Brent Barbachem
              rhn-support-jiwei Jianli Wei