OpenShift Bugs / OCPBUGS-57425

forceWipeDevicesAndDestroyAllData does not remove Ceph cluster information

      Description of problem:

          I'm deploying the RDS Hub configuration of OCP with the flag forceWipeDevicesAndDestroyAllData set to true; however, that does not clean the disks thoroughly enough for ODF to be redeployed on the same disks.
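
          For reference, a rough sketch of where this flag lives, assuming an LSO LocalVolume CR shaped like the Local Storage Operator documentation examples; the storage class name and device path below are placeholders and the actual RDS Hub templates may differ:

              apiVersion: local.storage.openshift.io/v1
              kind: LocalVolume
              metadata:
                name: local-block
                namespace: openshift-local-storage
              spec:
                storageClassDevices:
                  - storageClassName: localblock                 # placeholder storage class consumed by ODF
                    volumeMode: Block
                    forceWipeDevicesAndDestroyAllData: true      # the flag in question
                    devicePaths:
                      - /dev/sdb                                 # placeholder device path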

      Version-Release number of selected component (if applicable):

          OCP/LSO: 4.19

      How reproducible:

          100%

      Steps to Reproduce:

          1. Deploy a cluster and configure ODF
          2. Redeploy the cluster with the same configuration

      Actual results:

       2025-06-12 17:45:46.662320 C | rookcmd: failed to configure devices: failed to get device already provisioned by ceph-volume raw: osd.2: "94334bad-3846-4483-8890-d400951f2a1d" belonging to a different ceph cluster "9a5b2164-c8b1-4b24-8b01-0598ef135098"
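
          For what it's worth, the stale cluster ID can be confirmed directly from the on-disk BlueStore label; a rough sketch, assuming it is run on the affected node from an environment that ships ceph-bluestore-tool, with /dev/sdb as a placeholder for the OSD disk:

              # Print the BlueStore label left on the disk by the previous deployment
              ceph-bluestore-tool show-label --dev /dev/sdb
              # The output includes a "ceph_fsid" field; if it still shows the old
              # cluster FSID (9a5b2164-... above), the wipe did not remove it.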
         

      Expected results:

          Drives are wiped thoroughly enough that ODF can be redeployed on them

      Additional info:

          This is a long-standing issue, and I was hoping that this flag would wipe the drives properly so the system can be redeployed without manual intervention (roughly the clean-up sketched below).
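
          For context, the manual intervention in question is roughly the per-disk clean-up below (along the lines of the usual Rook/ODF teardown guidance); this is a sketch only, to be run on every storage node for every disk previously used by ODF, with /dev/sdb as a placeholder:

              sgdisk --zap-all /dev/sdb                                      # destroy GPT/MBR partition structures
              wipefs --all --force /dev/sdb                                  # clear filesystem/LVM/Ceph signatures
              dd if=/dev/zero of=/dev/sdb bs=1M count=100 oflag=direct,dsync # zero the start of the disk, incl. the BlueStore label
              rm -rf /var/lib/rook                                           # remove leftover Rook/Ceph state on the host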
