Type: Bug
Resolution: Unresolved
Priority: Undefined
Affects Version: 4.19.0
Component: Quality / Stability / Reliability
Severity: Important
Description of problem:
I'm deploying the RDS Hub configuration of OCP with the flag forceWipeDevicesAndDestroyAllData set to true; however, the disks are not wiped thoroughly enough for ODF to be redeployed on the same disks.
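For reference, forceWipeDevicesAndDestroyAllData is the device-wipe flag on the Local Storage Operator's LocalVolume CR. A minimal sketch of how it is set (the resource names and device path here are illustrative, not from the affected deployment):

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks                      # illustrative name
  namespace: openshift-local-storage
spec:
  storageClassDevices:
    - storageClassName: localblock       # illustrative name
      volumeMode: Block
      # The flag from this report: LSO wipes the devices before use,
      # but (per this bug) not thoroughly enough for ceph-volume to
      # accept them when ODF is redeployed.
      forceWipeDevicesAndDestroyAllData: true
      devicePaths:
        - /dev/sdb                       # illustrative device path
```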
Version-Release number of selected component (if applicable):
OCP/LSO: 4.19
How reproducible:
100%
Steps to Reproduce:
1. Deploy the cluster and configure ODF.
2. Redeploy the cluster with the same configuration.
Actual results:
2025-06-12 17:45:46.662320 C | rookcmd: failed to configure devices: failed to get device already provisioned by ceph-volume raw: osd.2: "94334bad-3846-4483-8890-d400951f2a1d" belonging to a different ceph cluster "9a5b2164-c8b1-4b24-8b01-0598ef135098"
Expected results:
Drives are wiped sufficiently so that ODF can be redeployed.
Additional info:
This is a long-standing issue, and I was hoping that this flag would wipe the drives appropriately so the system can be redeployed without manual intervention (a sketch of the manual wipe currently required follows below).
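Until the flag handles this case, the usual manual intervention is to clear the old Ceph bluestore labels from each OSD disk before redeploying, for example with wipefs, sgdisk, and dd, which is the approach Rook's cleanup documentation describes. A minimal sketch of doing this as a one-off privileged Job; the node name, container image, and device path are illustrative assumptions:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: wipe-osd-disk                    # illustrative name
  namespace: openshift-local-storage
spec:
  template:
    spec:
      restartPolicy: Never
      nodeName: worker-0                 # illustrative: run once per storage node
      containers:
        - name: wipe
          # illustrative image; must contain wipefs, sgdisk, and dd
          image: registry.example.com/tools:latest
          securityContext:
            privileged: true             # required for raw device access
          command: ["/bin/sh", "-c"]
          # Clear filesystem/bluestore signatures, zap the partition table,
          # and zero the start of the disk where ceph-volume stores its labels.
          args:
            - >
              wipefs -a /dev/sdb &&
              sgdisk --zap-all /dev/sdb &&
              dd if=/dev/zero of=/dev/sdb bs=1M count=100 oflag=direct
          volumeMounts:
            - name: dev
              mountPath: /dev
      volumes:
        - name: dev
          hostPath:
            path: /dev
```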
is related to:
ODFRFE-19 Support for the ODF Operator to cleanup ceph bluestore metadata from OSD disks before deploying the cluster (Backlog)