Feature Request
Resolution: Done
openshift-4.13
Proposed title of this feature request
Allow disks to be wiped
What is the nature and description of the request?
We've installed ODF on one of our clusters with 4.13. After redeploying, it detects that an old ODF installation is still on the disks:
" 2023-07-28 14:27:09.231097 C | multus-validation: failed to configure devices: failed to get device already provisioned by ceph-volume raw: osd.3: "bf77454c-f528-40f6-97b5-bc82a8a129ef" belonging to a different ceph cluster "12845707-c85b-48f6-859f-a60cb2707bf5""
oc logs rook-ceph-osd-prepare-4cb20d591cd109de8b6f23f57b1b04dd-7pgtc -n openshift-storage
2023-08-30 10:59:45.845880 I | cephosd: skipping device "/mnt/ocs-deviceset-localblock-2-data-1t9q44" because it contains a filesystem "ceph_bluestore"
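For reference, the leftover signatures Rook is reacting to can be inspected directly on the node; a quick check along these lines (the node and device names are placeholders, not taken from this case):
# Show filesystem signatures on a suspect device from the admin host.
# An FSTYPE of "ceph_bluestore" matches the osd-prepare log above.
oc debug node/worker-0 -- chroot /host lsblk -f /dev/nvme0n1
# wipefs with no options only lists the signatures it finds.
oc debug node/worker-0 -- chroot /host wipefs /dev/nvme0n1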
When the disks are manually purged with the following script, the installation works again:
DISKS="/dev/nvme2n1 /dev/nvme1n1 /dev/nvme0n1 /dev/nvme3n1"
for DISK in $DISKS; do
    echo "$DISK"
    # Remove the GPT/MBR partition tables
    sgdisk --zap-all "$DISK"
    # Zero the first 100 MiB to clear leftover Ceph metadata
    dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync
    # Discard all remaining blocks on the device
    blkdiscard "$DISK"
done
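The purge has to run on every storage node. A minimal sketch of driving it from the admin host with oc debug (node and device names are placeholders for illustration):
for NODE in worker-0 worker-1 worker-2; do
  oc debug node/$NODE -- chroot /host bash -c '
    for DISK in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1; do
      # Same steps as above: drop the partition table, zero the start
      # of the device, then discard the remaining blocks.
      sgdisk --zap-all "$DISK"
      dd if=/dev/zero of="$DISK" bs=1M count=100 oflag=direct,dsync
      blkdiscard "$DISK"
    done'
done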
Previously (in 4.11), all disks were wiped when bootstrapping a new cluster, but something has changed in 4.13: installing ODF on a worker node that had ODF installed in a previous deployment now fails.
The Agent-Based installer does not clean the disks.
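Until the installer or the storage operators can do this themselves, one possible workaround is for the re-installation automation to wipe the data disks over SSH before booting the hosts into the agent ISO. A rough sketch, where the host names, the core user, and the device list are assumptions about the environment, and where it would need to be verified that wipefs covers everything the fuller script above cleans:
for HOST in worker-0.example.com worker-1.example.com worker-2.example.com; do
  # wipefs -a removes the signatures libblkid knows about; it does not
  # zero data or discard blocks, so keep the fuller script above if
  # that level of cleaning is required.
  ssh core@"$HOST" 'sudo wipefs -a /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1'
done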
Why does the customer need this? (List the business requirements here)
The customer needs this as part of the re-installation process they use with bare-metal clusters; without it, the automated installs fail.
List any affected packages or components.
Duplicates: RFE-2033 [LSO] Wipe local volumes before the first use (Accepted)