- Bug
- Resolution: Done
- Blocker
- DO370 - ODF4.16-en-1-20241120
- None
- False
- False
- 2
- en-US (English)
Issue description
Following ODF deployment as per Ch01GE02 (section 6), the "lab finish internal-cli" script removes the ODF cluster. The Ch02GE01 (section 4) "lab start workloads-file" script attempts to redeploy it, but Ceph fails to start up fully and is left in the HEALTH_WARN state because of missing volumes.
Steps to reproduce:
- Run "lab start internal-cli".
- Use solutions to fast-forward the "internal-cli" exercise.
- Verify that cephcluster is healthy (HEALTH_OK).
- Run "lab finish internal-cli".
- Run "lab start workloads-file".
- If the "Installing odf-operator" step times out, re-run "lab start workloads-file".
- Check the status of the cephcluster resource in the openshift-storage project:

  $ oc -n openshift-storage get cephcluster
  NAME ... MONCOUNT AGE ... MESSAGE HEALTH...
  ocs-sto... 3 11m ... Cluster created successfully HEALTH_WARN

- Verify that the expected ODF storage classes are missing:

  $ oc get sc -o name
  storageclass.storage.k8s.io/localvolume
  storageclass.storage.k8s.io/nfs-storage
  storageclass.storage.k8s.io/ocs-storagecluster-ceph-rgw
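The reproduction hinges on the cluster never returning to HEALTH_OK after the redeploy. A minimal polling helper makes that check scriptable; this is a sketch, not part of the lab scripts, and the `oc` invocation shown in the trailing comment (resource name and jsonpath) is an assumption about this environment:

```shell
#!/bin/sh
# Poll a health-reporting command until it prints HEALTH_OK or a timeout expires.
# Usage: wait_for_health "<command>" [timeout_seconds]
wait_for_health() {
  cmd=$1
  timeout=${2:-300}
  elapsed=0
  status=""
  while [ "$elapsed" -lt "$timeout" ]; do
    status=$($cmd 2>/dev/null)
    if [ "$status" = "HEALTH_OK" ]; then
      echo "HEALTH_OK after ${elapsed}s"
      return 0
    fi
    sleep 5
    elapsed=$((elapsed + 5))
  done
  echo "Timed out; last status: $status"
  return 1
}

# In the lab this could be called as (hypothetical resource name/jsonpath):
#   wait_for_health \
#     "oc -n openshift-storage get cephcluster ocs-storagecluster-cephcluster -o jsonpath={.status.ceph.health}" 600
```

A non-zero exit status from the helper then corresponds to the HEALTH_WARN outcome described above.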
Workaround:
Reprovision the lab environment, skip chapter 1, and go directly to "lab start workloads-file".