- Bug
- Resolution: Won't Do
- Minor
- None
- odf-4.16
- None
Description of problem (please be as detailed as possible and provide log
snippets):
When you create an additional data pool by adding it to the StorageCluster CR (/spec/managedResources/cephFilesystems/additionalDataPools),
a CephFS data pool is created.
When you delete it from the CR, the CephFS pool is not deleted at the Ceph level.
Version of all relevant components (if applicable):
ODF 4.16
Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?
No.
Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
1
Is this issue reproducible?
Yes.
Can this issue be reproduced from the UI?
No, you need the rook-ceph-tools pod to verify the pools at the Ceph level.
If this is a regression, please provide more details to justify this:
Steps to Reproduce:
1. Create an additional CephFS data pool by editing the StorageCluster CR. Example:
spec:
  managedResources:
    cephFilesystems:
      additionalDataPools:
        - compressionMode: aggressive
          name: my-fs-pool
          replicated:
            size: 2
2. Verify the pool exists at the Ceph level. Example:
oc exec rook-ceph-tools-7d8c4c54fc-sx9dg -- ceph osd pool ls
3. Delete the above element under additionalDataPools.
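For reference, steps 1 and 3 above can also be performed non-interactively with oc patch. This is a sketch assuming the default StorageCluster name (ocs-storagecluster) and namespace (openshift-storage); note that a JSON merge patch replaces the whole list, so patching an empty list removes the pool entry:

# Step 1: add the additional data pool
oc -n openshift-storage patch storagecluster ocs-storagecluster --type merge -p '{"spec":{"managedResources":{"cephFilesystems":{"additionalDataPools":[{"name":"my-fs-pool","compressionMode":"aggressive","replicated":{"size":2}}]}}}}'
# Step 3: remove the pool entry again
oc -n openshift-storage patch storagecluster ocs-storagecluster --type merge -p '{"spec":{"managedResources":{"cephFilesystems":{"additionalDataPools":[]}}}}'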
Actual results:
The pool still exists at the Ceph level.
Expected results:
The pool should be deleted at the Ceph level (at least when it contains no data).
Additional info:
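A possible manual cleanup for the orphaned pool, run via the rook-ceph-tools pod. This is a sketch assuming the default ODF namespace (openshift-storage), the default filesystem name (ocs-storagecluster-cephfilesystem), and Rook's <filesystem>-<pool> naming for additional data pools; confirm the actual pool name with ceph osd pool ls first:

# Locate the toolbox pod by its label
TOOLS=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name)
# Detach the orphaned data pool from the filesystem
oc -n openshift-storage exec "$TOOLS" -- ceph fs rm_data_pool ocs-storagecluster-cephfilesystem ocs-storagecluster-cephfilesystem-my-fs-pool
# Pool deletion is disabled by default; allow it, then remove the pool
oc -n openshift-storage exec "$TOOLS" -- ceph config set mon mon_allow_pool_delete true
oc -n openshift-storage exec "$TOOLS" -- ceph osd pool rm ocs-storagecluster-cephfilesystem-my-fs-pool ocs-storagecluster-cephfilesystem-my-fs-pool --yes-i-really-really-mean-it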