Bug
Resolution: Unresolved
Critical
odf-4.17
Description of problem (please be as detailed as possible and provide log
snippets):
When we replace an OSD disk, the new OSDs are created with new OSD IDs.
We followed the steps documented here: https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/replacing_devices/openshift_data_foundation_deployed_using_local_storage_devices#replacing-operational-or-failed-storage-devices-on-ibm-power-systems_rhodf
We tried two OSD replacements and got the results below:
rook-ceph-osd-2-9884f9c44-nd9jw 2/2 Running 0 137m
rook-ceph-osd-3-5b48f59cbd-nhn7k 2/2 Running 0 3h51m
rook-ceph-osd-4-86c497596f-622mb 2/2 Running 0 154m
The same behavior is seen in previous ODF versions as well.
Version of all relevant components (if applicable):
OCP - 4.17.1
ODF - odf-operator.v4.17.0-rhodf
Does this issue impact your ability to continue to work with the product
(please explain in detail what is the user impact)?
Is there any workaround available to the best of your knowledge?
Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?
Is this issue reproducible?
Yes
Can this issue reproduce from the UI?
NA
If this is a regression, please provide more details to justify this:
Steps to Reproduce:
1. Deploy ODF.
2. Follow the steps for OSD disk replacement: https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/replacing_devices/openshift_data_foundation_deployed_using_local_storage_devices#replacing-operational-or-failed-storage-devices-on-ibm-power-systems_rhodf
3. The new OSDs are created with new OSD IDs.
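The core of the documented procedure can be sketched as below. This is a minimal sketch, assuming the default openshift-storage namespace and the ocs-osd-removal job template described in the linked documentation; the OSD ID used here is hypothetical, and the commands require a live cluster to run.

```shell
# Hypothetical ID of the OSD being replaced.
osd_id=3

# Scale down the deployment of the failed OSD.
oc scale -n openshift-storage deployment rook-ceph-osd-${osd_id} --replicas=0

# Run the removal job to purge the OSD from the Ceph cluster.
oc process -n openshift-storage ocs-osd-removal \
    -p FAILED_OSD_IDS=${osd_id} | oc create -f -

# After replacing the physical disk and deleting the released PV, the
# operator creates a new OSD; check which ID the new pod received.
oc get pods -n openshift-storage -l app=rook-ceph-osd
```

It is at the last step that the new pod appears with a new OSD ID (e.g. rook-ceph-osd-4) instead of reusing the removed one.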
Actual results:
We tried two OSD replacements and got the results below:
rook-ceph-osd-2-9884f9c44-nd9jw 2/2 Running 0 137m
rook-ceph-osd-3-5b48f59cbd-nhn7k 2/2 Running 0 3h51m
rook-ceph-osd-4-86c497596f-622mb 2/2 Running 0 154m
Expected results:
The new OSDs should retain the same OSD IDs as the ones they replace.
Additional info: