Bug
Resolution: Unresolved
Minor
DO370 - DO370-O4.16-en-2-20250317
en-US (English)
Please fill in the following information:
| URL: | https://rol.redhat.com/rol/app/courses/do370-4.16/pages/ch02s09 |
| Reporter RHNID: | wasim-rhls |
| Section Title: | Configuring Custom Storage Classes |
Issue description
This subsection gives poor guidance.
If a storage class is too slow, too fast, not resilient enough, etc., we should adjust the underlying pool's settings and/or the relevant CRUSH rule, as shown in the sketch below.
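As a minimal sketch of that approach (the rule name fast-ssd and the placeholder <pool-name> are illustrative, not from the course), retargeting a pool to a faster device class might look like:
ceph osd crush rule create-replicated fast-ssd default host ssd
ceph osd pool set <pool-name> crush_rule fast-ssd
Ceph then rebalances the existing data onto OSDs of the new device class without the client having to move anything.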
Recommending that the client move data between pools would only make sense if the PVC was misplaced upon creation.
In Ceph, we would never ask clients to move data between pools just to change the replica size or device class.
I think there is one corner case where it might make sense, which is if we need to move data between replicated and erasure-coded (EC) pools, but even there we could export and import the RBD images, transparently for the client, if the client allows for downtime (see the sketch below).
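A minimal sketch of that export/import path, using placeholder pool and image names and assuming the image is not in use during the move (for an EC destination, the image metadata would normally stay in a replicated pool and only the data would be placed on the EC pool via --data-pool):
rbd export <source-pool>/<image-name> - | rbd import - <dest-pool>/<image-name>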
For example, if we need to change the replica size from 3 to 2, then we would run
ceph osd pool set <pool-name> size 2
followed by
ceph osd pool set <pool-name> min_size 1
to allow for failures without downtime, assuming the backing storage is flash-based.
Steps to reproduce:
Workaround:
Expected result: