Bug
Resolution: Unresolved
Major
DO370 - ODF4.16-en-3-20251212
en-US (English)
Please fill in the following information:
| URL: | https://role.rhu.redhat.com/rol-rhu/app/courses/do370-4.16/pages/ch02 |
| Reporter RHNID: | hemoller |
| Section Title: | OpenShift Data Foundation Storage Classes |
Issue description
Internal User Feedback
==================
Description: ch02s05 -> Block storage:
This makes no sense...
From a Ceph perspective, all standard protocols (RBD, CephFS, and RGW) store data in Ceph pools, which by default are spread across all associated storage media.
TL;DR: Any read or write of more than 4 MB will ALWAYS hit more than one device, regardless of the client interface used (RBD, CephFS, or RGW/S3).
CephFS can be configured to split data and metadata between different pools. For that to make sense, the CephFS metadata pool should be pinned to faster devices than the data pool (a configuration sketch follows this feedback block).
For RBD, we just put the data in a pool.
For RGW, we have a pool for S3 metadata, S3 logs, S3 data, and potentially S3 multipart uploads, S3 IA (infrequently accessed), and a few others.
But again, ALL data that a Ceph client writes or reads is spread out across a significant number of devices, ensuring better performance as the Ceph storage scales out (and somewhat when it scales up).
======================
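The metadata pool pinning mentioned in the feedback can be illustrated with a minimal ceph CLI sketch. It assumes a cluster whose OSDs carry the "ssd" device class and a CephFS whose metadata pool is named cephfs.myfs.meta; the rule name fs-meta-on-ssd and the pool name are hypothetical placeholders, not names taken from the course environment:

    # Create a CRUSH rule that only selects OSDs tagged with the "ssd" device class,
    # using hosts as the failure domain.
    ceph osd crush rule create-replicated fs-meta-on-ssd default host ssd

    # Pin the (hypothetical) CephFS metadata pool to that rule; the data pool keeps
    # its default rule and stays spread across the remaining devices.
    ceph osd pool set cephfs.myfs.meta crush_rule fs-meta-on-ssd

In an ODF deployment the operator manages these pools, so the equivalent pinning would normally be expressed through the CephFilesystem resource's deviceClass settings rather than run by hand.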
Steps to reproduce:
Workaround:
Expected result: