  Product Technical Learning / PTL-16018

DO370-4.16 Feedback: ch02s05 -> Block storage:...


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Major
    • DO370 - ODF4.16-en-3-20251212
    • DO370
    • en-US (English)

      Please fill in the following information:


      URL: https://role.rhu.redhat.com/rol-rhu/app/courses/do370-4.16/pages/ch02 
      Reporter RHNID: hemoller
      Section Title:  OpenShift Data Foundation Storage Classes                                                                        

      Issue description

      Internal User Feedback

      ==================

       Description:  ch02s05 -> Block storage:

      This makes no sense...
      From a Ceph perspective, all of the standard protocols (RBD, CephFS, and RGW) store data in Ceph pools, which by default are spread across all associated storage media.

      TL;DR: Any read or write of more than 4 MB will ALWAYS hit more than one device, regardless of the client interface used (RBD, CephFS, or RGW/S3).
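      For illustration, an RBD image is striped into RADOS objects of 4 MiB by default (order 22), so any I/O larger than one object necessarily spans several objects, and therefore several placement groups and OSDs. A minimal sketch on a lab cluster, assuming a block pool named rbd_pool and an image named test-img (both made-up names for this example):

        # Create a test image and inspect its object layout
        rbd create rbd_pool/test-img --size 1G
        rbd info rbd_pool/test-img
        # The output includes lines such as:
        #   order 22 (4 MiB objects)
        #   block_name_prefix: rbd_data.<id>
        # so a single 16 MiB write touches at least four distinct objects.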

      CephFS can be configured to split data and metadata between different pools. For that to make sense, the CephFS metadata pool should be pinned to faster devices than the data pool.
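      One way to pin the metadata pool to faster media is a CRUSH rule restricted to the ssd device class. A rough sketch with the plain ceph CLI (the rule, pool, and file system names are made up for the example; in OpenShift Data Foundation this would normally be driven through the operator resources rather than typed by hand):

        # Replicated CRUSH rule that only selects OSDs with device class "ssd"
        ceph osd crush rule create-replicated fast-meta default host ssd

        # Separate pools for metadata and data
        ceph osd pool create cephfs-metadata
        ceph osd pool create cephfs-data

        # Pin the metadata pool to the SSD-only rule
        ceph osd pool set cephfs-metadata crush_rule fast-meta

        # Create the file system from the two pools
        ceph fs new examplefs cephfs-metadata cephfs-data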

      For RBD, we just put the data in a pool.

      For RGW, we have pools for S3 metadata, S3 logs, and S3 data, and potentially for S3 multipart uploads, S3 IA (infrequently accessed) data, and a few others.
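      For reference, an out-of-the-box RGW zone typically creates pools along these lines (pool names depend on the zone name, "default" here, so treat the list as indicative rather than exhaustive):

        # List the RGW-related pools on a cluster with a default zone
        ceph osd pool ls | grep rgw
        # Typical result:
        #   .rgw.root
        #   default.rgw.meta
        #   default.rgw.log
        #   default.rgw.control
        #   default.rgw.buckets.index
        #   default.rgw.buckets.data
        #   default.rgw.buckets.non-ec     (multipart upload metadata)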

      But again, ALL data that a Ceph client writes or reads is spread across a significant number of devices, ensuring better performance as the Ceph storage scales out (and somewhat when it scales up).
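      That spread can be observed directly: CRUSH maps every RADOS object to a placement group and an acting set of OSDs, so consecutive objects of the same image or file usually land on different devices. A small sketch, reusing the assumed rbd_pool image from above:

        # Map two consecutive objects of the image to their OSDs
        # (the rbd_data.<id> prefix comes from "rbd info rbd_pool/test-img")
        ceph osd map rbd_pool rbd_data.<id>.0000000000000000
        ceph osd map rbd_pool rbd_data.<id>.0000000000000001
        # Each line reports a placement group and an acting set such as [3,7,1];
        # different objects generally resolve to different OSD sets.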
      ======================

      Steps to reproduce:

       

      Workaround:

       

      Expected result:
