Product Technical Learning
PTL-13679

RHT2170880:DO370: User feedback on course


    • Type: Story
    • Resolution: Done
    • Priority: Minor
    • DO370 - ODF4.7-en-3-20221129
    • DO370
    • en-US (English)

      Please fill in the following information:


      URL: https://rol.redhat.com/rol/app/courses/do370-4.7/pages/
      Reporter RHNID: wasim-rhls
      Section Title:

      Issue description

      Here are things that I have learned, and that I wish had been covered when I first started.

      The ODF HA setup uses an ODF cluster that is stretched over multiple data centers: it should be two data centers with storage devices, and one with an arbiter node. Can you add a description of the reference architecture for a stretched ODF cluster?
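As a starting point for such a description, a stretched (arbiter) deployment is enabled in the StorageCluster resource. The following is only a sketch based on the ODF stretch-cluster feature, with placeholder zone names, not a tested configuration:

       apiVersion: ocs.openshift.io/v1
       kind: StorageCluster
       metadata:
         name: ocs-storagecluster
         namespace: openshift-storage
       spec:
         arbiter:
           enable: true                   # third site runs only a Ceph monitor, no storage
         nodeTopologies:
           arbiterLocation: arbiter-zone  # placeholder: zone label of the arbiter site
         storageDeviceSets:
           - name: ocs-deviceset
             replica: 4                   # stretch mode keeps two replicas in each data zone
             # (remaining device-set fields omitted)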

      With the stretched cluster, I do not understand why 4 replicas are required, and why we cannot do with 2 replicas spread over the 2 zones. I guess I could make a storage class for it, but I do not understand the consequences.
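If one did build a 2-replica setup, it would presumably mean creating an extra Rook CephBlockPool and a storage class on top of it; a hypothetical sketch (the pool name is made up):

       apiVersion: ceph.rook.io/v1
       kind: CephBlockPool
       metadata:
         name: replica2-pool            # hypothetical name
         namespace: openshift-storage
       spec:
         failureDomain: zone
         replicated:
           size: 2                      # one copy per data zone

One consequence worth spelling out: with size 2, losing one zone leaves a single remaining copy, so Ceph must either block writes or keep writing with no redundancy, whereas the default 4-replica stretch layout keeps two copies per zone and can ride out a zone outage safely.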

      Also, please describe how to configure ODF to run on tainted infra nodes. It turns out to work well; it is just that little extra.
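One commonly shown pattern (treat this as a sketch, not the course's official configuration) is to add a toleration for the infra taint under the StorageCluster placement so the ODF pods can schedule onto those nodes:

       spec:
         placement:
           all:
             tolerations:
               - key: node-role.kubernetes.io/infra
                 operator: Exists
                 effect: NoSchedule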

      Can you list the potential side effects of making your own storage classes? These are not limited to being on your own for storage housekeeping.

      Can you add an explanation of what it entails to refactor the storage cluster? For example: adding nodes, removing nodes, abandoning zones, and migrating to new zones. I could not remove nodes from the storage cluster on my own.

      ODF needs housekeeping to clean up dereferenced storage objects. This is a very important topic, because if we do not do that, the storage cluster fills up with dereferenced storage.

      In the documentation that means:

      • Annotating the PVCs with the reclaimspace.csiaddons.openshift.io/schedule annotation.
      • Running manual cleaning jobs.

      Can you please add a guided exercise for it?
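As a starting point for such an exercise, the two approaches above might look like this (the PVC name my-pvc is a placeholder):

       # Scheduled reclaim: annotate the PVC; the CSI add-ons operator picks it up
       oc annotate pvc my-pvc reclaimspace.csiaddons.openshift.io/schedule="@weekly"

       # One-off reclaim: create a ReclaimSpaceJob (a CSI add-ons CRD)
       apiVersion: csiaddons.openshift.io/v1alpha1
       kind: ReclaimSpaceJob
       metadata:
         name: reclaim-my-pvc          # hypothetical name
       spec:
         target:
           persistentVolumeClaim: my-pvc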

      Can you provide an explanation of the ODF storage capacity GUI? It shows raw capacity and its usage. How is one to inspect the size that is logically available, and its usage? One could explain the results that can be obtained by deploying the tools pod and running the Ceph CLI. It would be even better if the GUI were updated to provide such information.

      Can you provide a deep dive into the Ceph CLI? What can we do with it? Examples are:

       oc rsh deploy/rook-ceph-tools ceph status        # overall cluster health summary
       oc rsh deploy/rook-ceph-tools ceph osd tree      # OSD layout across nodes and zones
       oc rsh deploy/rook-ceph-tools ceph health detail # expands on any health warnings
       oc rsh deploy/rook-ceph-tools ceph df            # raw and per-pool usage

      That list is probably not complete; can you please add the most useful commands and explain what we should be looking at, and why? For example, the state of the PGs.
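A few more commands that tend to be useful in the tools pod (not an exhaustive list, and the interpretation notes are general Ceph guidance rather than course content):

       oc rsh deploy/rook-ceph-tools ceph pg stat              # all PGs should be active+clean
       oc rsh deploy/rook-ceph-tools ceph osd df               # per-OSD fill level; watch for imbalance
       oc rsh deploy/rook-ceph-tools ceph osd pool ls detail   # pool size/min_size and flags
       oc rsh deploy/rook-ceph-tools rados df                  # per-pool object counts and usage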

      With that being said, I look forward to the new version of the training, and to re-certification next year. Give my best regards to Asish, Manny, and all the wonderful people in the ODF team who have helped us with the support cases for our account.

      Steps to reproduce:

       

      Workaround:

       

      Expected result:

              rhn-support-fandrieu Francois Andrieu
              wraja@redhat.com Wasim Raja
              Votes: 0
              Watchers: 3