Data Foundation Bugs / DFBUGS-3991

[GSS] ceph configuration option bdev_async_discard is removed


    • Type: Bug
    • Resolution: Unresolved
    • Priority: Undefined
    • Affects Version: odf-4.18
    • Component: ceph/RADOS/x86
    • Architecture: x86_64
    • Severity: Critical

       

      Description of problem - Provide a detailed description of the issue encountered, including logs/command-output snippets and screenshots if the issue is observed in the UI:

      Recently, we started deploying and upgrading our clusters to OCP 4.18.21 and ODF version 4.18.9.

      Normally, we run these two commands after installing ODF, via the ceph-toolbox pod:

      $ oc -n openshift-storage exec ${TOOLS_POD} -- ceph config set osd.${OSD} bdev_enable_discard true
      $ oc -n openshift-storage exec ${TOOLS_POD} -- ceph config set osd.${OSD} bdev_async_discard true

      This is also described in your KB https://access.redhat.com/solutions/6975295
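
      For reference, a minimal sketch of how we apply both settings across every OSD from the toolbox pod; the label selector, variable names, and loop are illustrative and assume the standard rook-ceph-tools deployment:

      # Sketch only: look up the toolbox pod and apply both discard options to each OSD,
      # as was possible on releases prior to this upgrade.
      $ TOOLS_POD=$(oc -n openshift-storage get pod -l app=rook-ceph-tools -o name)
      $ for OSD in $(oc -n openshift-storage exec ${TOOLS_POD} -- ceph osd ls); do
          oc -n openshift-storage exec ${TOOLS_POD} -- ceph config set osd.${OSD} bdev_enable_discard true
          oc -n openshift-storage exec ${TOOLS_POD} -- ceph config set osd.${OSD} bdev_async_discard true
        done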

      However, the second command, for the bdev_async_discard parameter, now throws an error:

      $ oc exec -it rook-ceph-tools-695fddf56d-9xr29 -- ceph config set osd.0 bdev_async_discard true
      Error EINVAL: unrecognized config option 'bdev_async_discard'
      command terminated with exit code 22
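
      A quick way to check whether the running release still recognizes a given option is to ask for its help text. A sketch, reusing the same toolbox pod name purely for illustration:

      # `ceph config help <option>` prints the option's documentation when the release
      # knows it; for a removed option it fails, just like `config set` above.
      $ oc exec -it rook-ceph-tools-695fddf56d-9xr29 -- ceph config help bdev_async_discard
      $ oc exec -it rook-ceph-tools-695fddf56d-9xr29 -- ceph config help bdev_async_discard_threads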

      The OCP platform infrastructure and deployment type (AWS, Bare Metal, VMware, etc. Please clarify if it is platform agnostic deployment), (IPI/UPI):

      VMware

      The ODF deployment type (Internal, External, Internal-Attached (LSO), Multicluster, DR, Provider, etc):

      Internal

       

      The version of all relevant components (OCP, ODF, RHCS, ACM whichever is applicable):

       

      ODF version 4.18.9 seems to be running this Ceph version:

      ceph version 19.2.1-245.el9cp (45227ca7204586f8ebf0c2f98931aa70f5778f3c) squid (stable)
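
      For completeness, the version can be read straight from the toolbox pod (sketch; ${TOOLS_POD} as above):

      $ oc -n openshift-storage exec ${TOOLS_POD} -- ceph version
      # `ceph versions` (plural) additionally breaks the reported versions down per daemon type.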

      Does this issue impact your ability to continue to work with the product?

       

      This issue might block our upgrade toward OCP 4.18 and ODF 4.18.

      Is there any workaround available to the best of your knowledge?

      We will use bdev_async_discard_threads instead of bdev_async_discard.

      The setting bdev_async_discard_threads is a global setting, not a per-OSD one.


      sh-5.1$ ceph config set global bdev_async_discard_threads 1
      sh-5.1$ ceph config dump
      WHO                                              MASK  LEVEL     OPTION                                 VALUE                               RO
      global                                                 advanced  bdev_async_discard_threads             1
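
      Putting the workaround together, a minimal sketch of the adjusted procedure run inside the toolbox shell (OSD IDs are cluster-specific and shown only for illustration):

      # bdev_enable_discard is still set per OSD; the async-discard behaviour is now
      # driven by the global bdev_async_discard_threads option instead of bdev_async_discard.
      sh-5.1$ for OSD in $(ceph osd ls); do ceph config set osd.${OSD} bdev_enable_discard true; done
      sh-5.1$ ceph config set global bdev_async_discard_threads 1
      sh-5.1$ ceph config dump | grep discard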

      Can this issue be reproduced? If so, please provide the hit rate

      NA

       

      Can this issue be reproduced from the UI?

      NA

      If this is a regression, please provide more details to justify this:

      NA

      Actual results:

      The following upstream Ceph tracker issue describes exactly this problem:
      https://tracker.ceph.com/issues/70327

       

      Expected results:

       

      Logs collected and log location:

      supportshell-1.sush-001.prod.us-west-2.aws.redhat.com/04244496

      drwxrwxrwx+ 3 yank yank 59 Sep  3 12:50 0010-must-gather-tc01.tgz
      drwxrwxrwx+ 3 yank yank 59 Sep  3 13:22 0020-odf-must-gather-tc01.tgz
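
      For reference, the archives above are standard must-gather collections, produced roughly as sketched below (the ODF must-gather image name and tag are an assumption and may differ per installed release):

      # OCP must-gather with the default image:
      $ oc adm must-gather
      # ODF must-gather; image shown is an assumption, adjust to the installed 4.18 release:
      $ oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.18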

      Additional info:

       
       

              Radoslaw Zarzynski (rzarzyns@redhat.com)
              Abhishek Kumar (rhn-support-abhishku)
              Harish NV Rao