Data Foundation Bugs / DFBUGS-2563

mon_target_pg_per_osd flag value not updating in cephcluster after patching


    • Type: Bug
    • Resolution: Cannot Reproduce
    • Affects Version: odf-4.19
    • Component: odf-operator
    • Architecture: ppc64le
    • Status: Proposed

       

      Description of problem - Provide a detailed description of the issue encountered, including logs/command-output snippets and screenshots if the issue is observed in the UI:

      The mon_target_pg_per_osd flag value is not updated in the CephCluster after patching the StorageCluster.

      The OCP platform infrastructure and deployment type (AWS, Bare Metal, VMware, etc. Please clarify if it is platform agnostic deployment), (IPI/UPI):

      IBM Power UPI

      The ODF deployment type (Internal, External, Internal-Attached (LSO), Multicluster, DR, Provider, etc):

      Internal-Attached (LSO)

       

      The version of all relevant components (OCP, ODF, RHCS, ACM whichever is applicable):

      OCP:  4.19.0-rc.0
      ODF:  4.19.0-68.stable

       

      Does this issue impact your ability to continue to work with the product?

      yes

       

      Is there any workaround available to the best of your knowledge?

      No

       

      Can this issue be reproduced? If so, please provide the hit rate

      100%

       

      Can this issue be reproduced from the UI?

      NA

      If this is a regression, please provide more details to justify this:

       

      Steps to Reproduce:

      1. Deploy ODF on a new cluster.

      2. Create the Storage-system and wait for it to reach the Ready state.

      3. Apply the following patch command (a verification check is sketched after the patch below):

      oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge --patch '{
        "spec": {
          "managedResources": {
            "cephCluster": {
              "cephConfig": {
                "global": {
                  "mon_target_pg_per_osd": "100"
                }
              }
            }
          }
        }
      }'
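
      After patching, one way to confirm whether the value propagated (a minimal check, using the default resource names shown in this report) is to compare the value set on the StorageCluster with the value reconciled into the CephCluster:

      # Value set on the StorageCluster spec (should print 100 after the patch)
      oc get storagecluster ocs-storagecluster -n openshift-storage \
        -o jsonpath='{.spec.managedResources.cephCluster.cephConfig.global.mon_target_pg_per_osd}'

      # Value reconciled into the CephCluster spec (still prints 400 in this report)
      oc get cephcluster ocs-storagecluster-cephcluster -n openshift-storage \
        -o jsonpath='{.spec.cephConfig.global.mon_target_pg_per_osd}'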

      The exact date and time when the issue was observed, including timezone details:

      19-05-2025, 11:30 AM IST

      Actual results:

      The mon_target_pg_per_osd value is not updated in the CephCluster after patching; it still shows "400":

      [root@veera-419-a10e-bastion-0 ~]# oc get cephcluster -o yaml
      apiVersion: v1
      items:
      - apiVersion: ceph.rook.io/v1
        kind: CephCluster
        metadata:
          creationTimestamp: "2025-05-15T16:02:40Z"
          finalizers:
          - cephcluster.ceph.rook.io
          generation: 1
          labels:
            app: ocs-storagecluster
          name: ocs-storagecluster-cephcluster
          namespace: openshift-storage
          ownerReferences:
          - apiVersion: ocs.openshift.io/v1
            blockOwnerDeletion: true
            controller: true
            kind: StorageCluster
            name: ocs-storagecluster
            uid: dd4198ad-e37a-4994-8444-ab6d243aceb3
          resourceVersion: "2749405"
          uid: 9168d1f8-15b6-458b-a016-15b99cfd763e
        spec:
          cephConfig:
            global:
              mon_max_pg_per_osd: "1000"
              mon_target_pg_per_osd: "400"
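
      For completeness, if the rook-ceph tools deployment is enabled in the cluster (an assumption, it is not shown in this report), the value actually applied on the Ceph side can also be checked:

      # Query the mon config from inside the toolbox pod (assumes deploy/rook-ceph-tools exists)
      oc -n openshift-storage rsh deploy/rook-ceph-tools ceph config get mon mon_target_pg_per_osd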
       

      Expected results:

      The mon_target_pg_per_osd value should be updated in the CephCluster to "100", matching the patched StorageCluster spec.
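
      For illustration, once the patch is reconciled the CephCluster spec would be expected to show the patched value; a sketch based on the output above, with only the relevant fields shown:

      spec:
        cephConfig:
          global:
            mon_max_pg_per_osd: "1000"
            mon_target_pg_per_osd: "100"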

      Logs collected and log location:

       

      Additional info:

       
       

        1. must-gather.tar.gz (61.07 MB, Veerareddy Tippireddy)

              mparida@redhat.com Malay Kumar Parida
              rh-ee-vtippire Veerareddy Tippireddy
              Elad Ben Aharon