Data Foundation Bugs / DFBUGS-160

[2315464] [GSS] CephCluster progressing state due to validation check fail, sees two zones instead of three



      Description of problem (please be as detailed as possible and provide log
      snippets):

      • On a stretch cluster, an upgrade was performed. It appeared to be successful, but it was later noticed that the Ceph components were still using the older hotfix images; this was fixed by removing "reconcileStrategy: ignore".
      • Afterwards, the storagecluster was seen to be in "error" state due to the error below:
      - lastHeartbeatTime: "2024-09-28T11:07:32Z"
        lastTransitionTime: "2024-09-28T10:47:01Z"
        message: 'CephCluster error: failed to perform validation before cluster creation:
          expecting exactly three zones for the stretch cluster, but found 2'
        reason: ClusterStateError
        status: "True"
        type: Degraded
      • Additionally, the registry is unable to mount the CephFS volumes because it cannot reach the mon service; I suspect this is caused by the mismatching Ceph versions. The rook-ceph-csi-config ConfigMap was missing the mon IPs.
      • We tried applying the zone failureDomainKey and failureDomainValue labels to the ODF nodes, but it had no effect.
      • Below is the relevant config in the storagecluster YAML:

      failureDomain: zone
      failureDomainKey: topology.kubernetes.io/zone-principal
      failureDomainValues:
      - "true"

      <snip>

      kmsServerConnection: {}
      nodeTopologies:
        labels:
          kubernetes.io/hostname:
          - <node>-hnfz8
          - <node>-whnrh
          - <node>-9xv56
          - <node>-pgjxm
          topology.kubernetes.io/zone-principal:
          - "true"
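For context on the "found 2" message above: the stretch-cluster validation counts the distinct zone values carried by the nodes and requires exactly three (two data zones plus the arbiter). The following is a minimal sketch of that counting logic, not Rook's actual code; the node data is hypothetical, but it illustrates how a single node missing its zone label makes only two zones visible even when three zones physically exist.

```python
# Minimal sketch (not Rook's actual code) of a stretch-cluster zone check:
# collect the distinct values of the zone topology label across all nodes
# and require exactly three zones.

ZONE_LABEL = "topology.kubernetes.io/zone"


def count_zones(nodes):
    """Return the set of distinct zone label values found on the given nodes."""
    return {labels[ZONE_LABEL] for labels in nodes if ZONE_LABEL in labels}


def validate_stretch_cluster(nodes):
    """Raise if the nodes do not span exactly three zones."""
    zones = count_zones(nodes)
    if len(zones) != 3:
        raise ValueError(
            "expecting exactly three zones for the stretch cluster, "
            f"but found {len(zones)}"
        )
    return zones


# Hypothetical node labels: one node is missing its zone label, so only
# two zones are detected, reproducing the reported error.
nodes = [
    {"kubernetes.io/hostname": "node-hnfz8", ZONE_LABEL: "zone-a"},
    {"kubernetes.io/hostname": "node-whnrh", ZONE_LABEL: "zone-b"},
    {"kubernetes.io/hostname": "node-9xv56"},  # zone label missing
]
print(sorted(count_zones(nodes)))  # → ['zone-a', 'zone-b']
```

Under this reading, checking that every node (including the arbiter) carries the zone label the operator looks for would be the first thing to verify.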

      Version of all relevant components (if applicable):
      ODF 4.14.10

      Does this issue impact your ability to continue to work with the product
      (please explain in detail what is the user impact)?
      Unable to mount volumes, ceph version mismatch
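The missing mon IPs mentioned in the description can be spotted by inspecting the JSON that Rook stores in the rook-ceph-csi-config ConfigMap (under its csi-cluster-config-json key). A minimal sketch, assuming that key holds a JSON list of entries with "clusterID" and "monitors" fields; the sample data below is hypothetical:

```python
import json


def clusters_missing_mons(csi_cluster_config_json):
    """Return the clusterIDs whose entry has an empty or absent monitors list.

    `csi_cluster_config_json` is the string stored under the
    csi-cluster-config-json key of the rook-ceph-csi-config ConfigMap
    (format assumed here, not taken from this bug's logs).
    """
    entries = json.loads(csi_cluster_config_json)
    return [e["clusterID"] for e in entries if not e.get("monitors")]


# Hypothetical sample: one cluster entry with no mon IPs, matching the
# symptom where the CSI driver cannot reach the mon service.
sample = json.dumps([
    {"clusterID": "openshift-storage", "monitors": []},
])
print(clusters_missing_mons(sample))  # → ['openshift-storage']
```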

      Is there any workaround available to the best of your knowledge?
      NA

      Rate from 1 - 5 the complexity of the scenario you performed that caused this
      bug (1 - very simple, 5 - very complex)?

      Is this issue reproducible?
      NA

      Can this issue be reproduced from the UI?

      Actual results:
      ODF is unable to detect three zones when they are present.

      Expected results:
      ODF should detect three zones.

      Additional info:
      Next update.

              sapillai Santosh Pillai
              smulay@redhat.com Shriya Mulay
              Elad Ben Aharon Elad Ben Aharon