OpenShift Logging / LOG-1351

[Logging 5.1] EO shouldn't try to upgrade the ES cluster after adding/removing storage.


Release Note Text:
      * Previously, as a cluster administrator, if you tried to add or remove storage from an Elasticsearch cluster, the OpenShift Elasticsearch Operator (EO) incorrectly tried to upgrade the Elasticsearch cluster, displaying `scheduledUpgrade: "True"` and `shardAllocationEnabled: primaries` and attempting to change the volumes. The current release fixes this issue, so the EO does not try to upgrade the Elasticsearch cluster.
      +
      The EO status now displays the following status information to indicate when you have tried to make an unsupported change to the Elasticsearch storage, which the EO has ignored:
      +
       - `StorageStructureChangeIgnored` when you try to change between using ephemeral and persistent storage structures.
       - `StorageClassNameChangeIgnored` when you try to change the storage class name.
       - `StorageSizeChangeIgnored` when you try to change the storage size.

      [NOTE]
      ====
      If you configure the `ClusterLogging` custom resource (CR) to switch from ephemeral to persistent storage, the EO creates a persistent volume claim (PVC) but does not create a persistent volume (PV). To clear the `StorageStructureChangeIgnored` status, you must revert the change to the `ClusterLogging` CR and delete the PVC.
      ====
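
      For example, the cleanup described in the note might look like this (a minimal sketch; it assumes the standard `openshift-logging` namespace and `instance` CR name, and the PVC name shown is illustrative):

          # Revert the storage change in the ClusterLogging CR
          oc -n openshift-logging edit clusterlogging instance

          # List the PVCs the EO created for the ES nodes, then delete them
          oc -n openshift-logging get pvc
          oc -n openshift-logging delete pvc elasticsearch-elasticsearch-cdm-ns8u94oy-1
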
Sprint: Logging (LogExp) - Sprint 202

      Description of problem:

      Deploy logging without enabling storage for ES. After all ES pods are running, add storage to ES. The ES cluster upgrade status then becomes `scheduledUpgrade: "True"`, and `shardAllocationEnabled` is set to `primaries`:

          nodes:
          - deploymentName: elasticsearch-cdm-ns8u94oy-1
            upgradeStatus:
              scheduledUpgrade: "True"
              underUpgrade: "True"
              upgradePhase: preparationComplete
          - deploymentName: elasticsearch-cdm-ns8u94oy-2
            upgradeStatus:
              scheduledUpgrade: "True"
          - deploymentName: elasticsearch-cdm-ns8u94oy-3
            upgradeStatus:
              scheduledUpgrade: "True"
          pods:
            client:
              failed: []
              notReady: []
              ready:
              - elasticsearch-cdm-ns8u94oy-2-698d6d5c5b-zlk4n
              - elasticsearch-cdm-ns8u94oy-3-5d86576b94-5kx69
              - elasticsearch-cdm-ns8u94oy-1-64f97c8746-hr4mq
            data:
              failed: []
              notReady: []
              ready:
              - elasticsearch-cdm-ns8u94oy-3-5d86576b94-5kx69
              - elasticsearch-cdm-ns8u94oy-1-64f97c8746-hr4mq
              - elasticsearch-cdm-ns8u94oy-2-698d6d5c5b-zlk4n
            master:
              failed: []
              notReady: []
              ready:
              - elasticsearch-cdm-ns8u94oy-3-5d86576b94-5kx69
              - elasticsearch-cdm-ns8u94oy-1-64f97c8746-hr4mq
              - elasticsearch-cdm-ns8u94oy-2-698d6d5c5b-zlk4n
          shardAllocationEnabled: primaries
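
      The status above can be inspected with, for example:

          oc -n openshift-logging get es/elasticsearch -o yaml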
      

      EO logs:

      {"cluster":"elasticsearch","component":"elasticsearch-operator","error":{"msg":"timed out waiting for node to rollout","node":"elasticsearch-cdm-ns8u94oy-1"},"go_arch":"amd64","go_os":"linux","go_version":"go1.15.7","level":"0","message":"unable to update node","name":{"Namespace":"openshift-logging","Name":"kibana"},"namespace":"openshift-logging","node":"elasticsearch-cdm-ns8u94oy-1","operator_version":"4.7.0","ts":"2021-05-07T00:57:37.671156772Z"}
      {"cluster":"elasticsearch","component":"elasticsearch-operator","error":{"msg":"timed out waiting for node to rollout","node":"elasticsearch-cdm-ns8u94oy-1"},"go_arch":"amd64","go_os":"linux","go_version":"go1.15.7","level":"0","message":"unable to update node","name":{"Namespace":"openshift-logging","Name":"kibana"},"namespace":"openshift-logging","node":"elasticsearch-cdm-ns8u94oy-1","operator_version":"4.7.0","ts":"2021-05-07T00:58:09.855651975Z"}
      {"cluster":"elasticsearch","component":"elasticsearch-operator","error":{"msg":"timed out waiting for node to rollout","node":"elasticsearch-cdm-ns8u94oy-1"},"go_arch":"amd64","go_os":"linux","go_version":"go1.15.7","level":"0","message":"unable to update node","name":{"Namespace":"openshift-logging","Name":"kibana"},"namespace":"openshift-logging","node":"elasticsearch-cdm-ns8u94oy-1","operator_version":"4.7.0","ts":"2021-05-07T00:58:40.714884796Z"}
      {"cluster":"elasticsearch","component":"elasticsearch-operator","error":{"msg":"timed out waiting for node to rollout","node":"elasticsearch-cdm-ns8u94oy-1"},"go_arch":"amd64","go_os":"linux","go_version":"go1.15.7","level":"0","message":"Unregistering future events","name":{"Namespace":"openshift-logging","Name":"kibana"},"namespace":"openshift-logging","node":"elasticsearch-cdm-ns8u94oy-1","operator_version":"4.7.0","ts":"2021-05-07T00:59:05.448910336Z"}
      {"cluster":"elasticsearch","component":"elasticsearch-operator","error":{"msg":"timed out waiting for node to rollout","node":"elasticsearch-cdm-ns8u94oy-1"},"go_arch":"amd64","go_os":"linux","go_version":"go1.15.7","level":"0","message":"unable to update node","name":{"Namespace":"openshift-logging","Name":"kibana"},"namespace":"openshift-logging","node":"elasticsearch-cdm-ns8u94oy-1","operator_version":"4.7.0","ts":"2021-05-07T00:59:11.619126665Z"}
      {"cluster":"elasticsearch","component":"elasticsearch-operator","error":{"msg":"timed out waiting for node to rollout","node":"elasticsearch-cdm-ns8u94oy-1"},"go_arch":"amd64","go_os":"linux","go_version":"go1.15.7","level":"0","message":"Flushing nodes","name":{"Namespace":"openshift-logging","Name":"kibana"},"namespace":"openshift-logging","node":"elasticsearch-cdm-ns8u94oy-1","objectKey":{"Namespace":"openshift-logging","Name":"elasticsearch"},"operator_version":"4.7.0","ts":"2021-05-07T00:59:12.619891446Z"}
      {"cluster":"elasticsearch","component":"elasticsearch-operator","error":{"msg":"timed out waiting for node to rollout","node":"elasticsearch-cdm-ns8u94oy-1"},"go_arch":"amd64","go_os":"linux","go_version":"go1.15.7","level":"0","message":"Registering future events","name":{"Namespace":"openshift-logging","Name":"kibana"},"namespace":"openshift-logging","node":"elasticsearch-cdm-ns8u94oy-1","objectKey":{"Namespace":"openshift-logging","Name":"elasticsearch"},"operator_version":"4.7.0","ts":"2021-05-07T01:01:40.435653703Z"}
      

      Version-Release number of selected component (if applicable):

      elasticsearch-operator.5.1.0-20

      How reproducible:

      Always

      Steps to Reproduce:

      Condition 1:

      1. Deploy CLO and EO.
      2. Create cl/instance with the following logStore configuration (a complete CR sketch follows the snippet):

        logStore:
          type: "elasticsearch"
          retentionPolicy: 
            application:
              maxAge: 60h 
            infra:
              maxAge: 3h
            audit:
              maxAge: 1d
          elasticsearch:
            nodeCount: 3
            redundancyPolicy: "SingleRedundancy"
            resources:
              requests:
                memory: "2Gi"
            storage: {}
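
      For reference, the snippet above is the `spec.logStore` stanza of a ClusterLogging custom resource; a minimal complete CR using the standard name and namespace might look like:

          apiVersion: logging.openshift.io/v1
          kind: ClusterLogging
          metadata:
            name: instance
            namespace: openshift-logging
          spec:
            managementState: Managed
            logStore:
              type: elasticsearch
              retentionPolicy:
                application:
                  maxAge: 60h
                infra:
                  maxAge: 3h
                audit:
                  maxAge: 1d
              elasticsearch:
                nodeCount: 3
                redundancyPolicy: SingleRedundancy
                resources:
                  requests:
                    memory: 2Gi
                storage: {}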

      3. Wait until the ES cluster status becomes green, then update cl/instance to add storage to ES (a patch example follows the snippet):

        logStore:
          type: "elasticsearch"
          retentionPolicy: 
            application:
              maxAge: 60h 
            infra:
              maxAge: 3h
            audit:
              maxAge: 1d
          elasticsearch:
            nodeCount: 3
            redundancyPolicy: "SingleRedundancy"
            resources:
              requests:
                memory: "2Gi"
            storage:
              storageClassName: "gp2"
              size: "20Gi"
      

      4. Check the ES status in es/elasticsearch, as shown below.
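
      For example, to pull just the per-node upgrade status and the shard allocation setting:

          oc -n openshift-logging get es/elasticsearch \
            -o jsonpath='{.status.nodes[*].upgradeStatus}{"\n"}{.status.shardAllocationEnabled}{"\n"}'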

      Condition 2:

      1. Deploy CLO and EO.
      2. Create cl/instance with storage enabled:

        logStore:
          type: "elasticsearch"
          retentionPolicy: 
            application:
              maxAge: 60h 
            infra:
              maxAge: 3h
            audit:
              maxAge: 1d
          elasticsearch:
            nodeCount: 3
            redundancyPolicy: "SingleRedundancy"
            resources:
              requests:
                memory: "2Gi"
            storage: 
              storageClassName: "gp2" 
              size: "20Gi"

      3. Wait until the ES cluster status becomes green, then update cl/instance to remove the storage (see the sketch after the snippet):

        logStore:
          type: "elasticsearch"
          retentionPolicy: 
            application:
              maxAge: 60h 
            infra:
              maxAge: 3h
            audit:
              maxAge: 1d
          elasticsearch:
            nodeCount: 3
            redundancyPolicy: "SingleRedundancy"
            resources:
              requests:
                memory: "2Gi"
            storage: {}
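
      Because removing a key with a JSON merge patch requires an explicit null, it is simpler to edit the CR directly for this step (a sketch):

          # Set spec.logStore.elasticsearch.storage back to {} in the editor
          oc -n openshift-logging edit clusterlogging instance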
      

      4. Check the ES status in es/elasticsearch, as in condition 1.

      Actual results:

      No ES pods restart, but the ES cluster `upgradeStatus` changes to `scheduledUpgrade: "True"`.

      No `Changing the storage structure for a custom resource is not supported` message appears when following the steps in condition 1.

      Expected results:

      The ES cluster should not be in upgrade status, and es/elasticsearch should have a message like:

          conditions:
          - lastTransitionTime: "2021-05-07T01:05:13Z"
            message: Changing the storage structure for a custom resource is not supported
            reason: StorageStructureChangeIgnored
            status: "True"
            type: StorageStructureChangeIgnored
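
      The expected condition could be checked with, for example:

          oc -n openshift-logging get es/elasticsearch \
            -o jsonpath='{.status.conditions[?(@.type=="StorageStructureChangeIgnored")]}{"\n"}'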
      

      Additional info:

      Assignee: Gerard Vanloo (Inactive)
      Reporter: Qiaoling Tang (qitang@redhat.com)