  OpenShift Logging / LOG-4969

[release-5.8] Loki doesn't watch the `spec.storage.tls.caName` for updating the status


    • Before this update, when configured to read a custom S3 Certificate Authority, the Loki Operator would not automatically update the configuration when the name of the ConfigMap or its contents changed. With this update, the Loki Operator watches for changes to the ConfigMap and automatically updates the generated configuration.
    • Doc Type: Bug Fix
    • Sprint: Log Storage - Sprint 246, Log Storage - Sprint 247, Log Storage - Sprint 248
    • Severity: Moderate

      Description of problem:

      The Loki Operator is not watching the ConfigMap referenced by `spec.storage.tls.caName`; as a result, the `LokiStack` status is not updated and Loki cannot be deployed.

      Version-Release number of selected component (if applicable):

      $ oc get csv -n openshift-logging|grep -E -i "loki|logging"
      cluster-logging.v5.8.1                  Red Hat OpenShift Logging          5.8.1     cluster-logging.v5.8.0                  Succeeded
      loki-operator.v5.8.1                    Loki Operator                      5.8.1     loki-operator.v5.8.0                    Succeeded
       

      How reproducible:

      Always

      Steps to Reproduce:

      Create the `LokiStack` CR with `spec.storage.tls.caName` set to the name of the ConfigMap that should contain the CA.

      apiVersion: loki.grafana.com/v1
      kind: LokiStack
      metadata:
        name: logging-loki
        namespace: openshift-logging
      spec:
        size: 1x.extra-small
        storage:
          schemas:
          - version: v12
            effectiveDate: '2023-12-08'
          secret:
            name: logging-loki-s3
            type: s3
          tls:
            caName: logging-loki-s3-ca
        storageClassName: gp3-csi
        tenants:
          mode: openshift-logging
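
      Apply the CR; the manifest above is assumed to be saved as `logging-loki-cr.yaml` (the same file name used later in the workaround):

      $ oc create -f logging-loki-cr.yaml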
      

      As the ConfigMap `logging-loki-s3-ca` doesn't exist yet in the `openshift-logging` namespace, the Loki pods are not deployed, as expected, and the `LokiStack` status shows the error `MissingObjectStorageCAConfigMap`:

      $ oc get lokistack logging-loki -o yaml -n openshift-logging|grep MissingObjectStorageCAConfigMap
          reason: MissingObjectStorageCAConfigMap
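
      To see the full condition rather than just the reason, the standard `status.conditions` field of the `LokiStack` resource can be inspected, for example:

      $ oc get lokistack logging-loki -n openshift-logging -o jsonpath='{.status.conditions}'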
      

      The ConfigMap containing the CA is then created:

      $ oc create -f logging-loki-s3-ca.yaml 
      configmap/logging-loki-s3-ca created
      
      $ oc describe configmap logging-loki-s3-ca 
      Name:         logging-loki-s3-ca
      Namespace:    openshift-logging
      Labels:       <none>
      Annotations:  <none>
      
      Data
      ====
      service-ca.crt:
      ...
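
      For reference, `logging-loki-s3-ca.yaml` is a plain ConfigMap; a minimal sketch is shown below (the certificate body is elided, and the `service-ca.crt` key matches the describe output above):

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: logging-loki-s3-ca
        namespace: openshift-logging
      data:
        service-ca.crt: |
          -----BEGIN CERTIFICATE-----
          ...
          -----END CERTIFICATE-----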
      

      Actual results:

      The LokiStack pods are still not deployed, even though the ConfigMap `logging-loki-s3-ca` containing the CA now exists:

      $ oc get pods
      NAME                                        READY   STATUS    RESTARTS   AGE
      cluster-logging-operator-56f78c6fff-z67nf   1/1     Running   0          3h14m
      

      And the `LokiStack` custom resource still reports the error `MissingObjectStorageCAConfigMap`.

      Expected results:

      When the ConfigMap is created, the Loki Operator should detect it and update the status and resources as defined.
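
      Assuming the `LokiStack` exposes a `Ready` condition once the deployment succeeds, the recovery could be verified with something like:

      $ oc wait lokistack/logging-loki -n openshift-logging --for=condition=Ready --timeout=300s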

      Workaround

      Delete the `LokiStack` CR:

      $ oc delete lokistack logging-loki
      lokistack.loki.grafana.com "logging-loki" deleted
      

      And create it again, now that the ConfigMap `logging-loki-s3-ca` exists:

      $ oc create -f logging-loki-cr.yaml 
      lokistack.loki.grafana.com/logging-loki created
      

      After doing this, the LokiStack pods are deployed:

      $ oc get pods -l app.kubernetes.io/name=lokistack
      NAME                                           READY   STATUS    RESTARTS   AGE
      logging-loki-compactor-0                       0/1     Pending   0          115s
      logging-loki-distributor-7f8ff878f5-rtv7l      1/1     Running   0          116s
      logging-loki-distributor-7f8ff878f5-xzxtz      1/1     Running   0          115s
      logging-loki-gateway-7c9c8bbb77-247c5          0/2     Pending   0          116s
      logging-loki-gateway-7c9c8bbb77-p5298          0/2     Pending   0          116s
      logging-loki-index-gateway-0                   1/1     Running   0          116s
      logging-loki-index-gateway-1                   0/1     Pending   0          81s
      logging-loki-ingester-0                        0/1     Pending   0          116s
      logging-loki-querier-79b6cb7dc9-kkcwp          1/1     Running   0          116s
      logging-loki-querier-79b6cb7dc9-rblbz          1/1     Running   0          116s
      logging-loki-query-frontend-6cc8956b64-gdvvh   0/1     Pending   0          116s
      logging-loki-query-frontend-6cc8956b64-mf9km   0/1     Pending   0          116s
      
