OpenShift Logging / LOG-6758

Collector configmap not generated when same input used twice for the same output


    • Before this change the collector verified a ClusterLogForwarder with multiple inputs to a LokiStack output as invalid due to incorrect internal processing logic. This fix modifies that logic and changes the collector configuration generated when forwarding to a LokiStack output.
    • Bug Fix
    • Log Collection - Sprint 267
    • Moderate

      Description of problem:

      The ClusterLogForwarder status is misleading: it reports that everything is valid, while the generated collector configuration is actually broken.

      Version-Release number of selected component (if applicable):

      $ oc get csv |grep -i logging
      cluster-logging.v6.0.4                             Red Hat OpenShift Logging          6.0.4                   cluster-logging.v6.0.3              Succeeded
      

      How reproducible:

      Always

      Steps to Reproduce:

      Deploy Logging 6 with Loki and create a ClusterLogForwarder custom resource as below, where the same input - application - is sent twice to the same output.

      This example is a simplification of a real ClusterLogForwarder with a long list of outputs/inputs/filters, where the same mistake can easily be introduced by error:

      apiVersion: observability.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: collector
        namespace: openshift-logging
      spec:
        managementState: Managed
        outputs:
          - lokiStack:
              authentication:
                token:
                  from: serviceAccount
              target:
                name: logging-loki
                namespace: openshift-logging
            name: default-lokistack
            tls:
              ca:
                configMapName: openshift-service-ca.crt
                key: service-ca.crt
            type: lokiStack
        pipelines:
          - inputRefs:
              - audit
            name: syslog
            outputRefs:
              - default-lokistack
          - inputRefs:
              - infrastructure
              - application
            name: logging-loki
            outputRefs:
              - default-lokistack
          - inputRefs:
              - application
            name: container-logs
            outputRefs:
              - default-lokistack
        serviceAccount:
          name: collector
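
      For reference, a minimal sketch of applying the resource above and checking that it is admitted, assuming the manifest is saved as clf-collector.yaml (the file name is hypothetical):

      # Apply the ClusterLogForwarder manifest shown above (file name is an assumption)
      $ oc apply -f clf-collector.yaml

      # Confirm the resource exists and inspect its reported state
      $ oc -n openshift-logging get clusterlogforwarder collector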
      

      Actual results:

      The "collector-configmap" is not generated by the Logging Operator.

      The "ClusterLogForwarder" custom resource shows the following status:
      1. All "inputConditions" entries have "reason: ValidationSuccess" and "status: True"
      2. All "outputConditions" entries have "reason: ValidationSuccess" and "status: True"
      3. All "pipelineConditions" entries have "reason: ValidationSuccess" and "status: True"
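
      Both points can be confirmed from the CLI; the exact name of the collector ConfigMap may vary, so the grep below is only a sketch:

      # The collector configuration ConfigMap is missing from the namespace
      $ oc -n openshift-logging get configmap | grep collector

      # The full status of the ClusterLogForwarder shows only ValidationSuccess / True conditions
      $ oc -n openshift-logging get clusterlogforwarder collector -o yaml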

      Expected results:

      If an error exists in the pipeline, it would be expected that the operator:
      1. Identifies the pipeline where the error exists
      2. Reports an error condition in "status.pipelineConditions", since the pipeline is actually broken, instead of every entry showing "reason: ValidationSuccess" and "status: True"

      Note: if the problem can be detected at validation time, when the configuration is written, reject the resource and report the reason.

              Assignee: Jeffrey Cantrill (jcantril@redhat.com)
              Reporter: Oscar Casal Sanchez (rhn-support-ocasalsa)