Type: Bug
Status: NEW
Resolution: Done-Errata
Priority: Normal
Severity: Moderate
Affects Version: Logging 6.0.z
Doc Type: Bug Fix
Sprints: Log Collection - Sprint 267, Log Collection - Sprint 268
Description of problem:
The ClusterLogForwarder status is misleading: it reports that everything is valid when the configuration is actually broken.
Version-Release number of selected component (if applicable):
$ oc get csv | grep -i logging
cluster-logging.v6.0.4   Red Hat OpenShift Logging   6.0.4   cluster-logging.v6.0.3   Succeeded
How reproducible:
Always
Steps to Reproduce:
Deploy Logging 6 with Loki and create a ClusterLogForwarder custom resource as below, where the same input - application - is sent twice to the same output.
This example is a simplification of a real ClusterLogForwarder with a long list of outputs/inputs/filters, in which the same mistake described below can be introduced by accident:
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  managementState: Managed
  outputs:
  - lokiStack:
      authentication:
        token:
          from: serviceAccount
      target:
        name: logging-loki
        namespace: openshift-logging
    name: default-lokistack
    tls:
      ca:
        configMapName: openshift-service-ca.crt
        key: service-ca.crt
    type: lokiStack
  pipelines:
  - inputRefs:
    - audit
    name: syslog
    outputRefs:
    - default-lokistack
  - inputRefs:
    - infrastructure
    - application
    name: logging-loki
    outputRefs:
    - default-lokistack
  - inputRefs:
    - application
    name: container-logs
    outputRefs:
    - default-lokistack
  serviceAccount:
    name: collector
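To reproduce, apply the resource and then check both the generated collector configuration and the reported status. A minimal sketch (the manifest filename is illustrative, and the generated ConfigMap name may vary between releases):

$ oc apply -f clf-collector.yaml
$ oc get configmap -n openshift-logging | grep collector    # should list the generated collector config; missing in this case
$ oc get clusterlogforwarder collector -n openshift-logging -o yaml    # inspect .status for the conditions described below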
Actual results:
The "collector-configmap" is not generated by the Logging Operator.
The ClusterLogForwarder custom resource shows the following status:
1. All "inputConditions" entries have "reason: ValidationSuccess" and "status: True"
2. All "outputConditions" entries have "reason: ValidationSuccess" and "status: True"
3. All "pipelineConditions" entries have "reason: ValidationSuccess" and "status: True"
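For illustration, every reported condition has roughly the following shape ("reason" and "status" are as observed; the "type" and "message" values here are assumptions, not actual operator output):

status:
  pipelineConditions:
  - message: pipeline "container-logs" is valid    # illustrative message
    reason: ValidationSuccess
    status: "True"
    type: observability.openshift.io/Valid         # illustrative condition type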
Expected results:
If an error exists in a pipeline, the following would be expected:
1. The pipeline containing the error is identified
2. The "status.conditions.pipelineConditions" entries report an error, since the pipeline is actually broken, instead of all entries showing "reason: ValidationSuccess" and "status: True"
Notes: if it were possible to detect this at validation time, when the configuration is written, reject the configuration and report the reason
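For illustration, a failing entry of roughly this shape would make the problem visible (the "reason", "type", and "message" values are hypothetical; the API may use different names):

status:
  pipelineConditions:
  - message: input "application" is routed to output "default-lokistack" by both "logging-loki" and "container-logs"    # hypothetical message
    reason: ValidationFailure    # hypothetical counterpart of ValidationSuccess
    status: "False"
    type: observability.openshift.io/Valid    # hypothetical condition type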
links to: RHBA-2025:147444 (Logging for Red Hat OpenShift - 6.2.1)