OpenShift Logging / LOG-2864

[vector] Cannot send logs to the default output when Loki is the default store in CLF


    • Epic: OBSDA-7 - Adopting Loki as an alternative to Elasticsearch to support more lightweight, easier to manage/operate storage scenarios
    • Status: VERIFIED
    • Sprint: Log Collection - Sprint 222

      The collector raises a configuration error when a ClusterLogForwarder is used to forward logs to the default output and Loki is the default log store.

      Steps to Reproduce:
      1. Use Loki as the default log store (a sketch of the LokiStack referenced here follows the ClusterLogging example below).

      apiVersion: "logging.openshift.io/v1"
      kind: "ClusterLogging"
      metadata:
        name: "instance"
        namespace: openshift-logging
      spec:
        managementState: "Managed"
        logStore:
          type: "lokistack"
          lokistack:
            name: lokistack-sample
        collection:
          type: "vector"
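
      Step 1 assumes a LokiStack named lokistack-sample already exists in openshift-logging. A rough sketch of such a resource is shown below; the size, storage secret name, and storage class are placeholders, and the exact fields can vary between Loki Operator releases.

      apiVersion: loki.grafana.com/v1
      kind: LokiStack
      metadata:
        name: lokistack-sample
        namespace: openshift-logging
      spec:
        size: 1x.extra-small
        storage:
          secret:
            name: lokistack-secret   # placeholder: object storage credentials secret
            type: s3
        storageClassName: gp2        # placeholder: any available StorageClass
        tenants:
          mode: openshift-logging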

      2. Use a ClusterLogForwarder (CLF) to forward all log types to the default output.

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        pipelines:
        - name: all-to-defaultES
          inputRefs:
          - infrastructure
          - application
          - audit
          outputRefs:
          - default
      

      Actual result:
      oc logs pod/collector-575nm -c collector
      2022-07-27T09:07:10.199007Z INFO vector::app: Log level is enabled. level="info"
      2022-07-27T09:07:10.199092Z INFO vector::app: Loading configs. paths=["/etc/vector/vector.toml"]
      2022-07-27T09:07:10.210016Z ERROR vector::cli: Configuration error. error=Sink "default_infra" has no inputs
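
      The error indicates that the generated /etc/vector/vector.toml contains a sink whose inputs list is empty. The fragment below only illustrates that shape: the sink name matches the error message, but the endpoint and other fields are hypothetical stand-ins rather than the exact configuration the collector generates (required settings such as labels, TLS, and auth are omitted).

      [sinks.default_infra]
      type = "loki"
      # No source or transform feeds this sink, so vector rejects the
      # configuration with: Sink "default_infra" has no inputs
      inputs = []
      endpoint = "https://lokistack-sample-gateway-http.openshift-logging.svc:8080"

      [sinks.default_infra.encoding]
      codec = "json"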

            Assignee: Robert Jacob (rojacob@redhat.com)
            Reporter: Anping Li (rhn-support-anli)
