
LOG-2166: [Vector] CLO doesn't create correct configurations when forwarding different log types to different log stores


Description

      Create a ClusterLogForwarder instance with:

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        outputs:
        - name: secure-es
          secret:
            name: pipelinesecret
          type: elasticsearch
          url: https://elasticsearch-server.d4cm5.svc:9200
        pipelines:
        - inputRefs:
          - application
          labels:
            logging: app-logs
          name: forward-app-logs
          outputRefs:
          - secure-es
        - inputRefs:
          - infrastructure
          labels:
            logging: infra-logs
          name: forward-infra-logs
          outputRefs:
          - secure-es
        - inputRefs:
          - audit
          labels:
            logging: audit-logs
          name: forward-audit-logs
          outputRefs:
          - default
          - secure-es 

      The intent is to forward all log types to secure-es and only audit logs to the internal Elasticsearch (the default output). After checking the data in the two log stores, however, the application and infrastructure logs have been forwarded to the internal ES as well. The generated vector.toml contains:

      # Adding _id field
      [transforms.elasticsearch_preprocess]
      type = "remap"
      inputs = ["forward-app-logs","forward-audit-logs","forward-infra-logs"]
      source = """
      index = "default"
      if (.log_type == "application"){
        index = "app"
      }
      if (.log_type == "infrastructure"){
        index = "infra"
      }
      if (.log_type == "audit"){
        index = "audit"
      }
      ."write-index"=index+"-write"
      ._id = encode_base64(uuid_v4())
      """
      
      
      [sinks.secure_es]
      type = "elasticsearch"
      inputs = ["elasticsearch_preprocess"]
      endpoint = "https://elasticsearch-server.d4cm5.svc:9200"
      index = "{{ write-index }}"
      request.timeout_secs = 2147483648
      bulk_action = "create"
      id_key = "_id"
      # TLS Config
      [sinks.secure_es.tls]
      key_file = "/var/run/ocp-collector/secrets/pipelinesecret/tls.key"
      crt_file = "/var/run/ocp-collector/secrets/pipelinesecret/tls.crt"
      ca_file = "/var/run/ocp-collector/secrets/pipelinesecret/ca-bundle.crt"
      [sinks.default]
      type = "elasticsearch"
      inputs = ["elasticsearch_preprocess"]
      endpoint = "https://elasticsearch.openshift-logging.svc:9200"
      index = "{{ write-index }}"
      request.timeout_secs = 2147483648
      bulk_action = "create"
      id_key = "_id"
      # TLS Config
      [sinks.default.tls]
      key_file = "/var/run/ocp-collector/secrets/collector/tls.key"
      crt_file = "/var/run/ocp-collector/secrets/collector/tls.crt"
      ca_file = "/var/run/ocp-collector/secrets/collector/ca-bundle.crt" 

      Both sinks consume the same inputs, so logs from every pipeline reach both log stores regardless of each pipeline's outputRefs.
      Image: quay.io/openshift-logging/cluster-logging-operator@sha256:bddbeccc8e390d77f6672b958203f98177f171a3f12afe16449ec249b0c52a31
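
      For comparison, here is a minimal sketch of the wiring the ClusterLogForwarder above asks for. This is an assumed shape of the fix, not actual operator output, and the transform names secure_es_preprocess and default_preprocess are hypothetical: each output gets its own preprocess transform fed only by the pipelines whose outputRefs include that output, so the default sink receives audit logs alone.

      # Hypothetical per-output wiring (sketch only, not generated by CLO)
      [transforms.secure_es_preprocess]
      type = "remap"
      # secure-es is an outputRef of all three pipelines
      inputs = ["forward-app-logs","forward-audit-logs","forward-infra-logs"]
      # same remap source as elasticsearch_preprocess above
      source = """
      index = "default"
      if (.log_type == "application"){ index = "app" }
      if (.log_type == "infrastructure"){ index = "infra" }
      if (.log_type == "audit"){ index = "audit" }
      ."write-index" = index + "-write"
      ._id = encode_base64(uuid_v4())
      """

      [transforms.default_preprocess]
      type = "remap"
      # default appears only in the forward-audit-logs pipeline
      inputs = ["forward-audit-logs"]
      source = """
      ."write-index" = "audit-write"
      ._id = encode_base64(uuid_v4())
      """

      [sinks.secure_es]
      type = "elasticsearch"
      inputs = ["secure_es_preprocess"]
      endpoint = "https://elasticsearch-server.d4cm5.svc:9200"
      # index, bulk_action, id_key and TLS settings as in the generated config above

      [sinks.default]
      type = "elasticsearch"
      inputs = ["default_preprocess"]
      endpoint = "https://elasticsearch.openshift-logging.svc:9200"
      # index, bulk_action, id_key and TLS settings as in the generated config above

      With the inputs split this way, application and infrastructure logs reach only sinks.secure_es, while audit logs reach both sinks, matching the pipelines defined in the ClusterLogForwarder.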
