Observability and Data Analysis Program
OBSDA-71

Pick daemon name from the record when using syslog forwarding with addLogSource field


    • Type: Feature
    • Resolution: Done
    • Fix Version: Logging 5.2

      What is the problem that your customer is facing?

      Log forwarding over the syslog protocol with the addLogSource field cannot report which daemon produced a given journal log message.

      o Current behavior for syslog forwarded messages:

      <134>Jan 12 16:18:59 worker0 fluentd: I0112 16:18:57.831175 1763 setters.go:77] Using node IP: "192.168.200.13"
      => The fixed string "fluentd" is printed in every log message, so customers cannot identify which daemon printed it.

      o Expected behavior for syslog forwarded messages:

      <134>Jan 12 16:18:59 worker0 hyperkube[1763]: I0112 16:18:57.831175 1763 setters.go:77] Using node IP: "192.168.200.13"
      => The daemon name should be included in every log message.
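
      The difference between the two messages above is the TAG field of the RFC 3164 syslog header. A minimal parsing sketch (the regex and variable names are illustrative, not part of any product code) shows what a receiver can and cannot recover:

```python
import re

# RFC 3164-style message: <PRI>TIMESTAMP HOSTNAME TAG[PID]: MSG
SYSLOG_RE = re.compile(
    r"^<(?P<pri>\d+)>"
    r"(?P<timestamp>\w{3} +\d+ \d{2}:\d{2}:\d{2}) "
    r"(?P<hostname>\S+) "
    r"(?P<tag>[^\[:\s]+)(?:\[(?P<pid>\d+)\])?: "
    r"(?P<msg>.*)$"
)

current = '<134>Jan 12 16:18:59 worker0 fluentd: I0112 16:18:57.831175 1763 setters.go:77] Using node IP: "192.168.200.13"'
expected = '<134>Jan 12 16:18:59 worker0 hyperkube[1763]: I0112 16:18:57.831175 1763 setters.go:77] Using node IP: "192.168.200.13"'

for label, line in [("current", current), ("expected", expected)]:
    m = SYSLOG_RE.match(line)
    # current  -> tag "fluentd" (collector name, no daemon information)
    # expected -> tag "hyperkube" with PID 1763 (the actual daemon)
    print(label, "->", m.group("tag"), m.group("pid"))
```

With the current behavior the TAG is always the collector ("fluentd"), so the originating daemon is unrecoverable from the message alone.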

      What is the business impact, if any, if this request will not be made available?

      The daemon name is important information for troubleshooting cluster issues.
      If this feature is not implemented, customers have no way to identify which process sent a log message when using syslog forwarding.

      What are your expectations for this feature?

      This feature is already implemented in the legacy syslog forwarding feature.
      The "systemd.u.SYSLOG_IDENTIFIER" field in the journal log record is the key used to identify the daemon name.

      This same feature is also required in the Log Forwarding API.
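
      As a sketch, the equivalent configuration under the Log Forwarding API could look like the following ClusterLogForwarder with a syslog output honoring addLogSource (the output name, URL, and pipeline names below are illustrative assumptions, not taken from this issue):

          ---
          apiVersion: logging.openshift.io/v1
          kind: ClusterLogForwarder
          metadata:
            name: instance
            namespace: openshift-logging
          spec:
            outputs:
              - name: rsyslog-remote
                type: syslog
                url: udp://192.168.122.1:514
                syslog:
                  facility: local0
                  severity: debug
                  rfc: RFC3164
                  addLogSource: true
            pipelines:
              - name: infra-syslog
                inputRefs:
                  - infrastructure
                outputRefs:
                  - rsyslog-remote
          ---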

      The following steps show how the daemon name is printed with the legacy syslog forwarding feature:

      1. Deploy ClusterLogging with "clusterlogging.openshift.io/logforwardingtechpreview: enabled"
      2. Prepare an external syslog server
      3. Apply the following ConfigMap in the openshift-logging namespace, then check that the external syslog server receives messages from ClusterLogging
      
          ---
          kind: ConfigMap
          apiVersion: v1
          metadata:
            name: syslog
            namespace: openshift-logging
          data:
            syslog.conf: |
              <store>
               @type syslog_buffered
               remote_syslog 192.168.122.1  
               port 514
               hostname rhocp4
               remove_tag_prefix tag
               tag_key ident,systemd.u.SYSLOG_IDENTIFIER
               facility local0
               severity debug
               use_record true
               payload_key message
              </store>
          ---
      4. Apply the following LogForwarding object
          ---
          apiVersion: logging.openshift.io/v1alpha1
          kind: LogForwarding
          metadata:
            name: instance
          spec:
            disableDefaultForwarding: true
            outputs:
              - name: user-created-es
                type: elasticsearch
                endpoint: elasticsearch.openshift-logging.svc:9200
                secret:
                  name: fluentd
            pipelines:
              - name: app-pipeline
                inputSource: logs.app
                outputRefs:
                  - user-created-es
              - name: infra-pipeline
                inputSource: logs.infra
                outputRefs:
                  - user-created-es
              - name: audit-pipeline
                inputSource: logs.audit
                outputRefs:
                  - user-created-es
          ---
      5. Check if external syslog server receives messages with daemon names from ClusterLogging
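
      The tag selection performed by "tag_key ident,systemd.u.SYSLOG_IDENTIFIER" in step 3 can be sketched as follows: the first non-empty record field in the list becomes the syslog TAG. The field names come from the fluentd config above; the function name and the fallback value are illustrative.

```python
def syslog_tag(record: dict, default: str = "fluentd") -> str:
    """Return the syslog TAG for a log record (sketch).

    Mirrors `tag_key ident,systemd.u.SYSLOG_IDENTIFIER`: the first
    listed field that is present and non-empty wins; otherwise a
    fixed fallback is used.
    """
    for key in ("ident", "systemd.u.SYSLOG_IDENTIFIER"):
        value = record.get(key)
        if value:
            return value
    return default

# Journal records carry the daemon name; container records may not.
journal_record = {"systemd.u.SYSLOG_IDENTIFIER": "hyperkube", "message": "Using node IP ..."}
container_record = {"message": "no identifier present"}

print(syslog_tag(journal_record))    # hyperkube
print(syslog_tag(container_record))  # fluentd
```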

       

            Keiichi Kii (kkii@redhat.com)