OpenShift Logging / LOG-4090

Time field of log message does not parse as structured.time in fluentd


Details

    • Parsed log messages now also include "structured.time" when the output destination supports it. This fix adds 'keep_time_key' to the fluentd parsing configuration when "parse: json" is enabled on the pipeline.
    • Bug Fix
    • Log Collection - Sprint 236, Log Collection - Sprint 237

    Description

      Description of problem:

      The time field of a log message is not parsed into structured.time by default in fluentd (when a timestamp is also included).

      For the following log message:

      { "time": "2023-04-13 13:41:40.3958", "level": "INFO", "message": "Get OpenIdConfiguration" }

      only the "structured.level" and "structured.message" fields are created for "level" and "message" when JSON parsing is enabled in the ClusterLogForwarder configuration; the "structured.time" field is not created in Kibana.

      By default, fluentd's parser plugin consumes any JSON "time" field as the record's timestamp instead of keeping it in the parsed record, which can be verified through https://docs.fluentd.org/filter/parser#reserve_time
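      A minimal sketch of the relevant fluentd parser configuration (the filter match and key names here are illustrative, not the exact generated config); adding keep_time_key, as the fix does, retains the original "time" field in the parsed record instead of dropping it once it has been used as the event time:

      <filter kubernetes.**>
        @type parser
        key_name message        # parse the JSON payload carried in the "message" field
        reserve_data true       # keep the original, unparsed fields alongside the parsed ones
        <parse>
          @type json
          keep_time_key true    # keep the "time" key in the record so structured.time can be populated
        </parse>
      </filter>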

      Version-Release number of selected component (if applicable):

      RHOCP cluster version 4.11
      Red Hat OpenShift Logging versions 5.5.6 and 5.6.5

      How reproducible:

      Consistently reproducible.

      Steps to Reproduce:

      1. Configure a ClusterLogForwarder and enable JSON parsing (see the sketch after these steps).
      2. Create index patterns in Kibana with the time field set as the timestamp.
      3. Check a log in Kibana whose message contains a time field; it will not have a structured.time key.
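
      For reference, a minimal ClusterLogForwarder sketch with JSON parsing enabled (the pipeline name and the structuredTypeKey value are illustrative; the customer's actual configuration may differ):

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        outputDefaults:
          elasticsearch:
            structuredTypeKey: kubernetes.namespace_name   # index structured records per namespace (illustrative)
        pipelines:
          - name: parsed-app-logs      # illustrative pipeline name
            inputRefs:
              - application
            outputRefs:
              - default
            parse: json                # enables JSON parsing and the structured.* fields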

      Actual results:

      structured.time field is not present

      Expected results:

      structured.time field should be present
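
      For the sample message above, the indexed document would then be expected to contain something like the following (a sketch; surrounding metadata fields are omitted):

      {
        "structured": {
          "time": "2023-04-13 13:41:40.3958",
          "level": "INFO",
          "message": "Get OpenIdConfiguration"
        }
      }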

      Additional info:

      The customer's environment is Red Hat OpenShift Logging version 5.5.6, but I also found a similar issue in Red Hat OpenShift Logging version 5.6.5 while reproducing it.

      The workaround for this issue is to use the vector collector instead of fluentd.

       

          People

            cahartma@redhat.com Casey Hartman
            rhn-support-amanverm Aman Dev Verma
            Kabir Bharti Kabir Bharti
