OpenShift Logging / LOG-6813

Duplicated timestamp fields in some outputs


    • Type: Bug
    • Resolution: Not a Bug
    • Priority: Normal
    • Affects Version/s: Logging 6.2.0
    • Component/s: Log Collection
    • Release Note Type: Bug Fix
    • Sprint: Log Collection - Sprint 268
    • Severity: Low

      Description of problem:

      Pull request pull/2908 adds a timestamp field to all supported outputs, with its value aligned to @timestamp. Since that PR was merged, both timestamp and @timestamp appear in records delivered to the Elasticsearch, Kafka, and syslog receivers. Emitting both is confusing; we need to make clear which field is the preferred one for each output. (A prune-filter workaround is sketched after the syslog sample below.)
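
      For reference, the duplication shows up with an ordinary forwarder configuration. Below is a minimal sketch using the observability.openshift.io/v1 API; the service account, output name, URL, and index are hypothetical, and the Kafka and syslog outputs used for the other samples are wired up the same way:

      apiVersion: observability.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: collector
        namespace: openshift-logging
      spec:
        serviceAccount:
          name: collector              # hypothetical SA bound to the collect-audit-logs cluster role
        outputs:
          - name: external-es          # hypothetical external Elasticsearch 8 receiver
            type: elasticsearch
            elasticsearch:
              url: https://elasticsearch.example.com:9200
              version: 8
              index: audit-write       # hypothetical index name
        pipelines:
          - name: audit-to-es
            inputRefs:
              - audit
            outputRefs:
              - external-es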

      Elasticsearch:

      {
        "_index" : "audit-000001",
        "_type" : "_doc",
        "_id" : "NzNlYzhmN2YtMGQ0Yy00MjNjLTg5ODUtZGM1NDNjYjliMDVi",
        "_score" : 8.866748,
        "_source" : {
          "level" : "info",
          "openshift" : {
            "sequence" : 1740144087288977271,
            "cluster_id" : "c3806457-409a-4d33-8685-42cbd507f4e6"
          },
          "message" : "xxxx",
          "hostname" : "anliazg18-rnfdd-worker-usgovvirginia1-tbtnh",
          "log_type" : "audit",
          "@timestamp" : "2025-02-21T13:21:27.288354332Z",
          "log_source" : "ovn",
          "timestamp" : "2025-02-21T13:21:27.288354332Z"
        }
      }
      

      Kafka:

      {
        "@timestamp": "2025-02-24T10:07:34.148873625Z",
        "hostname": "anliazr-j7xsq-worker-eastus33-rbv62",
        "level": "info",
        "log_source": "ovn",
        "log_type": "audit",
        "message": "2025-02-24T10:07:32.645Z|00038|acl_log(ovn_pinctrl0)|INFO|name=\"NP:ovn-test1:allow-same-namespace:Ingress:0\", verdict=allow, severity=alert, direction=to-lport: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:01,dl_dst=0a:58:0a:82:02:1e,nw_src=10.131.0.52,nw_dst=10.130.2.30,nw_tos=0,nw_ecn=0,nw_ttl=62,nw_frag=no,tp_src=36376,tp_dst=8080,tcp_flags=ack",
        "openshift": {
          "cluster_id": "fe617555-ce36-4bc6-9c34-cd8421d6153a",
          "sequence": 1740391654149338984
        },
        "timestamp": "2025-02-24T10:07:34.148873625Z"
      }
      
      

      syslog:

      {
        "@timestamp": "2025-02-24T08:58:02.789657851Z",
        "facility": "local0",
        "hostname": "anliazr-j7xsq-worker-eastus3-5ttfb",
        "kubernetes": {
         .............................
          },
          "namespace_id": "f626647a-367a-413b-af4e-0173850acfb2",
          ....
          "namespace_name": "openshift-monitoring",
          "pod_id": "82955a4d-9e7a-4961-96f2-13d72951c558",
          "pod_ip": "10.129.2.12",
          "pod_name": "thanos-querier-5c8488c4b8-52nk6",
          "pod_owner": "ReplicaSet/thanos-querier-5c8488c4b8"
        },
        "level": "info",
        "log_source": "container",
        "log_type": "infrastructure",
        "message": "I0224 08:58:02.789612       1 log.go:245] http: TLS handshake error from 10.128.2.8:50826: write tcp 10.129.2.12:9091->10.128.2.8:50826: write: connection reset by peer",
        "openshift": {
          "cluster_id": "fe617555-ce36-4bc6-9c34-cd8421d6153a",
          "sequence": 1740387482892004794
        },
        "proc_id": "-",
        "severity": "informational",
        "tag": "openshiftmonitoringthanosquerier",
        "timestamp": "2025-02-24T08:58:02.789657851Z"
      }
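
      As a workaround until the preferred field per output is settled, a prune filter can remove one of the two. Below is a minimal sketch extending the forwarder spec above; it assumes .timestamp (rather than ."@timestamp") is the field to drop, and the filter name is hypothetical:

      spec:
        filters:
          - name: drop-duplicate-timestamp
            type: prune
            prune:
              in:                      # fields listed under "in" are removed from each record
                - .timestamp
        pipelines:
          - name: audit-to-es
            inputRefs:
              - audit
            filterRefs:
              - drop-duplicate-timestamp
            outputRefs:
              - external-es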

      Version-Release number of selected component (if applicable):

      6.2

      How reproducible:

      always

      Steps to Reproduce:

      1. Configure a ClusterLogForwarder in Logging 6.2 with Elasticsearch, Kafka, and syslog outputs (see the sketch in the description).
      2. Forward audit and infrastructure logs through each pipeline.
      3. Inspect the records received by each output.

      Actual results:

      Both timestamp and @timestamp are present in every record delivered to the Elasticsearch, Kafka, and syslog receivers, always carrying the same value.

      Expected results:

      A single, clearly preferred timestamp field per output, or an explicit statement that emitting both is intended.

      Additional info:

        Assignee: Casey Hartman (cahartma@redhat.com)
        Reporter: Anping Li (rhn-support-anli)