OpenShift Logging / LOG-3767

[release-5.5] JSON logs are not sent when there is more than one output


      Cause: Messages were not deep copied when structured parsing was enabled and logs were forwarded to multiple destinations.
      Consequence: Only some of the received logs included the structured message.
      Fix: Modify config generation to deep copy messages prior to JSON parsing.
      Result: All received messages have structured messages.
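A plausible shape for such a fix at the fluentd level: fluentd's out_copy plugin exposes a copy_mode parameter controlling whether events are shared, shallow-copied, or deep-copied between stores. The sketch below is illustrative only; the label names are made up and the actual generated configuration may differ:

```
<match kubernetes.**>
  @type copy
  # copy_mode deep duplicates each event per <store>, so the JSON parser
  # in one pipeline cannot mutate the record seen by the other pipeline
  copy_mode deep
  <store>
    @type relabel
    @label @FLUENTDTEST        # hypothetical label for pipeline 1
  </store>
  <store>
    @type relabel
    @label @FLUENTDTEST2       # hypothetical label for pipeline 2
  </store>
</match>
```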
    • Bug Fix
    • High

      1) Deploy a CLF instance (a fake configuration, used to inspect the logs in the buffer) similar to:

       

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        creationTimestamp: "2023-02-17T07:46:36Z"
        generation: 6
        name: instance
        namespace: openshift-logging
        resourceVersion: "458342"
        uid: 6c328c84-978f-4eb7-aa08-19b189ad8e8d
      spec:
        outputs:
        - name: splunk-log-forwarder
          type: fluentdForward
          url: tcp://log-forwarder.openshift-logging.svc:24224
        - name: splunk-log-forwarder2
          type: fluentdForward
          url: tcp://log-forwarder2.openshift-logging.svc:24224
        pipelines:
        - inputRefs:
          - application
          name: fluentdtest
          outputRefs:
          - splunk-log-forwarder
          parse: json
        - inputRefs:
          - application
          name: fluentdtest2
          outputRefs:
          - splunk-log-forwarder2
          parse: json
       
      

      2) Deploy an application that sends logs in JSON format, for example by writing:

      echo '{"X-test-date": "hello this is a test", "level": "info", "message": "sample log message"}' >> /proc/1/fd/1
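With `parse: json` enabled, the collector is expected to lift this payload into the record's structured field. A minimal stand-in for that parsing step (plain Python, not the collector's actual code):

```python
import json

# The log line written by the test application in step 2
line = ('{"X-test-date": "hello this is a test", '
        '"level": "info", "message": "sample log message"}')

# With parse: json, the forwarded record should carry the parsed
# object under the "structured" key
record = {"message": line, "structured": json.loads(line)}

assert record["structured"]["X-test-date"] == "hello this is a test"
assert record["structured"]["message"] == "sample log message"
```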

      3) Open a shell in the collector pod running on the node where the application is deployed and inspect the buffer directories:

      $ oc rsh $collector
      # cd /var/lib/fluentd/
      # ls
      default  pos  retry_default  splunk_log_forwarder  splunk_log_forwarder2

      4) Check the buffer output for the first pipeline defined in the CLF instance (splunk_log_forwarder) and verify that the parsed message is inside the structured field:

      @timestamp?#2023-02-17T09:32:52.070980426+00:00?docker??container_id?@ee3012d556fe8989a74b08c988bbdcda3ccf3c6581a6c04ee9d44e6dc6606023?kubernetes??container_name?rails-postgresql-example?namespace_name?testjson?pod_name? rails-postgresql-example-1-krwnf?container_imageٚimage-registry.openshift-image-registry.svc:5000/testjson/rails-postgresql-example@sha256:3f7437bae8e63d429abcd65c56f204e7ad34620dfbc6209f373f9711848c8397?container_image_idٚimage-registry.openshift-image-registry.svc:5000/testjson/rails-postgresql-example@sha256:3f7437bae8e63d429abcd65c56f204e7ad34620dfbc6209f373f9711848c8397?pod_id?$a0209988-868f-461a-a6f6-0200c760dafb?pod_ip?10.129.2.20?host?,worker-0.adricluster.lab.psi.pnq2.redhat.com?labels??deployment?rails-postgresql-example-1?deploymentconfig?rails-postgresql-example?name?rails-postgresql-example?master_url?https://kubernetes.default.svc?namespace_id?$3cfbd87a-b61f-4004-8d42-9f02d1dce597?namespace_labels??kubernetes.io/metadata.name?testjson?flat_labels??%deployment=rails-postgresql-example-1?)deploymentconfig=rails-postgresql-example?name=rails-postgresql-example?level?unknown?hostname?,worker-0.adricluster.lab.psi.pnq2.redhat.com?pipeline_metadata??collector??ipaddr4?10.74.211.175?inputname?fluent-plugin-systemd?name?fluentd?received_at? 2023-02-17T09:32:52.071612+00:00?version?1.14.6 1.6.0?openshift??sequence̺?cluster_id?$89587cc4-a61e-4952-838c-39654b05f256?viaq_msg_id?0NzI2OWQ0NGYtZTBjOC00MjAxLTgyZDYtMTRiZDBjM2FmZGY0?log_type?application?structured??X-test-date?hello this is a test?level?info?message?sample log message

      5) Check the buffer output for the second pipeline defined in the CLF instance (splunk_log_forwarder2) and verify that the structured field is empty:

      @timestamp?#2023-02-17T09:32:51.616106139+00:00?docker??container_id?@ee3012d556fe8989a74b08c988bbdcda3ccf3c6581a6c04ee9d44e6dc6606023?kubernetes??container_name?rails-postgresql-example?namespace_name?testjson?pod_name? rails-postgresql-example-1-krwnf?container_imageٚimage-registry.openshift-image-registry.svc:5000/testjson/rails-postgresql-example@sha256:3f7437bae8e63d429abcd65c56f204e7ad34620dfbc6209f373f9711848c8397?container_image_idٚimage-registry.openshift-image-registry.svc:5000/testjson/rails-postgresql-example@sha256:3f7437bae8e63d429abcd65c56f204e7ad34620dfbc6209f373f9711848c8397?pod_id?$a0209988-868f-461a-a6f6-0200c760dafb?pod_ip?10.129.2.20?host?,worker-0.adricluster.lab.psi.pnq2.redhat.com?labels??deployment?rails-postgresql-example-1?deploymentconfig?rails-postgresql-example?name?rails-postgresql-example?master_url?https://kubernetes.default.svc?namespace_id?$3cfbd87a-b61f-4004-8d42-9f02d1dce597?namespace_labels??kubernetes.io/metadata.name?testjson?flat_labels??%deployment=rails-postgresql-example-1?)deploymentconfig=rails-postgresql-example?name=rails-postgresql-example?level?unknown?hostname?,worker-0.adricluster.lab.psi.pnq2.redhat.com?pipeline_metadata??collector??ipaddr4?10.74.211.175?inputname?fluent-plugin-systemd?name?fluentd?received_at? 2023-02-17T09:32:51.616733+00:00?version?1.14.6 1.6.0?openshift??sequence̹?cluster_id?$89587cc4-a61e-4952-838c-39654b05f256?viaq_msg_id?0YjQ5NDhhNjUtNDhmNS00NWExLTllOWEtM2Q4NjQxZGZlYmNl?log_type?application?structured???c?I?;J??
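The buffer chunks are msgpack-encoded, which is why the dumps above are only partially printable. Without a msgpack decoder, a crude way to tell the two cases apart is to check whether the parsed JSON keys appear after the `structured` marker in the raw bytes. This is a heuristic sketch only; the sample byte strings below merely mimic the two dumps:

```python
def structured_is_populated(chunk: bytes) -> bool:
    """Heuristic check on a raw fluentd buffer chunk: the structured
    field counts as populated if a parsed JSON key (here X-test-date)
    occurs after the 'structured' marker."""
    idx = chunk.find(b"structured")
    return idx >= 0 and b"X-test-date" in chunk[idx:]

# Byte strings mimicking step 4 (populated) and step 5 (empty)
good = b"log_type?application?structured??X-test-date?hello this is a test"
bad = b"log_type?application?structured???c?I?;J??"

assert structured_is_populated(good)
assert not structured_is_populated(bad)
```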

      These steps were verified on RHOL 5.5 and RHOL 5.6.

      In addition, I have run several tests varying the outputs, and the structured JSON logs are always present only in the first output defined in the CLF instance.
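This behavior is consistent with the cause noted in the release note above: when pipelines share a single record object, whichever JSON parser runs first mutates the record for everyone. A minimal Python model of the shared-mutation hazard and the deep-copy fix (illustrative only; the real collector is fluentd, so the exact failure mode differs):

```python
import copy
import json

def parse_json(record):
    # Stand-in for the collector's JSON parsing step: it moves the raw
    # payload into "structured", mutating the record in place
    record["structured"] = json.loads(record["message"])
    record["message"] = ""
    return record

RAW = '{"level": "info", "message": "sample log message"}'

# Buggy: both outputs reference the same record object
shared = {"message": RAW}
outputs = [shared, shared]
parse_json(outputs[0])
second_failed = False
try:
    parse_json(outputs[1])       # the raw message is already gone
except json.JSONDecodeError:
    second_failed = True
assert second_failed

# Fixed: deep copy per output before parsing
raw_record = {"message": RAW}
records = [copy.deepcopy(raw_record) for _ in range(2)]
for rec in records:
    parse_json(rec)
assert all(rec["structured"]["level"] == "info" for rec in records)
```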

    • Log Collection - Sprint 233
    • Important

      Description of problem:

      After defining two different outputs in the ClusterLogForwarder instance with the parameter "parse: json" enabled, the content of the JSON logs is placed inside the structured field only for the first output. In the second output, the structured field is empty.

      Version-Release number of selected component (if applicable):

      RHOL 5.5.7

      RHOL 5.6.2

      Actual results:

      JSON logs are not sent to the second output

      Expected results:

      JSON logs should be sent to all outputs

       

       

       

            jcantril@redhat.com Jeffrey Cantrill
            acandelp Adrian Candel
            Votes: 0
            Watchers: 3
