Project: OpenShift Logging
Issue: LOG-7029

Collector pods raise conversion_failed error when forwarding auditd logs using the Otel data model.


    • Doc Text:
      Before this update, auditd log messages with multiple msg keys caused incorrect parsing, as all msg values were returned as an array. This behavior conflicted with the standard auditd log format, where the first msg field follows the structure msg=audit(TIMESTAMP:ID) and contains essential metadata. With this update, only the first msg value is used during parsing, which resolves the issue and ensures accurate extraction of audit metadata.
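
      For illustration, an auditd record of the affected shape carries two msg keys: the outer msg=audit(TIMESTAMP:ID) envelope and an inner quoted msg payload. The following sample line is invented, not taken from the affected cluster:

          type=USER_AUTH msg=audit(1744609276.431:12345): pid=2501 uid=0 auid=1000 ses=3 msg='op=PAM:authentication grantors=pam_unix acct="root" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'

      A plain key/value parse of such a line returns msg as a two-element array; the fix keeps only the first value, msg=audit(1744609276.431:12345), which carries the timestamp and event ID.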
    • Doc Type: Bug Fix
    • Severity: Moderate

      Description of problem:

      Collector pods sometimes raise the errors below when forwarding auditd logs with the Otel data model:

      2025-04-14T05:41:16.431666Z ERROR transform{component_kind="transform" component_id=output_lokistack_audit_auditd component_type=remap}: vector::internal_events::remap: Mapping failed with event. error="function call error for \"slice\" at (1761:1803): function call error for \"find\" at (1776:1794): expected string, got [string, string]" error_type="conversion_failed" stage="processing" internal_log_rate_limit=true
      2025-04-14T05:41:32.783153Z ERROR sink{component_kind="sink" component_id=output_lokistack_audit component_type=http}: vector::sinks::util::retries: Not retriable; dropping the request. reason="Http status: 422 Unprocessable Entity" internal_log_rate_limit=true
      2025-04-14T05:41:32.783183Z ERROR sink{component_kind="sink" component_id=output_lokistack_audit component_type=http}: vector_common::internal_event::service: Service call failed. No retries or retries exhausted. error=None request_id=52 error_type="request_failed" stage="sending" internal_log_rate_limit=true
      2025-04-14T05:41:32.783194Z ERROR sink{component_kind="sink" component_id=output_lokistack_audit component_type=http}: vector_common::internal_event::component_events_dropped: Events dropped intentional=false count=1 reason="Service call failed. No retries or retries exhausted." internal_log_rate_limit=true
      2025-04-14T05:41:34.882218Z ERROR transform{component_kind="transform" component_id=output_lokistack_audit_auditd component_type=remap}: vector::internal_events::remap: Mapping failed with event. error="function call error for \"slice\" at (1761:1803): function call error for \"find\" at (1776:1794): expected string, got [string, string]" error_type="conversion_failed" stage="processing" internal_log_rate_limit=true
      2025-04-14T05:41:51.709006Z ERROR sink{component_kind="sink" component_id=output_lokistack_audit component_type=http}: vector::sinks::util::retries: Not retriable; dropping the request. reason="Http status: 422 Unprocessable Entity" internal_log_rate_limit=true
      2025-04-14T05:41:51.709040Z ERROR sink{component_kind="sink" component_id=output_lokistack_audit component_type=http}: vector_common::internal_event::service: Service call failed. No retries or retries exhausted. error=None request_id=54 error_type="request_failed" stage="sending" internal_log_rate_limit=true
      2025-04-14T05:41:51.709054Z ERROR sink{component_kind="sink" component_id=output_lokistack_audit component_type=http}: vector_common::internal_event::component_events_dropped: Events dropped intentional=false count=1 reason="Service call failed. No retries or retries exhausted." internal_log_rate_limit=true
      2025-04-14T05:44:18.442732Z ERROR sink{component_kind="sink" component_id=output_lokistack_audit component_type=http}: vector::sinks::util::retries: Not retriable; dropping the request. reason="Http status: 400 Bad Request" internal_log_rate_limit=true

      vector.toml: [^vector.toml]
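
      The first error above comes from Vector's remap (VRL) transform: find() expects a string, but duplicate msg keys make the parsed value an array. A minimal VRL sketch of the failure mode, assuming the generated program key/value-parses the record before locating the audit( marker (the field and variable names here are illustrative, not the exact generated config):

          # .message holds a raw auditd line with two msg keys
          parsed = parse_key_value!(.message)
          # Duplicate keys are collected, so parsed.msg becomes an array:
          # ["audit(1744609276.431:12345):", "op=PAM:authentication ..."]
          offset = find(parsed.msg, "audit(")  # fails: expected string, got [string, string]

      The subsequent 422/400 sink errors suggest that related payloads are also rejected by LokiStack, which is consistent with the missing auditd logs noted under Additional info.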

      Version-Release number of selected component (if applicable):

      cluster-logging.v6.2.1

      cluster-logging.v6.1.5

      How reproducible:

      Always

      Steps to Reproduce:

      1. Forward logs to LokiStack with the data model set to Otel (see the ClusterLogForwarder sketch below)
      2. Check the logs of the collector pods
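
      A minimal ClusterLogForwarder sketch for step 1, assuming a LokiStack named logging-loki and a service account named collector in the openshift-logging namespace (all resource names are placeholders):

          apiVersion: observability.openshift.io/v1
          kind: ClusterLogForwarder
          metadata:
            name: collector
            namespace: openshift-logging
          spec:
            serviceAccount:
              name: collector
            outputs:
              - name: lokistack-otel
                type: lokiStack
                lokiStack:
                  target:
                    name: logging-loki
                    namespace: openshift-logging
                  dataModel: Otel
                  authentication:
                    token:
                      from: serviceAccount
            pipelines:
              - name: audit
                inputRefs:
                  - audit
                outputRefs:
                  - lokistack-otel

      For step 2, the collector logs can be inspected with, for example, oc logs -n openshift-logging daemonset/collector (the daemonset is named after the ClusterLogForwarder).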

      Actual results:

      Collector pods raise errors.

      Expected results:

      No error in collector pods.

      Additional info:

      Although some auditd logs do reach LokiStack, comparing the auditd logs on the node with those in LokiStack shows that many auditd logs are not collected.

              Assignee: Vitalii Parfonov
              Reporter: Qiaoling Tang