OpenShift Logging / LOG-2793

[Vector] OVN audit logs are missing the level field.


    • Sprint: Log Collection - Sprint 221, Log Collection - Sprint 222

      Version of components:

      Clusterlogging.v5.5.0

      Elasticsearch-operator.v5.5.0

      Server Version: 4.10.0-0.nightly-2022-06-08-150219

      Kubernetes Version: v1.23.5+3afdacb

      Description of the problem:

      When using Vector as the collector, the OVN audit logs shipped to the log store are missing the level field.
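
      For reference, the active collector type can be read from the ClusterLogging instance (a quick sketch; it assumes the instance shown in step 2 below):

      oc get clusterlogging instance -n openshift-logging -o jsonpath='{.spec.collection.logs.type}'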

      Steps to reproduce the issue:

      1 Deploy an OCP cluster with NetworkType OVNKubernetes.
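
      The cluster network type can be confirmed beforehand (a small verification sketch using a standard oc query):

      oc get network.config/cluster -o jsonpath='{.status.networkType}'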

      2 Create a ClusterLogging instance.

      apiVersion: "logging.openshift.io/v1"
      kind: "ClusterLogging"
      metadata:
        name: "instance"
        namespace: "openshift-logging"
      spec:
        managementState: "Managed"  
        logStore:
          type: "elasticsearch"  
          retentionPolicy:
            application:
              maxAge: 10h
            infra:
              maxAge: 10h
            audit:
              maxAge: 10h
          elasticsearch:
            nodeCount: 1
            storage: {}
            resources:
              limits:
                memory: "4Gi"
              requests:
                memory: "1Gi"
            proxy:
              resources:
                limits:
                  memory: 256Mi
                requests:
                  memory: 256Mi
            redundancyPolicy: "ZeroRedundancy"
        visualization:
          type: "kibana"  
          kibana:
            replicas: 1
        collection:
          logs:
            type: "vector"  
            vector: {}
      
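      Before continuing, wait for the logging stack to become ready (a sketch; the component=collector label selector is an assumption about how the Vector collector pods are labelled):

      oc get pods -n openshift-logging
      oc wait --for=condition=Ready pod -l component=collector -n openshift-logging --timeout=300s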

      3 Create a ClusterLogForwarder instance to forward all log types to the default log store.

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        pipelines:
        - name: forward-all-log-types
          inputRefs:
          - infrastructure
          - application
          - audit
          outputRefs:
          - default
      
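      After creating the forwarder, its status conditions and the collector rollout can be checked (a sketch; the daemonset name collector is an assumption):

      oc get clusterlogforwarder instance -n openshift-logging -o yaml
      oc rollout status ds/collector -n openshift-logging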

      4 Create a project test1 with OVN ACL audit logging enabled, along with a test app.

      oc new-project test1
      oc annotate ns test1 k8s.ovn.org/acl-logging='{ "deny": "alert", "allow": "alert" }'
      oc create -f https://raw.githubusercontent.com/openshift/verification-tests/master/testdata/networking/list_for_pods.json -n test1
      
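      The annotation can be verified on the namespace (a quick sketch):

      oc get ns test1 -o yaml | grep acl-logging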

      5 Apply the following network policies in the test1 project.

      cat ovn.yml 
      ---
      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: default-deny
      spec:
        podSelector:
      ---
      kind: NetworkPolicy
      apiVersion: networking.k8s.io/v1
      metadata:
        name: allow-same-namespace
      spec:
        podSelector:
        ingress:
        - from:
          - podSelector: {}
      ---
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: bad-np
      spec:
        egress:
        - {}
        podSelector:
          matchLabels:
            never-gonna: match
        policyTypes:
        - Egress
      
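      Apply the manifest and confirm the policies exist (a sketch; it assumes the three policies above are saved as ovn.yml):

      oc apply -f ovn.yml -n test1
      oc get networkpolicy -n test1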

      6 Access the pod from another pod in the same project (test1) to see 'allow' ACL messages.

      oc exec <pod name 2 in test1> -- curl <IP address of pod 1 in test1>:8080

      $ oc get pods -o wide
      NAME            READY   STATUS    RESTARTS   AGE    IP            NODE                                         NOMINATED NODE   READINESS GATES
      test-rc-2m5qt   1/1     Running   0          2m4s   10.130.2.48   ip-10-0-203-60.us-east-2.compute.internal    <none>           <none>
      test-rc-hs42q   1/1     Running   0          2m4s   10.128.2.53   ip-10-0-177-143.us-east-2.compute.internal   <none>           <none>
       
      $ oc rsh test-rc-hs42q
      ~ $ curl 10.130.2.48:8080
      Hello OpenShift!
      
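      The raw ACL messages can also be read from the node that hosts the target pod, independent of the collector (a sketch; ovn/acl-audit-log.log is the path used by OVN-Kubernetes ACL logging):

      oc adm node-logs ip-10-0-203-60.us-east-2.compute.internal --path=ovn/acl-audit-log.log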

      7 Check the generated logs in the audit index in Kibana. The indexed document below has no level field, even though the OVN message itself contains |INFO| and severity=alert.

      {
        "_index": "audit-000001",
        "_type": "_doc",
        "_id": "NzA5MzgzODMtMmYyYS00MWY0LThmYzctNmI3YWJhZjBjM2Jl",
        "_version": 1,
        "_score": null,
        "_source":
      {     "log_type": "audit",     "@timestamp": "2022-07-07T06:39:35.541133789Z",     "host": "collector-b7jfl",     "write_index": "audit-write",     "message": "2022-07-07T06:39:27.317Z|00025|acl_log(ovn_pinctrl0)|INFO|name=\"test1_allow-same-namespace_0\", verdict=allow, severity=alert: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:01,dl_dst=0a:58:0a:82:02:30,nw_src=10.128.2.53,nw_dst=10.130.2.48,nw_tos=0,nw_ecn=0,nw_ttl=63,tp_src=59112,tp_dst=8080,tcp_flags=ack"   }
      ,
        "fields":
      {     "@timestamp": [       "2022-07-07T06:39:35.541Z"     ]   }
      ,
        "highlight":
      {     "message": [       "2022-07-07T06:39:27.317Z|00025|acl_log(ovn_pinctrl0)|INFO|name=\"test1_allow-same-namespace_0\", verdict=allow, severity=alert: tcp,vlan_tci=0x0000,dl_src=0a:58:0a:82:02:01,dl_dst=0a:58:0a:82:02:30,nw_src=10.128.2.53,nw_dst=10.130.2.48,nw_tos=0,nw_ecn=0,nw_ttl=63,tp_src=59112,tp_dst=8080,@kibana-highlighted-field@tcp_flags@/kibana-highlighted-field@=@kibana-highlighted-field@ack@/kibana-highlighted-field@"     ]   }
      ,
        "sort": [
          1657175975541
        ]
      }
      
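      To confirm the field is absent across the index rather than just in this document, Elasticsearch can be queried directly (a sketch; es_util is the curl wrapper in the elasticsearch container, and the component=elasticsearch label selector is an assumption):

      ES_POD=$(oc get pods -n openshift-logging -l component=elasticsearch -o jsonpath='{.items[0].metadata.name}')
      oc exec -n openshift-logging -c elasticsearch $ES_POD -- \
        es_util --query='audit*/_count' -X GET -d '{"query":{"bool":{"must_not":{"exists":{"field":"level"}}}}}'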

       
