OpenShift Logging / LOG-4681

kubeAPIAudit policies don't work on audit logs from auditWebhook


    • NEW
    • OBSDA-344 - Audit log forwarding produces excessive data, configuration for prefiltering is needed
    • VERIFIED
    • Release Note Not Required
    • Log Collection - Sprint 243, Log Collection - Sprint 244

      Description of problem:

      Create a ClusterLogForwarder (CLF) with the YAML below to filter audit logs from a HyperShift hosted cluster:

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: http-to-cloudwatch
        namespace: clusters-hypershift-ci-20510
      spec:
        filters:
        - kubeAPIAudit:
            omitResponseCodes:
            - 200
            omitStages:
            - RequestReceived
            rules:
            - level: RequestResponse
              resources:
              - group: ""
                resources:
                - pods
            - level: Metadata
              resources:
              - group: oauth.openshift.io
                resources:
                - oauthclients
          name: my-policy
          type: kubeAPIAudit
        inputs:
        - name: input-http
          receiver:
            http:
              format: kubeAPIAudit
              receiverPort:
                name: httpserver
                port: 443
                targetPort: 8443
        outputs:
        - cloudwatch:
            groupBy: logType
            groupPrefix: qitang-audit-test
            region: us-east-2
          name: cloudwatch
          secret:
            name: cloudwatch-credentials
          type: cloudwatch
        pipelines:
        - filterRefs:
          - my-policy
          inputRefs:
          - input-http
          name: to-cloudwatch
          outputRefs:
          - cloudwatch
        serviceAccountName: clf-collector
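
      The CLF above wires the pipeline together by name: `filterRefs`, `inputRefs`, and `outputRefs` must each match a declared filter, input, or output. As a minimal sanity-check sketch (the helper and dict literal are illustrative, not part of the operator), the wiring can be verified like this:

      ```python
      # Sketch: check that the pipeline refs in the CLF above resolve to
      # declared filters/inputs/outputs. Names are taken from the YAML.
      spec = {
          "filters": [{"name": "my-policy"}],
          "inputs": [{"name": "input-http"}],
          "outputs": [{"name": "cloudwatch"}],
          "pipelines": [{
              "name": "to-cloudwatch",
              "filterRefs": ["my-policy"],
              "inputRefs": ["input-http"],
              "outputRefs": ["cloudwatch"],
          }],
      }

      def unresolved_refs(spec: dict) -> list:
          """Return any pipeline refs that do not match a declared name."""
          declared = {k: {o["name"] for o in spec.get(k, [])}
                      for k in ("filters", "inputs", "outputs")}
          missing = []
          for p in spec.get("pipelines", []):
              for key, kind in (("filterRefs", "filters"),
                                ("inputRefs", "inputs"),
                                ("outputRefs", "outputs")):
                  missing += [r for r in p.get(key, []) if r not in declared[kind]]
          return missing

      print(unresolved_refs(spec))  # [] -> wiring is consistent
      ```

      The wiring here is consistent, so the filter not being applied is not a reference problem.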

      Then check the audit logs in CloudWatch; the filters are not applied to them:

      {
          "@timestamp": "",
          "annotations": {
              "authorization.k8s.io/decision": "allow",
              "authorization.k8s.io/reason": ""
          },
          "auditID": "7c8c453f-bf26-4619-8f8b-46bc7d082e5a",
          "group_name": "qitang-audit-test.audit",
          "hostname": "ip-10-0-86-156.us-east-2.compute.internal",
          "k8s_audit_level": "Metadata",
          "level": "default",
          "log_type": "audit",
          "objectRef": {
              "apiVersion": "v1",
              "resource": "pods"
          },
          "openshift": {
              "cluster_id": "621172c6-bc72-45b0-91e6-45ffae8ce18b",
              "sequence": 141
          },
          "requestReceivedTimestamp": "2023-10-18T08:17:20.165324Z",
          "requestURI": "/api/v1/pods?allowWatchBookmarks=true&resourceVersion=216450&timeout=6m29s&timeoutSeconds=389&watch=true",
          "responseStatus": {
              "code": 200,
              "metadata": {}
          },
          "sourceIPs": [
              "10.129.2.68"
          ],
          "stage": "ResponseComplete",
          "stageTimestamp": "2023-10-18T08:23:49.166758Z",
          "stream_name": "ip-10-0-86-156.us-east-2.compute.internal.k8s-audit.log",
          "user": {
              "groups": [
                  "system:masters",
                  "system:authenticated"
              ],
              "username": "system:admin"
          },
          "userAgent": "openshift-apiserver/v0.0.0 (linux/amd64) kubernetes/$Format",
          "verb": "watch"
      } 
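
      To make the failure concrete: the record above has `responseStatus.code` 200 and stage `ResponseComplete`, so `omitResponseCodes` alone should have dropped it. A minimal Python sketch of that part of the policy (the helper is hypothetical, for illustration only):

      ```python
      # Sketch: apply the omitStages / omitResponseCodes parts of the
      # kubeAPIAudit policy from the CLF above to the CloudWatch record.
      OMIT_STAGES = {"RequestReceived"}
      OMIT_RESPONSE_CODES = {200}

      def should_drop(event: dict) -> bool:
          """Return True if the omit rules would drop this audit event."""
          if event.get("stage") in OMIT_STAGES:
              return True
          code = (event.get("responseStatus") or {}).get("code")
          return code in OMIT_RESPONSE_CODES

      record = {
          "stage": "ResponseComplete",
          "responseStatus": {"code": 200, "metadata": {}},
          "objectRef": {"apiVersion": "v1", "resource": "pods"},
      }
      print(should_drop(record))  # True -> this record should not reach CloudWatch
      ```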

      The logs from the HTTP receiver don't have the field `"kind":"Event"`, but in Vector, this field is used to identify the audit logs to filter:

      [transforms.to_cloudwatch_user_defined]
      type = "remap"
      inputs = ["input-http_input"]
      source = '''
        if is_object(.) && .kind == "Event" && .apiVersion == "audit.k8s.io/v1" {
          res = if is_null(.objectRef.resource) { "" } else { string!(.objectRef.resource) }
          sub = if is_null(.objectRef.subresource) { "" } else { string!(.objectRef.subresource) }
          namespace = if is_null(.objectRef.namespace) { "" } else { string!(.objectRef.namespace) }
          username = if is_null(.user.username) { "" } else { string!(.user.username) }
          if sub != "" { res = res + "/" + sub }
          if includes(["RequestReceived"], .stage) { # Policy OmitStages
            .level = "None"
          } else if includes([200], .responseStatus.code) { # Omit by response code.
            .level = "None"
          } else if ((((is_null(.objectRef.apiGroup) || string!(.objectRef.apiGroup) == "") && match(res, r'^(pods)$'))) && true) {
            .level = "RequestResponse"
          } else if (((.objectRef.apiGroup == "oauth.openshift.io" && match(res, r'^(oauthclients)$'))) && true) {
            .level = "Metadata"
          } else {
            # No rule matched, apply default rules for system events.
            if match(username, r'^$|^system:.*') { # System events
              readonly = r'get|list|watch|head|options'
              if match(string!(.verb), readonly) {
        	.level = "None" # Drop read-only system events.
              } else if ((int(.responseStatus.code) < 300 ?? true) && starts_with(username, "system:serviceaccount:"+namespace)) {
        	.level = "None" # Drop write events by service account for same namespace as resource or for non-namespaced resource.
              }
              if .level == "RequestResponse" {
        	.level = "Request" # Downgrade RequestResponse system events.
              }
            }
          }
          # Update the event
          if .level == "None" {
            abort
          } else {
            if .level == "Metadata" {
              del(.responseObject)
              del(.requestObject)
            } else if .level == "Request" {
              del(.responseObject)
            }
          }
        }
      ''' 
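
      The root cause is the outer guard in that remap: the whole body only runs when `.kind == "Event"` and `.apiVersion == "audit.k8s.io/v1"`. Events arriving via the HTTP receiver lack those fields, so every record falls through untouched. A small Python model of the guard (names are illustrative, assuming events are parsed into dicts):

      ```python
      def is_k8s_audit(event) -> bool:
          """Mirror of the VRL guard: only dict events carrying the audit
          envelope fields are treated as Kubernetes audit events."""
          return (
              isinstance(event, dict)
              and event.get("kind") == "Event"
              and event.get("apiVersion") == "audit.k8s.io/v1"
          )

      original = {"kind": "Event", "apiVersion": "audit.k8s.io/v1", "verb": "watch"}
      from_http_receiver = {"verb": "watch"}  # envelope fields stripped en route

      print(is_k8s_audit(original))            # True  -> filter applies
      print(is_k8s_audit(from_http_receiver))  # False -> filter silently skipped
      ```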

      In the original log from the HyperShift cluster, the field `"kind":"Event"` is present:

      # oc -n clusters-hypershift-ci-20510 logs kube-apiserver-845984dd45-zdgqc -c audit-logs |grep pods |grep "2023-10-18T08:23:49.166758Z"
      {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"7c8c453f-bf26-4619-8f8b-46bc7d082e5a","stage":"ResponseComplete","requestURI":"/api/v1/pods?allowWatchBookmarks=true\u0026resourceVersion=216450\u0026timeout=6m29s\u0026timeoutSeconds=389\u0026watch=true","verb":"watch","user":{"username":"system:admin","groups":["system:masters","system:authenticated"]},"sourceIPs":["10.129.2.68"],"userAgent":"openshift-apiserver/v0.0.0 (linux/amd64) kubernetes/$Format","objectRef":{"resource":"pods","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2023-10-18T08:17:20.165324Z","stageTimestamp":"2023-10-18T08:23:49.166758Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}} 
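
      Since the envelope fields are present at the source, one possible direction (a sketch only, not an agreed fix; the heuristic and helper name are assumptions) is to restore `kind`/`apiVersion` on HTTP-receiver events before the user-defined filter runs, so the guard matches again:

      ```python
      def restore_audit_envelope(event: dict) -> dict:
          """Hypothetical normalization: re-add the audit envelope fields
          dropped on the HTTP-receiver path, when the record looks like a
          Kubernetes audit event (has auditID and stage)."""
          if "auditID" in event and "stage" in event and "kind" not in event:
              event = dict(event, kind="Event", apiVersion="audit.k8s.io/v1")
          return event

      stripped = {"auditID": "7c8c453f", "stage": "ResponseComplete", "verb": "watch"}
      restored = restore_audit_envelope(stripped)
      print(restored["kind"], restored["apiVersion"])
      ```

      With the envelope restored, the existing VRL guard would match and the kubeAPIAudit rules would apply unchanged.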

      Version-Release number of selected component (if applicable):

      openshift-logging/cluster-logging-rhel9-operator/images/v5.8.0-188

      How reproducible:

      Always

      Steps to Reproduce:

      Follow the steps in https://polarion.engineering.redhat.com/polarion/#/project/OSE/workitem?id=OCP-67713 

      Actual results:

      kubeAPIAudit policies are not taking effect when forwarding audit logs from AuditWebhook.

      Expected results:

      kubeAPIAudit policies should take effect when forwarding audit logs from AuditWebhook.

      Additional info:

              rhn-engineering-aconway Alan Conway
              qitang@redhat.com Qiaoling Tang