OpenShift Logging / LOG-3813

Collector not complying with the tlsSecurityProfile config after enabling the feature gate.


    • Epic: OBSDA-160 - Comply with OCP cluster-wide cryptographic policies
    • Sprint: Log Collection - Sprint 233

      Description of problem:

      After enabling the tlsSecurityProfile feature gate in the ClusterLogForwarder (CLF), the minimum TLS version and the ciphers from the tlsSecurityProfile are not added to the collector sink configuration, regardless of whether the profile is specified cluster-wide, in the ClusterLogForwarder spec, or on the output.

      Version-Release number of selected component (if applicable):

      cluster-logging.v5.7.0

      Server Version: 4.13.0-0.nightly-2023-03-14-053612

      CPaaS index: quay.io/openshift-qe-optional-operators/aosqe-index:log5.7

      How reproducible:

      Always

      Steps to Reproduce:

      *Set a tlsSecurityProfile in the global API server configuration.

      oc get apiserver/cluster -o yaml
      spec:
        audit:
          profile: Default
        tlsSecurityProfile:
          old: {}
          type: Old
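
      For reference, the same profile can be set non-interactively with a merge patch (the JSON mirrors the spec excerpt above):

      oc patch apiserver/cluster --type=merge -p '{"spec":{"tlsSecurityProfile":{"type":"Old","old":{}}}}'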
      

      *Create a secret for forwarding logs to CloudWatch.

      export REGION=us-east-2
      export ACCESS_KEY_ID=$(oc get secret aws-creds -n kube-system -o json | jq -r '.data.aws_access_key_id' | base64 -d)
      export SECRET_ACCESS_KEY=$(oc get secret aws-creds -n kube-system -o json | jq -r '.data.aws_secret_access_key' | base64 -d)
      oc -n openshift-logging create secret generic cw-secret \
      --from-literal=aws_access_key_id="${ACCESS_KEY_ID}" \
      --from-literal=aws_secret_access_key="${SECRET_ACCESS_KEY}"
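
      A quick sanity check that the secret landed with both keys (same jq style as the commands above):

      oc -n openshift-logging get secret cw-secret -o json | jq -r '.data | keys[]'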
      

      *Create a ClusterLogForwarder with the tlsSecurityProfile feature gate enabled to forward logs to CloudWatch.

      apiVersion: "logging.openshift.io/v1"
      kind: ClusterLogForwarder
      metadata:
        name: instance
        namespace: openshift-logging
        annotations:
          logging.openshift.io/preview-tls-security-profile: enabled
      spec:
        outputs:
        - name: cw
          type: cloudwatch
          cloudwatch:
            groupBy: logType
            region: us-east-2
          secret:
            name: cw-secret
        pipelines:
          - name: all-logs
            inputRefs:
              - infrastructure
              - audit
              - application
            outputRefs:
              - cw
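
      Assuming the manifest above is saved as clf.yaml, apply it and confirm the feature-gate annotation is set:

      oc apply -f clf.yaml
      oc -n openshift-logging get clusterlogforwarder instance -o jsonpath='{.metadata.annotations.logging\.openshift\.io/preview-tls-security-profile}'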

      *Create a ClusterLogging instance.

      apiVersion: "logging.openshift.io/v1"
      kind: "ClusterLogging"
      metadata:
        name: "instance" 
        namespace: "openshift-logging"
      spec:
        managementState: "Managed"  
        collection:
          type: "vector"

      *Extract and check the Vector config: the CloudWatch sink does not have the minimum TLS version and the ciphers for the Old profile set. There is no TLS config for the sink.
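
      In this release the rendered Vector configuration appears to be stored in the collector-config secret (location assumed for 5.7) and can be dumped with oc extract:

      oc -n openshift-logging extract secret/collector-config --to=-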

      #Cloudwatch Logs
      [sinks.cw]
      type = "aws_cloudwatch_logs"
      inputs = ["cw_normalize_group_and_streams"]
      region = "us-east-2"
      compression = "none"
      group_name = "{{ group_name }}"
      stream_name = "{{ stream_name }}"
      auth.access_key_id = "REDACTED"
      auth.secret_access_key = "REDACTED"
      encoding.codec = "json"
      request.concurrency = 2
      healthcheck.enabled = false
      [transforms.add_nodename_to_metric]
      type = "remap"
      inputs = ["internal_metrics"]
      source = '''
      .tags.hostname = get_env_var!("VECTOR_SELF_NODE_NAME")
      '''
      [sinks.prometheus_output]
      type = "prometheus_exporter"
      inputs = ["add_nodename_to_metric"]
      address = "[::]:24231"
      default_namespace = "collector"
      [sinks.prometheus_output.tls]
      enabled = true
      key_file = "/etc/collector/metrics/tls.key"
      crt_file = "/etc/collector/metrics/tls.crt"
      min_tls_version = "VersionTLS10"
      ciphersuites = "TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,ECDHE-ECDSA-AES128-GCM-SHA256,ECDHE-RSA-AES128-GCM-SHA256,ECDHE-ECDSA-AES256-GCM-SHA384,ECDHE-RSA-AES256-GCM-SHA384,ECDHE-ECDSA-CHACHA20-POLY1305,ECDHE-RSA-CHACHA20-POLY1305,DHE-RSA-AES128-GCM-SHA256,DHE-RSA-AES256-GCM-SHA384,DHE-RSA-CHACHA20-POLY1305,ECDHE-ECDSA-AES128-SHA256,ECDHE-RSA-AES128-SHA256,ECDHE-ECDSA-AES128-SHA,ECDHE-RSA-AES128-SHA,ECDHE-ECDSA-AES256-SHA384,ECDHE-RSA-AES256-SHA384,ECDHE-ECDSA-AES256-SHA,ECDHE-RSA-AES256-SHA,DHE-RSA-AES128-SHA256,DHE-RSA-AES256-SHA256,AES128-GCM-SHA256,AES256-GCM-SHA384,AES128-SHA256,AES256-SHA256,AES128-SHA,AES256-SHA,DES-CBC3-SHA"
      

      *Try adding the tlsSecurityProfile config to the CLF spec (first manifest below) and also to the output (second manifest); the issue persists in both cases.

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        annotations:
          logging.openshift.io/preview-tls-security-profile: enabled
        creationTimestamp: "2023-03-17T09:49:49Z"
        generation: 4
        name: instance
        namespace: openshift-logging
        resourceVersion: "67529"
        uid: 357f6ab0-c1b1-4b2c-98b2-cfa6a0bd679b
      spec:
        outputs:
        - cloudwatch:
            groupBy: logType
            region: us-east-2
          name: cw
          secret:
            name: cw-secret
          type: cloudwatch
        pipelines:
        - inputRefs:
          - infrastructure
          - audit
          - application
          name: all-logs
          outputRefs:
          - cw
        tlsSecurityProfile:
          type: Old
      status:
        conditions:
        - lastTransitionTime: "2023-03-17T10:38:04Z"
          status: "True"
          type: Ready
        outputs:
          cw:
          - lastTransitionTime: "2023-03-17T10:38:04Z"
            status: "True"
            type: Ready
        pipelines:
          all-logs:
          - lastTransitionTime: "2023-03-17T10:38:04Z"
            status: "True"
            type: Ready
      ---
      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        annotations:
          logging.openshift.io/preview-tls-security-profile: enabled
        creationTimestamp: "2023-03-17T09:49:49Z"
        generation: 3
        name: instance
        namespace: openshift-logging
        resourceVersion: "65845"
        uid: 357f6ab0-c1b1-4b2c-98b2-cfa6a0bd679b
      spec:
        outputs:
        - cloudwatch:
            groupBy: logType
            region: us-east-2
          name: cw
          secret:
            name: cw-secret
          tls:
            securityProfile:
              type: Old
          type: cloudwatch
        pipelines:
        - inputRefs:
          - infrastructure
          - audit
          - application
          name: all-logs
          outputRefs:
          - cw
      status:
        conditions:
        - lastTransitionTime: "2023-03-17T10:33:49Z"
          status: "True"
          type: Ready
        outputs:
          cw:
          - lastTransitionTime: "2023-03-17T10:33:49Z"
            status: "True"
            type: Ready
        pipelines:
          all-logs:
          - lastTransitionTime: "2023-03-17T10:33:49Z"
            status: "True"
            type: Ready
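
      After each CLF change the rendered sink can be re-checked directly; in this reproduction the [sinks.cw] section never gains a tls sub-table (secret name as assumed above):

      oc -n openshift-logging extract secret/collector-config --to=- | grep -A 3 '\[sinks.cw'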

      Expected results:

      The collector must comply with the configured tlsSecurityProfile, honoring the order of precedence from highest to lowest: CLF output, CLF spec, then the cluster-wide config.
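
      As a minimal illustration of the intended precedence (field paths taken from the manifests above; the comments describe the expectation, not the current behavior), an output-level profile should override the spec-level one, which in turn should override the cluster-wide config:

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: instance
        namespace: openshift-logging
        annotations:
          logging.openshift.io/preview-tls-security-profile: enabled
      spec:
        tlsSecurityProfile:
          type: Intermediate        # spec-level fallback for all outputs
        outputs:
        - name: cw
          type: cloudwatch
          cloudwatch:
            groupBy: logType
            region: us-east-2
          secret:
            name: cw-secret
          tls:
            securityProfile:
              type: Old             # output-level, should win for this sink
        pipelines:
        - name: all-logs
          inputRefs: [application]
          outputRefs: [cw]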

      Additional info:

      The CLO image is at commit:

      sh-5.1# podman inspect registry.redhat.io/openshift-logging/cluster-logging-rhel8-operator@sha256:a90762297595517f0b4bc2109b3a3ebefdea92e31d9da292d3966486f2c42f5c | grep -i commit
                          "io.openshift.build.commit.id": "c13761a7a2e58a23608484ab0250f49411df8c22",
                          "io.openshift.build.commit.url": "https://github.com/openshift/cluster-logging-operator/commit/c13761a7a2e58a23608484ab0250f49411df8c22",

              Assignee: Jeffrey Cantrill (jcantril@redhat.com)
              Reporter: Ishwar Kanse (rhn-support-ikanse)