LOG-4572: collection DaemonSet violates PodSecurity in hypershift hosted project


      Description of problem:

      The HyperShift hosted cluster project carries the label pod-security.kubernetes.io/enforce: restricted, and the collection DaemonSet violates PodSecurity when it is deployed into that project.
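
      The enforcement can be confirmed by checking the pod-security labels on the hosted cluster project, for example (a quick check, assuming ${hosted_cluster_project} is set to the hosted control plane namespace):

      # Show the pod-security admission labels on the hosted cluster project
      oc get ns ${hosted_cluster_project} -o jsonpath='{.metadata.labels}{"\n"}'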

      Events:
        Type     Reason        Age                 From                  Message
        ----     ------        ----                ----                  -------
        Normal   CreateObject  20m                 clusterlogforwarder   CreateObject DaemonSet clusters-hypershift-ci-24808/http-to-cloudwatch
        Warning  FailedCreate  20m                 daemonset-controller  Error creating: pods "http-to-cloudwatch-d6wsk" is forbidden: violates PodSecurity "restricted:latest": seLinuxOptions (container "collector" set forbidden securityContext.seLinuxOptions: type "spc_t"), unrestricted capabilities (container "collector" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "varlogcontainers", "varlogpods", "varlogjournal", "varlogaudit", "varlogovn", "varlogoauthapiserver", "varlogoauthserver", "varlogopenshiftapiserver", "varlogkubeapiserver", "datadir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or container "collector" must set securityContext.runAsNonRoot=true)
        Warning  FailedCreate  20m                 daemonset-controller  Error creating: pods "http-to-cloudwatch-fd9dd" is forbidden: violates PodSecurity "restricted:latest": seLinuxOptions (container "collector" set forbidden securityContext.seLinuxOptions: type "spc_t"), unrestricted capabilities (container "collector" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "varlogcontainers", "varlogpods", "varlogjournal", "varlogaudit", "varlogovn", "varlogoauthapiserver", "varlogoauthserver", "varlogopenshiftapiserver", "varlogkubeapiserver", "datadir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or container "collector" must set securityContext.runAsNonRoot=true)
      

      How reproducible:

      Always

      Steps to Reproduce:

      # On the management cluster, create the ClusterLogForwarder under the hosted cluster project.
      1) oc project ${hosted_cluster_project}
      2) Grant the cluster-logging-operator service account the edit role in ${hosted_cluster_project}:
      oc policy add-role-to-user edit system:serviceaccount:openshift-logging:cluster-logging-operator
      3) Create a secret for the CloudWatch output:

       oc create secret generic cloudwatch-credentials \
          --from-literal=aws_access_key_id="${AWS_ACCESS_KEY_ID}" \
          --from-literal=aws_secret_access_key="${AWS_SECRET_ACCESS_KEY}"
      

      4) oc create serviceaccount clf-collector
      oc adm policy add-cluster-role-to-user collect-audit-logs -z clf-collector
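
      A quick sanity check of step 4 (a sketch; output will vary by cluster):

      # Inspect the rules granted by the collect-audit-logs cluster role
      oc describe clusterrole collect-audit-logs
      # Confirm the cluster role binding created for the clf-collector service account
      oc get clusterrolebinding -o wide | grep clf-collector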

      5) Create the ClusterLogForwarder:

      cat <<EOF |  oc apply -f -
      apiVersion: logging.openshift.io/v1
      kind: ClusterLogForwarder
      metadata:
        name: to-cloudwatch
      spec:
        outputs:
        - name: cloudwatch
          type: cloudwatch
          cloudwatch:
            groupBy: logType
            region: us-east-2
          secret:
            name: cloudwatch-credentials
        pipelines:
          - name: to-cloudwatch
            inputRefs:
            - audit
            outputRefs:
            - cloudwatch
        serviceAccountName: clf-collector
      EOF
      

      6) Check the DaemonSet status:

      $oc get ds
      NAME            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
      to-cloudwatch   6         0         0       0            0           kubernetes.io/os=linux   7s
      $oc describe ds/to-cloudwatch
      
      Events:
        Type     Reason        Age                 From                  Message
        ----     ------        ----                ----                  -------
        Normal   CreateObject  2m                  clusterlogforwarder   CreateObject DaemonSet clusters-hypershift-ci-24808/to-cloudwatch
        Warning  FailedCreate  2m                  daemonset-controller  Error creating: pods "to-cloudwatch-4qwpc" is forbidden: violates PodSecurity "restricted:latest": seLinuxOptions (container "collector" set forbidden securityContext.seLinuxOptions: type "spc_t"), unrestricted capabilities (container "collector" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "varlogcontainers", "varlogpods", "varlogjournal", "varlogaudit", "varlogovn", "varlogoauthapiserver", "varlogoauthserver", "varlogopenshiftapiserver", "varlogkubeapiserver", "datadir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or container "collector" must set securityContext.runAsNonRoot=true)
        Warning  FailedCreate  2m                  daemonset-controller  Error creating: pods "to-cloudwatch-w6662" is forbidden: violates PodSecurity "restricted:latest": seLinuxOptions (container "collector" set forbidden securityContext.seLinuxOptions: type "spc_t"), unrestricted capabilities (container "collector" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "varlogcontainers", "varlogpods", "varlogjournal", "varlogaudit", "varlogovn", "varlogoauthapiserver", "varlogoauthserver", "varlogopenshiftapiserver", "varlogkubeapiserver", "datadir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or container "collector" must set securityContext.runAsNonRoot=true)
      

      Additional Info:

      A namespace with pod-security.kubernetes.io/enforce: restricted can be created as below:

      apiVersion: project.openshift.io/v1
      kind: Project
      metadata:
        labels:
          kubernetes.io/metadata.name: test3
          pod-security.kubernetes.io/audit: restricted
          pod-security.kubernetes.io/audit-version: v1.24
          pod-security.kubernetes.io/warn: restricted
          pod-security.kubernetes.io/warn-version: v1.24
          pod-security.kubernetes.io/enforce: restricted
        name: test3
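
      Equivalently, the same labels can be applied with oc (a sketch, using the test3 project above):

      # Create a test project and apply the restricted pod-security labels
      oc new-project test3
      oc label ns test3 \
        pod-security.kubernetes.io/enforce=restricted \
        pod-security.kubernetes.io/audit=restricted \
        pod-security.kubernetes.io/warn=restricted \
        --overwrite
      # Note: label syncing may overwrite these unless
      # security.openshift.io/scc.podSecurityLabelSync=false is also set (see the comments below)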
      

      Comments:

            Anping Li added a comment - edited

            nmalik-srej The logging tests pass on HyperShift after I used two workarounds in my test:

            • Workaround 1: update the project label: oc label ns ${hosted_cluster_project} "pod-security.kubernetes.io/enforce=privileged" --overwrite
            • Workaround 2: after the auditWebhook is enabled, set the hostedclusters and hostedcontrolplanes CRs to Unmanaged, and add httpserver.${hosted_cluster_project}.svc to NO_PROXY in deployment/openshift-apiserver (see the sketch below).
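
            A minimal sketch of the NO_PROXY portion of workaround 2, assuming the hosted control plane's openshift-apiserver deployment lives in ${hosted_cluster_project} and the CRs have already been set to Unmanaged so the change is not reconciled away:

            # Append the HTTP receiver service to the existing NO_PROXY value on the hosted openshift-apiserver
            CURRENT_NO_PROXY=$(oc -n ${hosted_cluster_project} set env deployment/openshift-apiserver --list | grep '^NO_PROXY=' | cut -d= -f2-)
            oc -n ${hosted_cluster_project} set env deployment/openshift-apiserver \
              NO_PROXY="${CURRENT_NO_PROXY},httpserver.${hosted_cluster_project}.svc"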

            For more detail, refer to test case OCP-67713 "inputs.receiver.http collect one hosted cluster audit logs":
            https://docs.google.com/spreadsheets/d/1MxLnBBae7rhFI2AvFAuBMyP1g0c-0iCkW7bINNuvLrU/edit#gid=0

            Question: Are there any other methods to add a service name to NO_PROXY in existing hosted control plane pods?


            Anping Li added a comment -

            cahartma@redhat.com we no longer need the command below after labeling the namespace with "pod-security.kubernetes.io/enforce=privileged":
            oc policy add-role-to-user edit system:serviceaccount:openshift-logging:cluster-logging-operator


            Casey Hartman added a comment -

            My question is basically: Why is that line included in "steps to repro"? It confuses me for some reason.
            And as a more important follow-up: "Is there anything we need to do for HCP, in regards to pod security scanning?"


            Naveen Malik added a comment -

            cahartma@redhat.com this sounds like just giving CLO broad permissions to the HCP namespace so it could do whatever it needs as part of a reproducer for the bug. This level of access would be too broad for a production deployment and would be more constrained. Is the question then what those more limited permissions are? Or is it a question of why CLO needs any permissions at all? (A rough sketch of a narrower grant is below.)
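
            A minimal sketch of what a narrower grant could look like, assuming the operator only needs to manage the collector DaemonSet and its supporting objects (Secrets, Services, ConfigMaps, ServiceAccounts) in the hosted cluster project; the exact resource list would have to be confirmed against what CLO actually creates:

            cat <<EOF | oc apply -f -
            apiVersion: rbac.authorization.k8s.io/v1
            kind: Role
            metadata:
              name: clo-collector-deployer
              namespace: ${hosted_cluster_project}
            rules:
            # Assumed resources only -- adjust to what the operator really manages
            - apiGroups: ["apps"]
              resources: ["daemonsets"]
              verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
            - apiGroups: [""]
              resources: ["secrets", "services", "configmaps", "serviceaccounts"]
              verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
            ---
            apiVersion: rbac.authorization.k8s.io/v1
            kind: RoleBinding
            metadata:
              name: clo-collector-deployer
              namespace: ${hosted_cluster_project}
            roleRef:
              apiGroup: rbac.authorization.k8s.io
              kind: Role
              name: clo-collector-deployer
            subjects:
            - kind: ServiceAccount
              name: cluster-logging-operator
              namespace: openshift-logging
            EOF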


            Casey Hartman added a comment -

            anli@redhat.com nmalik-srej
            Do you know why this step is necessary? Is it specific to HCP?

            2) Grant the cluster-logging-operator service account the edit role in ${hosted_cluster_project}:
            oc policy add-role-to-user edit system:serviceaccount:openshift-logging:cluster-logging-operator


            Casey Hartman added a comment -

            Namespace needs to be labeled, per comments.


            Casey Hartman added a comment -

            nmalik-srej unless this is something you need resolved, I am going to close this as "not a bug". Let me know if there is any additional info and we can re-open this ticket.


            Casey Hartman added a comment - edited

            Pod security for our collectors must be set to "privileged".
            I did not notice at first that the namespace you listed above has the "restricted" security label, and verified:

              Warning  FailedCreate      12m (x4 over 12m)  daemonset-controller  (combined from similar events): Error creating: pods "to-cloudwatch-gzqlg" is forbidden: violates PodSecurity "restricted:latest": seLinuxOptions (container "collector" set forbidden securityContext.seLinuxOptions: type "spc_t"), unrestricted capabilities (container "collector" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "varlogcontainers", "varlogpods", "varlogjournal", "varlogaudit", "varlogovn", "varlogoauthapiserver", "varlogoauthserver", "varlogopenshiftapiserver", "varlogkubeapiserver", "datadir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or container "collector" must set securityContext.runAsNonRoot=true)
            

            Once I updated the "enforce" label to be "privileged":

            oc label ns clusters-hypershift-ci-11863 "pod-security.kubernetes.io/enforce=privileged" --overwrite

            So that the labels are now:

            [cluster-logging-operator]$ oc get ns clusters-hypershift-ci-11863 -o json | jq -r '.metadata.labels' 
            {
              "hypershift.openshift.io/hosted-control-plane": "true",
              "hypershift.openshift.io/monitoring": "true",
              "kubernetes.io/metadata.name": "clusters-hypershift-ci-11863",
              "pod-security.kubernetes.io/warn": "restricted",
              "pod-security.kubernetes.io/audit": "restricted",
              "pod-security.kubernetes.io/enforce": "privileged",
              "security.openshift.io/scc.podSecurityLabelSync": "false"
            }

            All was created.

              Normal   SuccessfulCreate  11m                daemonset-controller  Created pod: to-cloudwatch-xrgnv
              Normal   SuccessfulCreate  11m                daemonset-controller  Created pod: to-cloudwatch-b9d4q
              Normal   SuccessfulCreate  11m                daemonset-controller  Created pod: to-cloudwatch-hd5kp
              Normal   SuccessfulCreate  11m                daemonset-controller  Created pod: to-cloudwatch-4427j
              Normal   SuccessfulCreate  11m                daemonset-controller  Created pod: to-cloudwatch-8tbnf
              Normal   SuccessfulCreate  11m                daemonset-controller  Created pod: to-cloudwatch-vrdjf
            

            The "audit" and "warn" labels can be updated as well, but are for prior to 4.13.       We will look into a way to reduce these down to "least privileges" for the guest cluster namespaces, since root access should no longer be required in the HCP clusters??


            Naveen Malik added a comment -

            Please try testing in a hosted control plane namespace. From the description this is a bespoke namespace for testing. Reviewing a recently created HCP, the `security.openshift.io/scc.podSecurityLabelSync` label is set to "false".

            $ oc get project ocm-staging-26k96io4d7o7pju44hs6er9iuees27gs-nmalik-hcp3 -o json | jq -r '.metadata.labels'
            {
              "hypershift.openshift.io/hosted-control-plane": "true",
              "hypershift.openshift.io/monitoring": "true",
              "kubernetes.io/metadata.name": "ocm-staging-26k96io4d7o7pju44hs6er9iuees27gs-nmalik-hcp3",
              "pod-security.kubernetes.io/audit": "privileged",
              "pod-security.kubernetes.io/enforce": "privileged",
              "pod-security.kubernetes.io/warn": "privileged",
              "security.openshift.io/scc.podSecurityLabelSync": "false"
            }
            


            Casey Hartman added a comment - edited

            anli@redhat.com for pod security scanning, you will also need to add the label:   

            "security.openshift.io/scc.podSecurityLabelSync": "false", 

            nmalik-srej this label applies to the project and is outside the control of what the operator provides. Is this something that can be added to existing projects and when provisioning new namespaces? We will otherwise need to work together to find an alternate solution.


              Assignee: Unassigned
              Reporter: Anping Li (rhn-support-anli)