LOG-3194: Collector pod violates PodSecurity "restricted:v1.24" when using lokistack as the default log store in OCP 4.12

    • Fix description:
      The Pod Security admission controller has begun adding the label security.openshift.io/scc.podSecurityLabelSync=true to the openshift-logging namespace. This results in our specified security labels being overwritten, and the synced labels then prevent our pods from starting. This change adds the "false" sync label and ensures that our privileged policy is maintained. Collector pods are deployed as expected.
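      For illustration, a minimal sketch of what the openshift-logging namespace labels should look like after the fix. The label names are taken from the oc get ns output below; treating the privileged enforce label as part of the operator's set is an assumption:

      apiVersion: v1
      kind: Namespace
      metadata:
        name: openshift-logging
        labels:
          # Opt out of PSA label syncing so these labels are not overwritten.
          security.openshift.io/scc.podSecurityLabelSync: "false"
          # Privileged enforcement allows the collector's hostPath mounts and spc_t.
          pod-security.kubernetes.io/enforce: privileged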
    • Sprint: Log Collection - Sprint 227

      Description of problem:

      When deploying logging on OCP 4.12 with lokistack as the default log store, the collector pods cannot be created; the daemonset reports the errors below:

      $ oc describe ds
      Events:
        Type     Reason        Age  From                       Message
        ----     ------        ---  ----                       -------
        Normal   CreateObject  33s  clusterlogging-controller  CreateObject DaemonSet openshift-logging/collector
        Warning  FailedCreate  33s  daemonset-controller       Error creating: pods "collector-chszd" is forbidden: violates PodSecurity "restricted:v1.24":
          seLinuxOptions (containers "collector", "logfilesmetricexporter" set forbidden securityContext.seLinuxOptions: type "spc_t"),
          unrestricted capabilities (containers "collector", "logfilesmetricexporter" must set securityContext.capabilities.drop=["ALL"]),
          restricted volume types (volumes "varlogcontainers", "varlogpods", "varlogjournal", "varlogaudit", "varlogovn", "varlogoauthapiserver", "varlogopenshiftapiserver", "varlogkubeapiserver", "datadir" use restricted volume type "hostPath"),
          runAsNonRoot != true (pod or containers "collector", "logfilesmetricexporter" must set securityContext.runAsNonRoot=true)
        Warning  FailedCreate  33s  daemonset-controller       Error creating: pods "collector-vqfxm" is forbidden: violates PodSecurity "restricted:v1.24": (same four violations as above)

      daemonset/collector: ds.yaml
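
      For context, a sketch of the per-container securityContext that the "restricted" profile expects, reconstructed from the violations above (this is not the operator's actual manifest). The collector cannot conform, since it also needs hostPath volumes and the spc_t SELinux type to read log files on the node:

      # What "restricted:v1.24" demands of each container (sketch).
      securityContext:
        runAsNonRoot: true                # the collector runs as root to read node logs
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]                   # the collector containers do not drop ALL
        seccompProfile:
          type: RuntimeDefault
      # hostPath volumes (varlogcontainers, varlogpods, ...) are disallowed outright
      # under "restricted", so the collector daemonset cannot comply at any setting.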

      $ oc get ns openshift-logging --show-labels
      NAME                STATUS   AGE   LABELS
      openshift-logging   Active   30m   kubernetes.io/metadata.name=openshift-logging,olm.operatorgroup.uid/32efb4c2-9237-4411-a78d-8af88a269274=,olm.operatorgroup.uid/bc63067d-ff62-463f-91b9-211016e4d21d=,openshift.io/cluster-logging=true,openshift.io/cluster-monitoring=true,pod-security.kubernetes.io/enforce-version=v1.24,pod-security.kubernetes.io/enforce=restricted,security.openshift.io/scc.podSecurityLabelSync=true
      
      $ oc get ds
      NAME        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
      collector   0         0         0       0            0           kubernetes.io/os=linux   30m 
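
      The security.openshift.io/scc.podSecurityLabelSync=true label shown above is what switches the namespace to restricted enforcement. A possible manual workaround (untested here, and the label sync may re-apply it) is to opt the namespace out of syncing and restore privileged enforcement:

      $ oc label namespace openshift-logging \
          security.openshift.io/scc.podSecurityLabelSync=false \
          pod-security.kubernetes.io/enforce=privileged --overwrite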

      Version-Release number of selected component (if applicable):

      cluster-logging.v5.6.0

      loki-operator.v5.6.0

      clusterversion: 4.12.0-0.nightly-2022-10-15-094115

      How reproducible:

      Always

      Steps to Reproduce:

      1. Deploy the logging 5.6 operators.
      2. Deploy a lokistack (a sketch of the referenced s3-secret follows the manifest):
      apiVersion: loki.grafana.com/v1
      kind: LokiStack
      metadata:
        name: lokistack-sample
        namespace: openshift-logging
      spec:
        managementState: Managed
        replicationFactor: 1
        rules:
          enabled: true
          namespaceSelector:
            matchLabels:
              openshift.io/cluster-monitoring: "true"
          selector:
            matchLabels:
              openshift.io/cluster-monitoring: "true"
        size: 1x.extra-small
        storage:
          schemas:
          - effectiveDate: "2020-10-11"
            version: v11
          secret:
            name: s3-secret
            type: s3
        storageClassName: gp3-csi
        tenants:
          mode: openshift-logging
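
      The s3-secret referenced above was not attached; as a sketch, its assumed shape uses the key names the Loki Operator documents for the s3 storage type (all values are placeholders):

      apiVersion: v1
      kind: Secret
      metadata:
        name: s3-secret
        namespace: openshift-logging
      stringData:
        access_key_id: <ACCESS_KEY_ID>                  # placeholder
        access_key_secret: <ACCESS_KEY_SECRET>          # placeholder
        bucketnames: <BUCKET_NAME>                      # placeholder
        endpoint: https://s3.us-east-1.amazonaws.com    # placeholder
        region: us-east-1                               # placeholder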

      3. Create a clusterlogging instance with:

      apiVersion: logging.openshift.io/v1
      kind: ClusterLogging
      metadata:
        name: instance
        namespace: openshift-logging
      spec:
        collection:
          type: fluentd
        logStore:
          lokistack:
            name: lokistack-sample
          type: lokistack
        managementState: Managed

      4. Check the logging pods (see the example commands below).
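
      For instance (the daemonset name collector is taken from the oc get ds output above):

      $ oc -n openshift-logging get daemonset collector
      $ oc -n openshift-logging get pods | grep collector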

      Actual results:

      Collector pods are not created

      Expected results:

      Collector pods should be created

      Additional info:

      When deploying Elasticsearch (ES) as the default log store, there is no such issue.
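
      For comparison, a minimal sketch of the logStore stanza for the Elasticsearch case (the field values are assumptions; the rest of the ClusterLogging from step 3 is unchanged):

      spec:
        logStore:
          type: elasticsearch
          elasticsearch:
            nodeCount: 1                      # assumed value
            redundancyPolicy: ZeroRedundancy  # assumed value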

              Assignee: Casey Hartman (cahartma@redhat.com)
              Reporter: Qiaoling Tang (qitang@redhat.com)