OCPBUGS-1561: Namespace openshift-dns lost pod-security labels during 4.9.48 -> 4.11 -> 4.12 upgrade


    • Type: Bug
    • Resolution: Duplicate
    • Priority: Critical
    • Affects Version/s: 4.11
    • Component/s: Networking / DNS
    • Sprint: Sprint 225

      Description of problem:

      The 4.10 -> 4.11 -> 4.12 upgrade test failed. The log shows:

      FailedCreate   daemonset/dns-default     Error creating: pods "dns-default-ntdqn" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false
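
      For reference, the fields named in that event correspond to a container securityContext roughly like the sketch below. This is only an illustration of what the "restricted" profile demands; openshift-dns is instead expected to run privileged via namespace labels (see step 4 under Steps to Reproduce):

      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        capabilities:
          drop:
          - ALL
        seccompProfile:
          type: RuntimeDefault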

      Version-Release number of selected component (if applicable):

       

      How reproducible:

      Run the upgrade test.

      Steps to Reproduce:

      1. Run the upgrade test with the "02_aarch64_Disconnected IPI on AWS & Private cluster" profile:
      
      http://virt-openshift-05.lab.eng.nay.redhat.com/buildcorp/ocp_upgrade/22500.html
      https://mastern-jenkins-csb-openshift-qe.apps.ocp-c1.prod.psi.redhat.com/job/ocp-upgrade/job/upgrade-pipeline/22500/console
      
      FROM: 
      4.9.48-aarch64
      TO: 4.10.33-aarch64,4.11.5-aarch64,4.12.0-0.nightly-arm64-2022-09-18-164517
      RUN_TESTS
      RUN_UPGRADE_TESTS
      
      2. Check the log in upgrade-pipeline/22500/console:
      09-19 16:28:19.486  oc get event -n openshift-dns
      09-19 16:28:19.744  LAST SEEN   TYPE      REASON         OBJECT                    MESSAGE
      09-19 16:28:19.744  26m         Warning   FailedCreate   daemonset/dns-default     (combined from similar events): Error creating: pods "dns-default-x7q7m" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "dns", "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "dns", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "dns", "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "dns", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
      09-19 16:28:19.744  166m        Warning   FailedCreate   daemonset/dns-default     Error creating: pods "dns-default-kx72n" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (containers "dns", "kube-rbac-proxy" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "dns", "kube-rbac-proxy" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or containers "dns", "kube-rbac-proxy" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "dns", "kube-rbac-proxy" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
      
      3. Check openshift-dns.yaml in the must-gather; the Namespace does not have the pod-security labels:
      % cat namespaces/openshift-dns/openshift-dns.yaml 
      ---
      apiVersion: v1
      kind: Namespace
      metadata:
        annotations:
          openshift.io/node-selector: ""
          openshift.io/sa.scc.mcs: s0:c24,c9
          openshift.io/sa.scc.supplemental-groups: 1000570000/10000
          openshift.io/sa.scc.uid-range: 1000570000/10000
          workload.openshift.io/allowed: management
        creationTimestamp: "2022-09-18T23:15:40Z"
        labels:
          kubernetes.io/metadata.name: openshift-dns
          olm.operatorgroup.uid/af1a2a04-f190-4c8e-9b86-3b97994ef939: ""
          openshift.io/cluster-monitoring: "true"
          openshift.io/run-level: "0"
        managedFields:
        - apiVersion: v1
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:annotations:
                f:openshift.io/sa.scc.mcs: {}
                f:openshift.io/sa.scc.supplemental-groups: {}
                f:openshift.io/sa.scc.uid-range: {}
          manager: cluster-policy-controller
          operation: Update
          time: "2022-09-18T23:15:40Z"
        - apiVersion: v1
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:annotations:
                .: {}
                f:openshift.io/node-selector: {}
                f:workload.openshift.io/allowed: {}
              f:labels:
                .: {}
                f:kubernetes.io/metadata.name: {}
                f:openshift.io/cluster-monitoring: {}
                f:openshift.io/run-level: {}
          manager: dns-operator
          operation: Update
          time: "2022-09-18T23:15:40Z"
        - apiVersion: v1
          fieldsType: FieldsV1
          fieldsV1:
            f:metadata:
              f:labels:
                f:olm.operatorgroup.uid/af1a2a04-f190-4c8e-9b86-3b97994ef939: {}
          manager: olm
          operation: Update
          time: "2022-09-18T23:15:44Z"
        name: openshift-dns
        resourceVersion: "7544"
        uid: 2e3e7358-d5df-466d-9333-6a51cae50ee4
      spec:
        finalizers:
        - kubernetes
      status:
        phase: Active
      shudi@Shudis-MacBook-Pro quay-io-openshift-release-dev-ocp-v4-0-art-dev-sha256-b44c6f52c9a91c55e721343bc20d742ee01b1d195518e9b9ece790907526f553 % pwd
      /Users/shudi/Desktop/MACOC/410/test01/must-gather.local.1771211867743020513/quay-io-openshift-release-dev-ocp-v4-0-art-dev-sha256-b44c6f52c9a91c55e721343bc20d742ee01b1d195518e9b9ece790907526f553
      
      4. https://github.com/openshift/cluster-dns-operator/blob/release-4.11/assets/dns/namespace.yaml#L15 has the labels (a sketch for checking and re-applying them on a live cluster follows step 5):
      
          pod-security.kubernetes.io/enforce: privileged
          pod-security.kubernetes.io/audit: privileged
          pod-security.kubernetes.io/warn: privileged
      
      5. oc get clusteroperators
      09-19 16:28:17.759  dns                                        4.11.5                                     True        True          False      9h      DNS "default" reports Progressing=True: "Have 5 available DNS pods, want 6.\nHave 0 up-to-date DNS pods, want 6.\nHave 5 available node-resolver pods, want 6."...
      09-19 16:28:17.759  etcd                                       4.12.0-0.nightly-arm64-2022-09-18-164517   True        False         False      9h      
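
      As noted in step 4, a quick way to check the labels on a live cluster and, as an untested manual workaround sketch (label values copied from the 4.11 asset; the DNS operator may re-reconcile the namespace afterwards), re-apply them:

      oc get namespace openshift-dns --show-labels
      oc label namespace openshift-dns \
        pod-security.kubernetes.io/enforce=privileged \
        pod-security.kubernetes.io/audit=privileged \
        pod-security.kubernetes.io/warn=privileged \
        --overwrite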

      Actual results:

      The dns clusteroperator was upgraded only to 4.11.5, not to 4.12.0-0.nightly-arm64-2022-09-18-164517, so the upgrade to 4.12 failed.

      Expected results:

      The upgrade completes successfully.

      Additional info:

       

            Assignee: Miciah Masters (mmasters1@redhat.com)
            Reporter: Shudi Li (shudili@redhat.com)
            QA Contact: Hongan Li
            Votes: 0
            Watchers: 4