OpenShift Bugs / OCPBUGS-34756

Error: container has runAsNonRoot and image will run as root


    • Type: Bug
    • Resolution: Duplicate
    • Priority: Undefined
    • Affects Version/s: 4.15.z
    • Component/s: service-ca
    • Severity: Important

      Description of problem:

      The service-ca pods cannot start. The error is:

      Error: container has runAsNonRoot and image will run as root (pod: "service-ca-7cf58c4f6c-xm6j2_openshift-service-ca(40e38ec2-47ae-4d47-8cb4-907b99c678ac)", container: service-ca-controller)
      

      Version-Release number of selected component (if applicable):

      4.15.14
          

      How reproducible:

      I just installed a fresh 4.15 cluster and noticed the alert. I have not tried installing a second cluster, but the configuration is CVO-controlled and I haven't configured anything unusual. Maybe it always happens?
          

      Steps to Reproduce:

          1. Install 4.15.14
          2. Observe error
          

      Actual results:

      Error: container has runAsNonRoot and image will run as root (pod: "service-ca-7cf58c4f6c-xm6j2_openshift-service-ca(40e38ec2-47ae-4d47-8cb4-907b99c678ac)", container: service-ca-controller)
          

      Expected results:

      Pods running
          

      Additional info:
      I tried the obvious: deleting the pod, and deleting the deployment so CVO would create a fresh one. The result is always the same.

      So you don't have to look it up, the deployment is:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        annotations:
          deployment.kubernetes.io/revision: "1"
          operator.openshift.io/spec-hash: b6555712de476242bcb29688bd1689f018c7318a19cfcaeceaaef1c3972635cb
        creationTimestamp: "2024-05-31T21:16:07Z"
        generation: 1
        labels:
          app: service-ca
          service-ca: "true"
        name: service-ca
        namespace: openshift-service-ca
        resourceVersion: "237422"
        uid: 00e7e12c-2ef6-434a-8986-aadf98ecd85d
      spec:
        progressDeadlineSeconds: 600
        replicas: 1
        revisionHistoryLimit: 10
        selector:
          matchLabels:
            app: service-ca
            service-ca: "true"
        strategy:
          type: Recreate
        template:
          metadata:
            annotations:
              target.workload.openshift.io/management: '{"effect": "PreferredDuringScheduling"}'
            creationTimestamp: null
            labels:
              app: service-ca
              service-ca: "true"
            name: service-ca
          spec:
            containers:
            - args:
              - -v=2
              command:
              - service-ca-operator
              - controller
              image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:902e1022029ccb59689e66510cb0d926947ae4ce1a33c68b8dee4270e59789d6
              imagePullPolicy: IfNotPresent
              name: service-ca-controller
              ports:
              - containerPort: 8443
                protocol: TCP
              resources:
                requests:
                  cpu: 10m
                  memory: 120Mi
              securityContext:
                runAsNonRoot: true
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              volumeMounts:
              - mountPath: /var/run/secrets/signing-key
                name: signing-key
              - mountPath: /var/run/configmaps/signing-cabundle
                name: signing-cabundle
            dnsPolicy: ClusterFirst
            nodeSelector:
              node-role.kubernetes.io/master: ""
            priorityClassName: system-cluster-critical
            restartPolicy: Always
            schedulerName: default-scheduler
            securityContext: {}
            serviceAccount: service-ca
            serviceAccountName: service-ca
            terminationGracePeriodSeconds: 30
            tolerations:
            - effect: NoSchedule
              key: node-role.kubernetes.io/master
              operator: Exists
            - effect: NoExecute
              key: node.kubernetes.io/unreachable
              operator: Exists
              tolerationSeconds: 120
            - effect: NoExecute
              key: node.kubernetes.io/not-ready
              operator: Exists
              tolerationSeconds: 120
            volumes:
            - name: signing-key
              secret:
                defaultMode: 420
                secretName: signing-key
            - configMap:
                defaultMode: 420
                name: signing-cabundle
              name: signing-cabundle
      status:
        conditions:
        - lastTransitionTime: "2024-05-31T21:16:08Z"
          lastUpdateTime: "2024-05-31T21:16:08Z"
          message: Deployment does not have minimum availability.
          reason: MinimumReplicasUnavailable
          status: "False"
          type: Available
        - lastTransitionTime: "2024-05-31T21:16:08Z"
          lastUpdateTime: "2024-05-31T21:16:08Z"
          message: ReplicaSet "service-ca-7cf58c4f6c" is progressing.
          reason: ReplicaSetUpdated
          status: "True"
          type: Progressing
        observedGeneration: 1
        replicas: 1
        unavailableReplicas: 1
        updatedReplicas: 1
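
      Note that the container securityContext sets runAsNonRoot: true but no runAsUser. My understanding (an assumption on my part, not verified) is that the restricted-v2 SCC admission is supposed to inject a runAsUser from the namespace's openshift.io/sa.scc.uid-range annotation, so a healthy pod would end up with something like:

      securityContext:
        runAsNonRoot: true
        runAsUser: 1000550000   # assumed: first UID of the range 1000550000/10000
        seLinuxOptions:
          level: s0:c23,c22     # assumed: from openshift.io/sa.scc.mcs

      The failing pod apparently never got that mutation, so the kubelet's runAsNonRoot check falls back to the image's user, which it resolves to root.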
      

      The namespace is:

      apiVersion: v1
      kind: Namespace
      metadata:
        annotations:
          openshift.io/node-selector: ""
          openshift.io/sa.scc.mcs: s0:c23,c22
          openshift.io/sa.scc.supplemental-groups: 1000550000/10000
          openshift.io/sa.scc.uid-range: 1000550000/10000
          workload.openshift.io/allowed: management
        creationTimestamp: "2024-05-31T09:58:04Z"
        labels:
          kubernetes.io/metadata.name: openshift-service-ca
        name: openshift-service-ca
        resourceVersion: "5100"
        uid: 83ec7085-5a2d-4643-9ee3-abea416eac68
      spec:
        finalizers:
        - kubernetes
      status:
        phase: Active
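
      For reference, SCC admission records the SCC a pod was validated against in an annotation on the pod, so I would expect a healthy pod to carry something like:

      metadata:
        annotations:
          openshift.io/scc: restricted-v2   # assumed: the SCC I would expect to match here

      Checking that annotation on the failing pod should show which SCC actually admitted it, and hence why no runAsUser was assigned.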
      

      I'm guessing this should be high impact?

              Assignee: Stanislav Láznička (Inactive)
              Reporter: Matthew Booth
              QA Contact: Xingxing Xia