OpenShift Bugs / OCPBUGS-20192

openshift.io/scc: restricted-readonly when setting up router sharding

      When setting up router sharding with `endpointPublishingStrategy: Private` in an OCP 4.13.11 BareMetal cluster, the restricted-readonly SCC is applied to the router pods, causing them to CrashLoopBackOff:

      ~~~
      $ oc get pod -n openshift-ingress router-spinque-xxx -oyaml | grep -i scc
      openshift.io/scc: restricted-readonly <<<
      $ oc get pod -n openshift-ingress router-spinque-xxxj -oyaml | grep -i scc
      openshift.io/scc: restricted-readonly <<<<
      $ oc get pod -n openshift-ingress router-spinque-xxx -oyaml | grep -i scc
      openshift.io/scc: restricted-readonly <<<<
      ~~~
      ~~~
      router-spinque-xxx 0/1 CrashLoopBackOff 27 2h
      router-spinque-xxx 0/1 CrashLoopBackOff 27 2h
      router-spinque-xxx 0/1 CrashLoopBackOff 27 2h
      ~~~
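As a convenience when checking several shards, the SCC assigned to every router pod can be listed in one command instead of grepping each pod's YAML individually. A sketch; the only subtlety is that the dots in the annotation key must be backslash-escaped inside the jsonpath expression:

```shell
# List each router pod together with the SCC it was admitted under.
oc get pods -n openshift-ingress \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.annotations.openshift\.io/scc}{"\n"}{end}'
```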

      The must-gather, as well as the sos-report from one of the nodes, can be found in case 03624389 in supportshell.

      The following SCC config can be used to reproduce this issue on any platform:

      ~~~
      allowPrivilegeEscalation: true
      allowedCapabilities: []
      apiVersion: security.openshift.io/v1
      defaultAddCapabilities: null
      fsGroup:
        type: MustRunAs
      groups:
      - system:authenticated
      kind: SecurityContextConstraints
      metadata:
        name: bad-router
      priority: 0
      readOnlyRootFilesystem: true
      requiredDropCapabilities:
      - KILL
      - MKNOD
      - SETUID
      - SETGID
      runAsUser:
        type: MustRunAsRange
      seLinuxContext:
        type: MustRunAs
      supplementalGroups:
        type: RunAsAny
      users: []
      volumes:
      - configMap
      - downwardAPI
      - emptyDir
      - persistentVolumeClaim
      - projected
      - secret
      ~~~
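To spell out why this particular SCC breaks the router: it is granted to `system:authenticated`, so router pods can match it, and it sets `readOnlyRootFilesystem: true`, while the router needs to write its generated HAProxy configuration inside the container filesystem (an assumption consistent with the observed crash, not something stated in the report). A minimal, purely illustrative Python check of those two fields — the `router_incompatibilities` helper is hypothetical, not an OpenShift API, and the dict simply mirrors the YAML above:

```python
# Illustrative only: flag SCC settings that conflict with the default
# OpenShift router. The dict mirrors the bad-router SCC above.
scc = {
    "metadata": {"name": "bad-router"},
    "priority": 0,
    "readOnlyRootFilesystem": True,
    "groups": ["system:authenticated"],
}

def router_incompatibilities(scc: dict) -> list[str]:
    """Return the reasons this SCC would break the router if selected."""
    problems = []
    # Assumption: the router writes its generated HAProxy config to the
    # container filesystem, so a read-only root filesystem breaks startup.
    if scc.get("readOnlyRootFilesystem"):
        problems.append("readOnlyRootFilesystem forbids writing the HAProxy config")
    # Granting the SCC to system:authenticated makes it a candidate for
    # every pod in the cluster, including the router pods.
    if "system:authenticated" in scc.get("groups", []):
        problems.append("granted to system:authenticated, so router pods can match it")
    return problems

print(router_incompatibilities(scc))
```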

      Save the above YAML as bad-router-scc.yaml, then apply it to your cluster:

      ~~~
      $ oc apply -f bad-router-scc.yaml
      ~~~

      Force a restart of the router pods, for example by deleting one:

      ~~~
      $ oc delete pod router-default-6465854689-gvjhs
      ~~~
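Alternatively, the whole router deployment can be restarted at once rather than deleting pods one at a time. This assumes the default shard's deployment is named `router-default`; a sharded router such as the one in this report would use its own deployment name:

```shell
# Roll all router pods so they are re-admitted against the current SCC set.
oc -n openshift-ingress rollout restart deployment/router-default
```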

      The newly started pod(s) should be running but not ready, with the bad-router SCC:

      ~~~
      $ oc get pods
      NAME                              READY   STATUS    RESTARTS   AGE
      router-default-6465854689-7x558   0/1     Running   0          49s
      $ oc get pod router-default-6465854689-7x558 -o yaml|grep scc
          openshift.io/scc: bad-router
      ~~~

      If you wait long enough, the pod will restart multiple times and eventually enter the CrashLoopBackOff state.
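To recover after reproducing, deleting the SCC and recreating the router pods should be sufficient. The label selector below is the one the ingress operator puts on the default router's pods; for a shard, substitute the shard's ingresscontroller name:

```shell
# Remove the offending SCC so it can no longer be selected at admission time.
oc delete scc bad-router

# Recreate the router pods so they are re-admitted under the restricted SCC.
oc -n openshift-ingress delete pods \
  -l ingresscontroller.operator.openshift.io/deployment-ingresscontroller=default
```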

              Ryan Fredette (rfredett@redhat.com)
              Agustin Algorta (aalgorta@redhat.com) (Inactive)
              Shudi Li