Red Hat OpenShift Dev Spaces (formerly CodeReady Workspaces)
CRW-7199

Use version(s) of registry.redhat.io/openshift4/ose-kube-rbac-proxy to support x86, s390x & PPC on OCP 4.12 - 4.16


      DevSpaces currently uses registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.12 (in the che-gateway), which supports only the x86 and s390x architectures. As a result, potential builds of DevSpaces 3.16 will not be able to deploy on PPC clusters:

       

      Failed to pull image "registry.redhat.io/openshift4/ose-kube-rbac-proxy@sha256:9674694d47bf0056d6ff9ab3c9f8d0849399ac66c74c98b63bc8fa68c8aaec58": choosing image instance: no image found in manifest list for architecture ppc64le, variant "", OS linux 
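The error above comes from the kubelet finding no ppc64le entry in the image's manifest list. This can be confirmed by inspecting the raw image index (e.g. via `skopeo inspect --raw`) and checking which architectures it advertises. A minimal sketch of that check, using an illustrative manifest list rather than the registry's actual (authentication-gated) metadata:

```python
import json

# Illustrative OCI image index; in practice this JSON would come from
# `skopeo inspect --raw docker://registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.12`
# (real manifest digests and sizes are omitted here).
manifest_list = json.loads("""
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {"platform": {"architecture": "amd64", "os": "linux"}},
    {"platform": {"architecture": "s390x", "os": "linux"}}
  ]
}
""")

def supported_architectures(index: dict) -> set:
    """Return the set of architectures advertised by an image index."""
    return {m["platform"]["architecture"] for m in index.get("manifests", [])}

arches = supported_architectures(manifest_list)
print(sorted(arches))       # ['amd64', 's390x']
print("ppc64le" in arches)  # False -> kubelet reports "no image found in manifest list"
```

If the v4.13+ tags advertise ppc64le (and arm64), the same check would show where the cutoff is.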

       

       

      We need to find a way to resolve this issue. Some potential ideas:

      • Use a new version of ose-kube-rbac-proxy (builds of DWO using the 4.15 version of this image are currently underway for testing). The concern with this approach is whether there is backward compatibility with older versions of OCP (i.e. can we use ose-kube-rbac-proxy 4.15 on OCP 4.14?)
      • Use different ose-kube-rbac-proxy images for different architectures (i.e. v4.12 for x86 & s390x, v4.13 for PPC & ARM). I'm not sure this is actually possible with Brew and operators in general: AFAIK only a single ClusterServiceVersion gets created for all architecture builds.
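If the second idea turns out to be feasible, the per-architecture selection could look roughly like the following sketch. The `ARCH_TO_IMAGE` mapping and `rbac_proxy_image_for` helper are hypothetical, not existing operator code, and the tag assignments just mirror the idea above:

```python
# Hypothetical mapping from cluster/node architecture to an
# ose-kube-rbac-proxy image known to be built for that architecture.
ARCH_TO_IMAGE = {
    "amd64":   "registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.12",
    "s390x":   "registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.12",
    "ppc64le": "registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.13",
    "arm64":   "registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.13",
}

def rbac_proxy_image_for(arch: str) -> str:
    """Pick the kube-rbac-proxy image reference for a given architecture."""
    try:
        return ARCH_TO_IMAGE[arch]
    except KeyError:
        raise ValueError(f"unsupported architecture: {arch}") from None

print(rbac_proxy_image_for("ppc64le"))
```

The open question from the bullet above still applies: since a single ClusterServiceVersion is produced for all architecture builds, it is unclear where such a switch could live (operator reconcile logic vs. the CSV itself).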

      The "che-gateway" pod in the openshift-devspaces namespace uses the following images:

      • registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.12
      • registry.redhat.io/openshift4/ose-oauth-proxy:v4.12

            sdawley@redhat.com Samantha Dawley
            aobuchow Andrew Obuchowicz
            Votes: 0
            Watchers: 4