OpenShift Bugs / OCPBUGS-66224

ODF installation constantly fails with Assisted Installer

    • Resolution: Unresolved

      Description of problem:

      The ODF operator fails to install with the Assisted Installer. With 4.20.4, roughly 85% of the time that we enable the ODF operator we end up with a cluster that does not have it.

      The ODF operator itself does get installed, but OCS, which it depends on, fails; OCS is the component that actually provides Ceph.
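
      A quick way to confirm this state on an affected cluster is to list the CSVs in openshift-storage and check their phases. The sketch below uses client-go's dynamic client; the kubeconfig path is illustrative. Given the description above, the odf-operator CSV should report Succeeded while the ocs-operator CSV is missing or never leaves a pending/failed phase.

      package main

      import (
          "context"
          "fmt"
          "log"

          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
          "k8s.io/apimachinery/pkg/runtime/schema"
          "k8s.io/client-go/dynamic"
          "k8s.io/client-go/tools/clientcmd"
      )

      func main() {
          // Illustrative kubeconfig path; point it at the affected cluster.
          cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
          if err != nil {
              log.Fatal(err)
          }
          dyn, err := dynamic.NewForConfig(cfg)
          if err != nil {
              log.Fatal(err)
          }
          csvGVR := schema.GroupVersionResource{
              Group:    "operators.coreos.com",
              Version:  "v1alpha1",
              Resource: "clusterserviceversions",
          }
          csvs, err := dyn.Resource(csvGVR).Namespace("openshift-storage").
              List(context.Background(), metav1.ListOptions{})
          if err != nil {
              log.Fatal(err)
          }
          // Print each CSV with its install phase (Succeeded, Pending, Failed, ...).
          for _, csv := range csvs.Items {
              phase, _, _ := unstructured.NestedString(csv.Object, "status", "phase")
              fmt.Printf("%s: %s\n", csv.GetName(), phase)
          }
      }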

      Assisted-installer-controller logs:

      time="2025-12-01T02:03:09Z" level=info msg="Checking if odf operator is initialized" func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).getReadyOperators" file="/app/src/assisted_installer_controller/assisted_installer_controller.go:609"
      time="2025-12-01T02:03:09Z" level=info msg="Check if there are failed olm jobs and delete them in case they exists" func=github.com/openshift/assisted-installer/src/assisted_installer_controller.ClusterServiceVersionHandler.handleOLMEarlySetupBug file="/app/src/assisted_installer_controller/operator_handler.go:196"
      time="2025-12-01T02:03:09Z" level=info msg="Deleting failed install plan install-sn54l" func=github.com/openshift/assisted-installer/src/assisted_installer_controller.ClusterServiceVersionHandler.deleteFailedSubscriptionInstallPlans file="/app/src/assisted_installer_controller/operator_handler.go:260"
      time="2025-12-01T02:03:09Z" level=info msg="Operator odf csv name is odf-operator.v4.20.0-rhodf" func=github.com/openshift/assisted-installer/src/assisted_installer_controller.ClusterServiceVersionHandler.IsInitialized file="/app/src/assisted_installer_controller/operator_handler.go:176"
      time="2025-12-01T02:03:09Z" level=info msg="odf operator is initialized" func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).getReadyOperators" file="/app/src/assisted_installer_controller/assisted_installer_controller.go:611"
      time="2025-12-01T02:03:09Z" level=info msg="Created temporary directory /tmp/controller-custom-manifests-1765545254 to store custom manifest content." func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).applyPostInstallManifests" file="/app/src/assisted_installer_controller/assisted_installer_controller.go:639"
      
      time="2025-12-01T02:14:10Z" level=info msg="resource mapping not found for name: \"ocs-storagecluster-storagesystem\" namespace: \"openshift-storage\" from \"/tmp/operator-manifest2644321606\": no matches for kind \"StorageSystem\" in version \"odf.openshift.io/v1alpha1\"\nensure CRDs are installed first\nresource mapping not found for name: \"ocs-storagecluster\" namespace: \"openshift-storage\" from \"/tmp/operator-manifest2644321606\": no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\"\nensure CRDs are installed first\n" func="github.com/openshift/assisted-installer/src/utils.(*LogWriter).Write" file="/app/src/utils/utils.go:58"
      time="2025-12-01T02:14:10Z" level=info msg="failed executing /usr/bin/bash [-c oc --kubeconfig=/tmp/controller-custom-manifests-3524504881/kubeconfig-noingress apply -f /tmp/operator-manifest2644321606], env vars [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm HOSTNAME=52-54-00-20-a4-65 NSS_SDB_USE_CACHE=no CLUSTER_ID=77b04a2f-8753-43af-81fe-302b0c45306d INVENTORY_URL=https://api.openshift.com PULL_SECRET_TOKEN=b3BlbnNoaWZ0LXJlbGVhc2UtZGV2K29jbV9hY2Nlc3NfNTUwYTMyOGVjYjA0NDA2Nzk4MWZiMzFiZDViZDgzYTg6QU80WDZZMlZHSzMzMEc5MUdYSlFWUUIyQ1JFV05YS1A4MTVYQ0xKWjM5M1NJODVSVUU5OTZWRlBYWVhLMTQwSQ== CA_CERT_PATH= OPENSHIFT_VERSION=4.20.4 SKIP_CERT_VERIFICATION=false NOTIFY_NUM_REBOOTS=true CONTROL_PLANE_COUNT=3 CHECK_CLUSTER_VERSION=true MUST_GATHER_IMAGE=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:19a34b9c79579cf3765ecc40b4ef832e46cfbbf7fa9888992eaed1b397fc5422 KUBERNETES_SERVICE_PORT=443 KUBERNETES_SERVICE_PORT_HTTPS=443 KUBERNETES_PORT=tcp://172.30.0.1:443 KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443 KUBERNETES_PORT_443_TCP_PROTO=tcp KUBERNETES_PORT_443_TCP_PORT=443 KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1 KUBERNETES_SERVICE_HOST=172.30.0.1 container=oci HOME=/], error exit status 1, waitStatus 1, Output \"resource mapping not found for name: \"ocs-storagecluster-storagesystem\" namespace: \"openshift-storage\" from \"/tmp/operator-manifest2644321606\": no matches for kind \"StorageSystem\" in version \"odf.openshift.io/v1alpha1\"\nensure CRDs are installed first\nresource mapping not found for name: \"ocs-storagecluster\" namespace: \"openshift-storage\" from \"/tmp/operator-manifest2644321606\": no matches for kind \"StorageCluster\" in version \"ocs.openshift.io/v1\"\nensure CRDs are installed first\"" func="github.com/openshift/assisted-installer/src/ops/execute.(*executor).execCommand" file="/app/src/ops/execute/execute.go:83"
      
      

      The issue is that the InstallPlan fails; the controller deletes it, but from that point OLM has no idea what to do: it just keeps looking for the deleted InstallPlan, fails, and does nothing.

      The controller should probably find the relevant OLM job and delete it too, as sketched below.
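
      A minimal sketch of that idea with client-go. The openshift-marketplace namespace (where OLM runs its bundle-unpack jobs) and the Failed-condition check are assumptions for illustration, not the controller's actual logic:

      package main

      import (
          "context"
          "log"

          batchv1 "k8s.io/api/batch/v1"
          corev1 "k8s.io/api/core/v1"
          metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
          "k8s.io/client-go/kubernetes"
          "k8s.io/client-go/tools/clientcmd"
      )

      func main() {
          // Illustrative kubeconfig path; point it at the affected cluster.
          cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
          if err != nil {
              log.Fatal(err)
          }
          client, err := kubernetes.NewForConfig(cfg)
          if err != nil {
              log.Fatal(err)
          }
          // Assumption: OLM's bundle-unpack jobs live in openshift-marketplace.
          if err := deleteFailedJobs(context.Background(), client, "openshift-marketplace"); err != nil {
              log.Fatal(err)
          }
      }

      // deleteFailedJobs removes every job with a Failed=True condition so
      // that OLM can recreate it together with a fresh InstallPlan.
      func deleteFailedJobs(ctx context.Context, client kubernetes.Interface, ns string) error {
          jobs, err := client.BatchV1().Jobs(ns).List(ctx, metav1.ListOptions{})
          if err != nil {
              return err
          }
          policy := metav1.DeletePropagationBackground
          for _, job := range jobs.Items {
              for _, cond := range job.Status.Conditions {
                  if cond.Type == batchv1.JobFailed && cond.Status == corev1.ConditionTrue {
                      log.Printf("deleting failed OLM job %s/%s", ns, job.Name)
                      err := client.BatchV1().Jobs(ns).Delete(ctx, job.Name,
                          metav1.DeleteOptions{PropagationPolicy: &policy})
                      if err != nil {
                          return err
                      }
                  }
              }
          }
          return nil
      }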

      How reproducible:

      ~85% of the time when installing ODF with the Assisted Installer.

      Steps to Reproduce:

          1. Install ODF with the Assisted Installer
          2. Check the cluster

      Actual results:

         ODF is running, but there is no OCS, no PVs, and the StorageCluster does not exist.

      Expected results:

         Running storage cluster

      Additional info:

          
