Use the image from https://access.redhat.com/articles/6976804 (i.e. "projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202206140145").
==============================================================
$ openshift-install version
openshift-install 4.12.0-0.nightly-2022-09-22-153054
built from commit 6eca978b89fc0be17f70fc8a28fa20aab1316843
release image registry.ci.openshift.org/ocp/release@sha256:9fa1b2e858fa04a48695537b4609cdbedf9a9015a8bfe98abdaa69bf8506d3e1
release architecture amd64
$
$ openshift-install create manifests --dir work
? SSH Public Key /home/fedora/.ssh/openshift-qe.pub
? Platform gcp
INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json"
? Project ID OpenShift QE (openshift-qe)
? Region us-east1
? Base Domain qe.gcp.devcluster.openshift.com
? Cluster Name jiwei-0923-04
? Pull Secret [? for help] *****
INFO Manifests created in: work/manifests and work/openshift
$ grep image work/openshift/*worker*.yaml
work/openshift/99_openshift-cluster-api_worker-machineset-0.yaml: image: projects/rhcos-cloud/global/images/rhcos-412-86-202208101039-0-gcp-x86-64
work/openshift/99_openshift-cluster-api_worker-machineset-1.yaml: image: projects/rhcos-cloud/global/images/rhcos-412-86-202208101039-0-gcp-x86-64
work/openshift/99_openshift-cluster-api_worker-machineset-2.yaml: image: projects/rhcos-cloud/global/images/rhcos-412-86-202208101039-0-gcp-x86-64
$
$ sed -i 's/projects\/rhcos-cloud\/global\/images\/rhcos-412-86-202208101039-0-gcp-x86-64/projects\/redhat-marketplace-public\/global\/images\/redhat-coreos-ocp-48-x86-64-202206140145/' work/openshift/*worker*.yaml
$
$ grep image work/openshift/*worker*.yaml
work/openshift/99_openshift-cluster-api_worker-machineset-0.yaml: image: projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202206140145
work/openshift/99_openshift-cluster-api_worker-machineset-1.yaml: image: projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202206140145
work/openshift/99_openshift-cluster-api_worker-machineset-2.yaml: image: projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202206140145
$
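(FYI, the same swap can be scripted without hard-coding the default RHCOS image path; a minimal sketch, assuming each worker machineset manifest carries exactly one "image: projects/..." line:)

MARKETPLACE_IMAGE="projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202206140145"
for f in work/openshift/99_openshift-cluster-api_worker-machineset-*.yaml; do
  # replace whatever GCP image the installer rendered with the marketplace image
  sed -i "s|image: projects/.*|image: ${MARKETPLACE_IMAGE}|" "$f"
done
grep 'image:' work/openshift/99_openshift-cluster-api_worker-machineset-*.yaml   # should now show only the marketplace image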
$ openshift-install create cluster --dir work
INFO Consuming Openshift Manifests from target directory
INFO Consuming Common Manifests from target directory
INFO Consuming Master Machines from target directory
INFO Consuming OpenShift Install (Manifests) from target directory
INFO Consuming Worker Machines from target directory
INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json"
INFO Creating infrastructure resources...
INFO Waiting up to 20m0s (until 3:40AM) for the Kubernetes API at https://api.jiwei-0923-04.qe.gcp.devcluster.openshift.com:6443...
INFO API v1.24.0+8c7c967 up
INFO Waiting up to 30m0s (until 3:54AM) for bootstrapping to complete...
INFO Destroying the bootstrap resources...
INFO Waiting up to 40m0s (until 4:16AM) for the cluster at https://api.jiwei-0923-04.qe.gcp.devcluster.openshift.com:6443 to initialize...
INFO Checking to see if there is a route at openshift-console/console...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/fedora/work/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.jiwei-0923-04.qe.gcp.devcluster.openshift.com
INFO Login to the console with user: "kubeadmin", and password: "wfE4V-DApdV-ZPizq-Cp5Wi"
INFO Time elapsed: 31m3s
$ export KUBECONFIG=/home/fedora/work/auth/kubeconfig
$
$ gcloud compute disks list --filter='name~jiwei' --format='table(name:sort=1,type,sizeGb,sourceImageId,sourceImage)'
NAME                                TYPE    SIZE_GB  SOURCE_IMAGE_ID      SOURCE_IMAGE
jiwei-0923-04-np5d8-master-0        pd-ssd  128      3924444975955053205  https://www.googleapis.com/compute/v1/projects/rhcos-cloud/global/images/rhcos-412-86-202208101039-0-gcp-x86-64
jiwei-0923-04-np5d8-master-1        pd-ssd  128      3924444975955053205  https://www.googleapis.com/compute/v1/projects/rhcos-cloud/global/images/rhcos-412-86-202208101039-0-gcp-x86-64
jiwei-0923-04-np5d8-master-2        pd-ssd  128      3924444975955053205  https://www.googleapis.com/compute/v1/projects/rhcos-cloud/global/images/rhcos-412-86-202208101039-0-gcp-x86-64
jiwei-0923-04-np5d8-worker-b-xtd6p  pd-ssd  128      604711057471723307   https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202206140145
jiwei-0923-04-np5d8-worker-c-n5tzh  pd-ssd  128      604711057471723307   https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202206140145
jiwei-0923-04-np5d8-worker-d-7854d  pd-ssd  128      604711057471723307   https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-48-x86-64-202206140145
$
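(A scripted variant of the same disk check, as a sketch; it assumes gcloud is authenticated against the same project and that worker disk names contain "worker":)

gcloud compute disks list --filter='name~jiwei AND name~worker' --format='value(sourceImage)' \
  | grep -c 'redhat-marketplace-public'   # expect 3: every worker boot disk comes from the marketplace image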
$ ./check_cluster_health.sh 4.12.0-0.nightly-2022-09-22-153054 registry.ci.openshift.org/ocp/release
Step #0: kubeconfig is available
Step #1: check cluster and payload availability
Step #2: Make sure all machines are applied with latest machineconfig
Checking #0
Command: oc get machineconfig
NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
00-worker                                          a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
01-master-container-runtime                        a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
01-master-kubelet                                  a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
01-worker-container-runtime                        a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
01-worker-kubelet                                  a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
99-master-generated-registries                     a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
99-master-ssh                                                                                 3.2.0             36m
99-worker-generated-registries                     a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
99-worker-ssh                                                                                 3.2.0             36m
rendered-master-00d397ffc1dabc9cfef1f2c0f88526ab   a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
rendered-worker-74d2c06650172ab441706b8be2b4ef65   a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
Checking master machines are applied with latest master machineconfig...
latest master machineconfig: rendered-master-00d397ffc1dabc9cfef1f2c0f88526ab
latest machineconfig - rendered-master-00d397ffc1dabc9cfef1f2c0f88526ab is already applied to jiwei-0923-04-np5d8-master-0.c.openshift-qe.internal jiwei-0923-04-np5d8-master-1.c.openshift-qe.internal jiwei-0923-04-np5d8-master-2.c.openshift-qe.internal
masters are already applied with latest machineconfig
Checking #0
Command: oc get machineconfig
NAME                                               GENERATEDBYCONTROLLER                      IGNITIONVERSION   AGE
00-master                                          a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
00-worker                                          a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
01-master-container-runtime                        a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
01-master-kubelet                                  a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
01-worker-container-runtime                        a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
01-worker-kubelet                                  a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
99-master-generated-registries                     a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
99-master-ssh                                                                                 3.2.0             36m
99-worker-generated-registries                     a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
99-worker-ssh                                                                                 3.2.0             36m
rendered-master-00d397ffc1dabc9cfef1f2c0f88526ab   a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
rendered-worker-74d2c06650172ab441706b8be2b4ef65   a627415c240b4c7dd2f9e90f659690d9c0f623f3   3.2.0             29m
Checking worker machines are applied with latest worker machineconfig...
latest worker machineconfig: rendered-worker-74d2c06650172ab441706b8be2b4ef65
latest machineconfig - rendered-worker-74d2c06650172ab441706b8be2b4ef65 is already applied to jiwei-0923-04-np5d8-worker-b-xtd6p.c.openshift-qe.internal jiwei-0923-04-np5d8-worker-c-n5tzh.c.openshift-qe.internal jiwei-0923-04-np5d8-worker-d-7854d.c.openshift-qe.internal
workers are already applied with latest machineconfig
Step #3: check all cluster operators get stable and ready
Checking #0
Make sure every operator does not report an empty column
Make sure every operator column reports version
Make sure every operator's AVAILABLE column is True
Passed #0
Checking #1
Make sure every operator does not report an empty column
Make sure every operator column reports version
Make sure every operator's AVAILABLE column is True
Passed #1
Checking #2
Make sure every operator does not report an empty column
Make sure every operator column reports version
Make sure every operator's AVAILABLE column is True
Passed #2
All cluster operators get stable and ready
Step #4: Make sure every machine is in 'Ready' status
All nodes are ready
Step #5: check all pods are in status running or complete
There are some failed pods:
openshift-marketplace   redhat-operators-rzbrs   1/1   Terminating   0   32s
Step #6: Get cluster version, make sure it is matched with your selected target payload version
4.12.0-0.nightly-2022-09-22-153054   True   False   13m
Cluster version is 4.12.0-0.nightly-2022-09-22-153054
Step #7: Make sure all operators report correct payload version
All operators report correct payload version
Step #8: Make sure your cluster's web console is accessible
Skip ...
...
Step #9: CVO Health Check
Step #9.1: Make sure CVO is using the image from your selected target payload
CVO image is matched with your selected payload image
Step #9.2: Make sure CVO imagePullPolicy is 'IfNotPresent'
CVO imagePullPolicy is set to 'IfNotPresent'
Step #10: MCO Health Check
all pods in openshift-machine-config-operator are running
Step #10.1: Check pods in openshift-machine-config-operator project
Step #10.2: Check if the machineconfigs and machineconfigpools are cluster scoped (no namespace qualification)
Step #10.3: Make sure MCO is using the image from your selected target payload
MCO pod is using: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a9a605715992525922dec7d33e6c491f902d71db9503c1442803e57297470f39
Step #10.4: Make sure MCO imagePullPolicy is 'IfNotPresent'
MCO imagePullPolicy is IfNotPresent
Step #10.6: Make sure machines in cluster are using machine-os-content from your selected target payload
OS version: 412.86.202209220614-0
Machine is using the following os content, refer to machine-config-daemon-jzmbd pod log in openshift-machine-config-operator namespace
* pivot://quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62fb9000753641f0007652b5446449e0e7c659a9e2958134648597cd37d53076
    CustomOrigin: Managed by machine-config-operator
    Version: 412.86.202209220614-0 (2022-09-22T06:16:48Z)
  a4cd5482264860d07e6830708ba5952df8ec2596feff273f8ccddf2d4331bdb6
    Version: 412.86.202208101039-0 (2022-08-10T10:42:36Z)
machine-os-content is from quay.io
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:62fb9000753641f0007652b5446449e0e7c659a9e2958134648597cd37d53076
These are the machineconfigs:
rendered-master-00d397ffc1dabc9cfef1f2c0f88526ab
rendered-worker-74d2c06650172ab441706b8be2b4ef65
master and worker machineconfigs are detected, and the number is 2
Step #11.1: Make sure cloud credential operator pod is using the image from your selected target payload
cloud credential operator pod is using: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:572f4b6955fe0309ad99392889ce835d0774ffbcffddac6b454b0a05274d6bab
Step #11.3: Make sure cloud credential operator imagePullPolicy is 'IfNotPresent'
cloud credential operator imagePullPolicy is IfNotPresent
cloud credential operator is from quay.io
$
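(Individual steps of the health check above can be reproduced by hand with plain oc commands; a couple of sketches, assuming the default cluster-version-operator deployment name and openshift-cluster-version namespace:)

# Step #5 equivalent: list pods that are neither Running nor Succeeded
oc get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Succeeded
# Step #9.2 equivalent: print the CVO container's imagePullPolicy
oc -n openshift-cluster-version get deployment cluster-version-operator \
  -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}{"\n"}'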
$ openshift-install destroy cluster --dir work
INFO Credentials loaded from file "/home/fedora/.gcp/osServiceAccount.json"
INFO Stopped instance jiwei-0923-04-np5d8-worker-d-7854d
INFO Stopped instance jiwei-0923-04-np5d8-worker-c-n5tzh
INFO Stopped instance jiwei-0923-04-np5d8-worker-b-xtd6p
INFO Stopped instance jiwei-0923-04-np5d8-master-2
INFO Stopped instance jiwei-0923-04-np5d8-master-0
INFO Stopped instance jiwei-0923-04-np5d8-master-1
INFO Deleted IAM project role bindings
INFO Deleted service account projects/openshift-qe/serviceAccounts/jiwei-0923-04-np5d8-m@openshift-qe.iam.gserviceaccount.com
INFO Deleted service account projects/openshift-qe/serviceAccounts/jiwei-0923-0-openshift-g-b784b@openshift-qe.iam.gserviceaccount.com
INFO Deleted service account projects/openshift-qe/serviceAccounts/jiwei-0923-04-np5d8-w@openshift-qe.iam.gserviceaccount.com
INFO Deleted service account projects/openshift-qe/serviceAccounts/jiwei-0923-0-openshift-i-l28fw@openshift-qe.iam.gserviceaccount.com
INFO Deleted service account projects/openshift-qe/serviceAccounts/jiwei-0923-0-openshift-m-mvgkh@openshift-qe.iam.gserviceaccount.com
INFO Deleted service account projects/openshift-qe/serviceAccounts/jiwei-0923-0-openshift-g-c5h9k@openshift-qe.iam.gserviceaccount.com
INFO Deleted service account projects/openshift-qe/serviceAccounts/jiwei-0923-0-cloud-crede-xn7s9@openshift-qe.iam.gserviceaccount.com
INFO Deleted service account projects/openshift-qe/serviceAccounts/jiwei-0923-0-openshift-i-87st6@openshift-qe.iam.gserviceaccount.com
INFO Deleted service account projects/openshift-qe/serviceAccounts/jiwei-0923-0-openshift-c-8scsz@openshift-qe.iam.gserviceaccount.com
INFO Deleted 2 recordset(s) in zone qe
INFO Deleted 3 recordset(s) in zone jiwei-0923-04-np5d8-private-zone
INFO Deleted DNS zone jiwei-0923-04-np5d8-private-zone
INFO Deleted bucket jiwei-0923-04-np5d8-image-registry-us-east1-fladekxsmepvxqgdcd
INFO Deleted instance jiwei-0923-04-np5d8-master-1
INFO Deleted instance jiwei-0923-04-np5d8-worker-c-n5tzh
INFO Deleted instance jiwei-0923-04-np5d8-master-2
INFO Deleted instance jiwei-0923-04-np5d8-worker-d-7854d
INFO Deleted instance jiwei-0923-04-np5d8-master-0
INFO Deleted instance jiwei-0923-04-np5d8-worker-b-xtd6p
INFO Deleted disk jiwei-0923-04-np5d8-master-0
INFO Deleted disk jiwei-0923-04-np5d8-worker-b-xtd6p
INFO Deleted disk jiwei-0923-04-np5d8-master-2
INFO Deleted disk jiwei-0923-04-np5d8-worker-d-7854d
INFO Deleted disk jiwei-0923-04-np5d8-master-1
INFO Deleted disk jiwei-0923-04-np5d8-worker-c-n5tzh
INFO Deleted firewall rule k8s-fw-a95751246d46a4d8c827098400af20cb
INFO Deleted firewall rule k8s-a95751246d46a4d8c827098400af20cb-http-hc
INFO Deleted firewall rule jiwei-0923-04-np5d8-api
INFO Deleted firewall rule jiwei-0923-04-np5d8-control-plane
INFO Deleted firewall rule jiwei-0923-04-np5d8-etcd
INFO Deleted firewall rule jiwei-0923-04-np5d8-health-checks
INFO Deleted firewall rule jiwei-0923-04-np5d8-internal-cluster
INFO Deleted firewall rule jiwei-0923-04-np5d8-internal-network
INFO Deleted address jiwei-0923-04-np5d8-cluster-ip
INFO Deleted address jiwei-0923-04-np5d8-cluster-public-ip
INFO Deleted forwarding rule jiwei-0923-04-np5d8-api-internal
INFO Deleted forwarding rule a95751246d46a4d8c827098400af20cb
INFO Deleted forwarding rule jiwei-0923-04-np5d8-api
INFO Deleted router jiwei-0923-04-np5d8-router
INFO Deleted subnetwork jiwei-0923-04-np5d8-worker-subnet
INFO Deleted target pool a95751246d46a4d8c827098400af20cb
INFO Deleted target pool jiwei-0923-04-np5d8-api
INFO Deleted backend service jiwei-0923-04-np5d8-api-internal
INFO Deleted subnetwork jiwei-0923-04-np5d8-master-subnet
INFO Deleted instance group jiwei-0923-04-np5d8-master-us-east1-c
INFO Deleted instance group jiwei-0923-04-np5d8-master-us-east1-d
INFO Deleted instance group jiwei-0923-04-np5d8-master-us-east1-b
INFO Deleted health check jiwei-0923-04-np5d8-api-internal
INFO Deleted HTTP health check a95751246d46a4d8c827098400af20cb
INFO Deleted HTTP health check jiwei-0923-04-np5d8-api
INFO Deleted network jiwei-0923-04-np5d8-network
INFO Time elapsed: 4m48s
$
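(To double-check that destroy left nothing behind, the same kind of gcloud queries can be re-run; both sketches below should print no rows once cleanup has succeeded:)

gcloud compute instances list --filter='name~jiwei-0923-04'
gcloud compute disks list --filter='name~jiwei-0923-04'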