OpenShift Bugs / OCPBUGS-9133

ClusterVersion Failing=True and Available=False should trigger alerts

    • Low
    • None
    • 3
    • OTA 252
    • 1
    • Unspecified
      * This release expands the `ClusterOperatorDown` and `ClusterOperatorDegraded` alerts to cover ClusterVersion conditions and send alerts for `Available=False` (`ClusterOperatorDown`) and `Failing=True` (`ClusterOperatorDegraded`). In previous releases, those alerts only covered ClusterOperator conditions. (link:https://issues.redhat.com/browse/OCPBUGS-9133[*OCPBUGS-9133*])
    • Enhancement
    • Done

      We have ClusterOperatorDown and ClusterOperatorDegraded in this space for ClusterOperator conditions. We should wire that up for ClusterVersion as well.
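
      For reference, a hedged jq one-liner (mirroring the queries used in the verification comments below) that surfaces the two ClusterVersion conditions the expanded alerts are meant to cover:

      # Show the ClusterVersion Failing and Available conditions that the expanded
      # ClusterOperatorDegraded / ClusterOperatorDown alerts would key on:
      oc get clusterversion version -o json \
        | jq -r '.status.conditions[] | select(.type == "Failing" or .type == "Available") | .type + "=" + .status + " " + (.reason // "") + ": " + (.message // "")'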

            Errata Tool added a comment -

            Since the problem described in this issue should be resolved in a recent advisory, it has been closed.

            For information on the advisory (Critical: OpenShift Container Platform 4.16.0 bug fix and security update), and where to find the updated files, follow the link below.

            If the solution does not work for you, open a new bug report.
            https://access.redhat.com/errata/RHSA-2024:0041


            W. Trevor King added a comment -

            The Jira bot is over-eager in closing bugs that affect end-of-life releases but also affect still-supported releases. This bug is one of those, and I've shifted it back to Verified. If the Jira bot closes it again, we may have to drop 4.6 from Affects Version/s to avoid confusing the bot, even though 4.6 and all other old releases were in fact affected by this issue.


            OpenShift Jira Bot added a comment -

            This bug is being closed because, while it may represent a valid problem, it has been reported against a version that is no longer in support. For support lifecycle dates, see https://access.redhat.com/support/policy/updates/openshift#dates.


            Dinesh Kumar S added a comment -

            yanyang@redhat.com Please advise on the above: do we need to create a new test case for this scenario? It is a destructive case.


            Jia Liu added a comment -

            Hi rhn-support-dis, it looks like a new alert on ClusterVersion, so we might need to add a test case for future regression testing. WDYT?


            Dinesh Kumar S added a comment -

            1. Disable scheduling for the control-plane nodes:

            [root@preserve-dis016-centos9 cloud-user]# oc adm cordon -l node-role.kubernetes.io/control-plane=
            node/ip-10-0-104-4.ec2.internal cordoned
            node/ip-10-0-109-179.ec2.internal cordoned
            node/ip-10-0-52-64.ec2.internal cordoned
            [root@preserve-dis016-centos9 cloud-user]#  

            2. Delete oauth pod:

            [root@preserve-dis016-centos9 cloud-user]# oc -n openshift-authentication delete "$(oc -n openshift-authentication get -o name pods | head -n1)"
            pod "oauth-openshift-5f9dc866b8-jk94h" deleted
            [root@preserve-dis016-centos9 cloud-user]#  

            3. Disable scheduling for all workers:

            [root@preserve-dis016-centos9 cloud-user]# oc adm cordon -l node-role.kubernetes.io/worker=
            node/ip-10-0-93-167.ec2.internal cordoned
            node/ip-10-0-97-79.ec2.internal cordoned
            [root@preserve-dis016-centos9 cloud-user]# 

            4. Delete the image-registry namespace:

            [root@preserve-dis016-centos9 cloud-user]# oc delete namespace openshift-image-registry
            namespace "openshift-image-registry" deleted 

            5. Check the ClusterOperator status:

            [root@preserve-dis016-centos9 cloud-user]# oc get -o json clusteroperator | jq -c '.items[] | .metadata.name as $n | .status.conditions[] | select((.type == "Available" and .status == "False") or (.type == "Degraded" and .status == "True")) | .name = $n' | sort
            {"lastTransitionTime":"2024-04-16T12:59:51Z","message":"1 of 6 credentials requests are failing to sync.","reason":"CredentialsFailing","status":"True","type":"Degraded","name":"cloud-credential"}
            {"lastTransitionTime":"2024-04-16T12:59:51Z","message":"Available: The registry is ready\nNodeCADaemonAvailable: The daemon set node-ca does not exist\nImagePrunerAvailable: Pruner CronJob has been created","reason":"NodeCADaemonNotFound::Ready","status":"False","type":"Available","name":"image-registry"}
            {"lastTransitionTime":"2024-04-16T12:59:51Z","message":"NodeCADaemonControllerDegraded: failed to create object *v1.DaemonSet, Namespace=openshift-image-registry, Name=node-ca: daemonsets.apps is forbidden: User \"system:serviceaccount:openshift-image-registry:cluster-image-registry-operator\" cannot create resource \"daemonsets\" in API group \"apps\" in the namespace \"openshift-image-registry\": RBAC: role.rbac.authorization.k8s.io \"cluster-image-registry-operator\" not found","reason":"NodeCADaemonControllerError","status":"True","type":"Degraded","name":"image-registry"}
            [root@preserve-dis016-centos9 cloud-user]# 
             

            6. Check `oc adm upgrade` status:

            [root@preserve-dis016-centos9 cloud-user]# oc adm upgrade
            Failing=True:
              Reason: MultipleErrors
              Message: Multiple errors are preventing progress:
              * Cluster operator cloud-credential is degraded
              * Could not update role "openshift-image-registry/cluster-image-registry-operator" (343 of 890): the server has forbidden updates to this resource
              * Could not update role "openshift-image-registry/prometheus-k8s" (771 of 890): resource may have been deleted
            Error while reconciling 4.16.0-0.test-2024-04-16-105220-ci-ln-8hggfxk-latest: an unknown error has occurred: MultipleErrors
            Upgradeable=False
              Reason: PoolUpdating
              Message: Cluster operator machine-config should not be upgraded between minor versions: One or more machine config pools are updating, please see `oc get mcp` for further details
            warning: Cannot display available updates:
              Reason: NoChannel
              Message: The update channel has not been configured.
            [root@preserve-dis016-centos9 cloud-user]#

            CVO started alerting:

            alertname                alertstate  name              Value
            ClusterOperatorDegraded  pending     image-registry    1
            ClusterOperatorDegraded  pending     monitoring        1
            ClusterOperatorDegraded  pending     version           1
            ClusterOperatorDown      pending     monitoring        1
            ClusterOperatorDegraded  pending     cloud-credential  None
            ClusterOperatorDown      pending     image-registry    None
            ClusterOperatorDown      firing      image-registry    1
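
            For reference, one hedged way to pull the same alert list from the in-cluster Thanos querier instead of the console (this assumes the default openshift-monitoring route and a login token that is allowed to query cluster monitoring):

            # Query the ALERTS metric for the two CVO alerts and print alertname/alertstate/name:
            HOST="$(oc -n openshift-monitoring get route thanos-querier -o jsonpath='{.spec.host}')"
            TOKEN="$(oc whoami -t)"
            curl -skG -H "Authorization: Bearer ${TOKEN}" \
              --data-urlencode 'query=ALERTS{alertname=~"ClusterOperator(Down|Degraded)"}' \
              "https://${HOST}/api/v1/query" \
              | jq -r '.data.result[].metric | [.alertname, .alertstate, .name] | @tsv'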


            W. Trevor King added a comment - edited

            You might try my cordon everything and delete some operand pods approach?
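
            A condensed, hedged sketch of that approach on a throwaway cluster, using the label selectors and namespace from the verification comment recorded above:

            # Cordon every node so deleted operand pods cannot be rescheduled, then delete an
            # operand pod to push its ClusterOperator to Available=False / Degraded=True:
            oc adm cordon -l node-role.kubernetes.io/control-plane=
            oc adm cordon -l node-role.kubernetes.io/worker=
            oc -n openshift-authentication delete "$(oc -n openshift-authentication get -o name pods | head -n1)"
            # Clean up afterwards:
            oc adm uncordon -l node-role.kubernetes.io/control-plane=
            oc adm uncordon -l node-role.kubernetes.io/worker=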

            Dinesh Kumar S added a comment -

            Initial stage:

            [root@preserve-dis016-centos9 cloud-user]# oc get clusterversion 
            NAME      VERSION                                                AVAILABLE   PROGRESSING   SINCE   STATUS
            version   4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         51m     Cluster version is 4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest
            [root@preserve-dis016-centos9 cloud-user]# oc get clusteroperator
            NAME                                       VERSION                                                AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
            authentication                             4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      51m     
            baremetal                                  4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      66m     
            cloud-controller-manager                   4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      68m     
            cloud-credential                           4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      69m     
            cluster-autoscaler                         4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      66m     
            config-operator                            4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      66m     
            console                                    4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      56m     
            control-plane-machine-set                  4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      63m     
            csi-snapshot-controller                    4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      66m     
            dns                                        4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      65m     
            etcd                                       4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      64m     
            image-registry                             4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      58m     
            ingress                                    4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      59m     
            insights                                   4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      53m     
            kube-apiserver                             4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      59m     
            kube-controller-manager                    4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      64m     
            kube-scheduler                             4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      63m     
            kube-storage-version-migrator              4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      66m     
            machine-api                                4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      62m     
            machine-approver                           4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      66m     
            machine-config                             4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      66m     
            marketplace                                4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      66m     
            monitoring                                 4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      55m     
            network                                    4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      68m     
            node-tuning                                4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      62m     
            openshift-apiserver                        4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      59m     
            openshift-controller-manager               4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      65m     
            openshift-samples                          4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      59m     
            operator-lifecycle-manager                 4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      65m     
            operator-lifecycle-manager-catalog         4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      65m     
            operator-lifecycle-manager-packageserver   4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      59m     
            service-ca                                 4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      66m     
            storage                                    4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      66m     
            [root@preserve-dis016-centos9 cloud-user]#  

            Delete pull-secret:

            [root@preserve-dis016-centos9 cloud-user]# oc -n openshift-config delete secret pull-secret
            secret "pull-secret" deleted
            [root@preserve-dis016-centos9 cloud-user]# 
            [root@preserve-dis016-centos9 cloud-user]# oc -n openshift-config get secret
            NAME                                      TYPE                                  DATA   AGE
            builder-dockercfg-swhjq                   kubernetes.io/dockercfg               1      66m
            builder-token-h52pb                       kubernetes.io/service-account-token   4      66m
            default-dockercfg-7btsf                   kubernetes.io/dockercfg               1      65m
            default-token-pht4t                       kubernetes.io/service-account-token   4      65m
            deployer-dockercfg-sgrrs                  kubernetes.io/dockercfg               1      66m
            deployer-token-qwsp4                      kubernetes.io/service-account-token   4      66m
            etcd-client                               kubernetes.io/tls                     2      70m
            etcd-metric-signer                        kubernetes.io/tls                     2      70m
            etcd-signer                               kubernetes.io/tls                     2      70m
            initial-service-account-private-key       Opaque                                1      70m
            webhook-authentication-integrated-oauth   Opaque                                1      67m
            [root@preserve-dis016-centos9 cloud-user]# 
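
            Note: on a cluster that has to keep working afterwards, the pull secret would need to be captured before the delete above so it can be recreated once the test is done. A hedged sketch (file names are illustrative):

            # Back up the pull secret (run this before the delete), then recreate it later:
            oc -n openshift-config extract secret/pull-secret --to=. --confirm   # writes ./.dockerconfigjson
            # ...after the test:
            oc -n openshift-config create secret generic pull-secret \
              --from-file=.dockerconfigjson=.dockerconfigjson \
              --type=kubernetes.io/dockerconfigjson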

            Machine-config didn't notice:

            [root@preserve-dis016-centos9 cloud-user]# oc -n openshift-machine-config-operator logs -l k8s-app=machine-config-controller
            I0412 09:26:46.973127       1 render_controller.go:127] Starting MachineConfigController-RenderController
            I0412 09:26:47.001509       1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker
            I0412 09:26:47.063418       1 reflector.go:351] Caches populated for *v1.Pod from k8s.io/client-go/informers/factory.go:159
            I0412 09:26:47.095891       1 kubelet_config_nodes.go:156] Applied Node configuration 97-master-generated-kubelet on MachineConfigPool master
            I0412 09:26:47.126857       1 container_runtime_config_controller.go:888] Applied ImageConfig cluster on MachineConfigPool master
            I0412 09:26:47.234432       1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master
            I0412 09:26:47.312612       1 container_runtime_config_controller.go:888] Applied ImageConfig cluster on MachineConfigPool worker
            I0412 09:26:47.566714       1 kubelet_config_nodes.go:156] Applied Node configuration 97-worker-generated-kubelet on MachineConfigPool worker
            I0412 09:26:48.166436       1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master
            I0412 09:26:48.768173       1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker 

            The image-registry operator noticed the error, but its Available and Degraded conditions didn't change:

            [root@preserve-dis016-centos9 cloud-user]# oc get clusteroperator image-registry 
            NAME             VERSION                                                AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
            image-registry   4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest   True        False         False      61m     
            [root@preserve-dis016-centos9 cloud-user]# 
            [root@preserve-dis016-centos9 cloud-user]# 
            [root@preserve-dis016-centos9 cloud-user]# 
            [root@preserve-dis016-centos9 cloud-user]# oc get -o json clusteroperator image-registry | jq -r '.status.conditions[] | .lastTransitionTime + " " + .type + "=" + .status + " " + .reason + ": " + .message'
            2024-04-12T08:30:39Z Available=True Ready: Available: The registry is ready
            NodeCADaemonAvailable: The daemon set node-ca has available replicas
            ImagePrunerAvailable: Pruner CronJob has been created
            2024-04-12T09:31:46Z Progressing=True Error: Progressing: Unable to apply resources: unable to apply objects: failed to update object *v1.Secret, Namespace=openshift-image-registry, Name=installation-pull-secrets: Secret "installation-pull-secrets" is invalid: data[.dockerconfigjson]: Required value
            NodeCADaemonProgressing: The daemon set node-ca is deployed
            2024-04-12T08:30:14Z Degraded=False AsExpected: 
            [root@preserve-dis016-centos9 cloud-user]# 

            Delete the machine-config-controller pod:

            [root@preserve-dis016-centos9 cloud-user]# oc -n openshift-machine-config-operator delete pod -l k8s-app=machine-config-controller
            pod "machine-config-controller-74468cd789-8lzv9" deleted
            [root@preserve-dis016-centos9 cloud-user]# 
            [root@preserve-dis016-centos9 cloud-user]# oc -n openshift-machine-config-operator logs -l k8s-app=machine-config-controller --tail 10
            Defaulted container "machine-config-controller" out of: machine-config-controller, kube-rbac-proxy
            I0412 09:34:34.587556       1 template_controller.go:410] Error syncing controllerconfig machine-config-controller: secrets "pull-secret" not found
            I0412 09:34:34.758299       1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(false) failing(true)
            I0412 09:34:34.758299       1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(false) failing(true)
            I0412 09:34:35.398643       1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(false) failing(true)
            I0412 09:34:35.398698       1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(false) failing(true)
            I0412 09:34:36.679666       1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(false) failing(true)
            I0412 09:34:36.679666       1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(false) failing(true)
            I0412 09:34:39.240078       1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(false) failing(true)
            I0412 09:34:39.240135       1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(false) failing(true)
            I0412 09:34:39.719303       1 template_controller.go:410] Error syncing controllerconfig machine-config-controller: secrets "pull-secret" not found
            [root@preserve-dis016-centos9 cloud-user]#  

            Cluster operator status:

            [root@preserve-dis016-centos9 cloud-user]# oc get -o json clusteroperator machine-config | jq -r '.status.conditions[] | .lastTransitionTime + " " + .type + "=" + .status + " " + .reason + ": " + .message'
            2024-04-12T08:24:42Z Progressing=False : Cluster version is 4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest
            2024-04-12T09:30:08Z Degraded=True RenderConfigFailed: Failed to resync 4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest because: error fetching cluster pull secret: secret "pull-secret" not found
            2024-04-12T08:23:24Z Available=True AsExpected: Cluster has deployed [{operator 4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest}]
            2024-04-12T09:00:16Z Upgradeable=True AsExpected: 
            [root@preserve-dis016-centos9 cloud-user]# 
            [root@preserve-dis016-centos9 cloud-user]# oc get -o json clusterversion version | jq -r '.status.conditions[] | .lastTransitionTime + " " + .type + "=" + .status + " " + .reason + ": " + .message'
            2024-04-12T08:20:06Z RetrievedUpdates=False NoChannel: The update channel has not been configured.
            2024-04-12T08:20:06Z ImplicitlyEnabledCapabilities=False AsExpected: Capabilities match configured spec
            2024-04-12T08:20:06Z ReleaseAccepted=True PayloadLoaded: Payload loaded version="4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest" image="registry.build03.ci.openshift.org/ci-ln-s5jnqy2/release@sha256:98cbfb91408a31a680dfdc6ab434b246d3e0eb6a6f974cdef124b28f28bc8983" architecture="amd64"
            2024-04-12T08:38:06Z Available=True : Done applying 4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest
            2024-04-12T09:34:21Z Failing=True ClusterOperatorDegraded: Cluster operator machine-config is degraded
            2024-04-12T08:38:06Z Progressing=False ClusterOperatorDegraded: Error while reconciling 4.16.0-0.test-2024-04-12-080819-ci-ln-s5jnqy2-latest: the cluster operator machine-config is degraded
            [root@preserve-dis016-centos9 cloud-user]#  
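
            For a future automated test, a hedged one-liner that blocks until ClusterVersion reports Failing=True (the timeout value is arbitrary):

            # Wait for the ClusterVersion Failing condition to become True:
            oc wait clusterversion/version --for=condition=Failing=True --timeout=600s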

            Alerting gives:

            alertname                alertstate  name            Value
            ClusterOperatorDegraded  pending     machine-config  1
            ClusterOperatorDegraded  pending     version         1

            trking, am I doing something wrong here? The status of machine-config is not being changed to Available=False.


            Dinesh Kumar S added a comment -

            [root@preserve-dis016-centos9 cloud-user]# oc get clusterversion 
            NAME      VERSION                                                AVAILABLE   PROGRESSING   SINCE   STATUS
            version   4.16.0-0.test-2024-03-22-102827-ci-ln-l7i7flb-latest   True        False         104m    Cluster version is 4.16.0-0.test-2024-03-22-102827-ci-ln-l7i7flb-latest
            
             
            [root@preserve-dis016-centos9 cloud-user]# oc -n openshift-config delete secret pull-secret
            secret "pull-secret" deleted
            
            [root@preserve-dis016-centos9 cloud-user]# oc -n openshift-machine-config-operator logs -l k8s-app=machine-config-controller --tail 2
            Defaulted container "machine-config-controller" out of: machine-config-controller, kube-rbac-proxy
            I0322 12:36:37.814549       1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool master
            I0322 12:36:38.414722       1 kubelet_config_features.go:120] Applied FeatureSet cluster on MachineConfigPool worker
            [root@preserve-dis016-centos9 cloud-user]# oc get -o json clusteroperator image-registry | jq -r '.status.conditions[] | .lastTransitionTime + " " + .type + "=" + .status + " " + .reason + ": " + .message'
            2024-03-22T10:51:48Z Available=True Ready: Available: The registry is ready
            NodeCADaemonAvailable: The daemon set node-ca has available replicas
            ImagePrunerAvailable: Pruner CronJob has been created
            2024-03-22T10:52:47Z Progressing=False Ready: Progressing: The registry is ready
            NodeCADaemonProgressing: The daemon set node-ca is deployed
            2024-03-22T10:51:25Z Degraded=False AsExpected: 
            
            [root@preserve-dis016-centos9 cloud-user]# oc -n openshift-machine-config-operator delete pod -l k8s-app=machine-config-controller
            pod "machine-config-controller-754ffff8b5-5mm26" deleted
            
            [root@preserve-dis016-centos9 cloud-user]# oc -n openshift-machine-config-operator logs -l k8s-app=machine-config-controller --tail 2
            Defaulted container "machine-config-controller" out of: machine-config-controller, kube-rbac-proxy
            I0322 12:47:15.300516       1 render_controller.go:380] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(false) failing(true)
            I0322 12:47:15.300516       1 render_controller.go:380] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(false) failing(true)
            
            [root@preserve-dis016-centos9 cloud-user]# oc get -o json clusteroperator machine-config | jq -r '.status.conditions[] | .lastTransitionTime + " " + .type + "=" + .status + " " + .reason + ": " + .message'
            2024-03-22T10:43:33Z Progressing=False : Error while reconciling 4.16.0-0.test-2024-03-22-102827-ci-ln-l7i7flb-latest
            2024-03-22T12:46:03Z Degraded=True RenderConfigFailed: Failed to resync 4.16.0-0.test-2024-03-22-102827-ci-ln-l7i7flb-latest because: error fetching cluster pull secret: secret "pull-secret" not found
            2024-03-22T12:46:03Z Available=False RenderConfigFailed: Cluster not available for [{operator 4.16.0-0.test-2024-03-22-102827-ci-ln-l7i7flb-latest}]: error fetching cluster pull secret: secret "pull-secret" not found
            2024-03-22T10:53:49Z Upgradeable=True AsExpected: 
            
            [root@preserve-dis016-centos9 cloud-user]# oc get -o json clusterversion version | jq -r '.status.conditions[] | .lastTransitionTime + " " + .type + "=" + .status + " " + .reason + ": " + .message'
            2024-03-22T10:38:37Z RetrievedUpdates=False NoChannel: The update channel has not been configured.
            2024-03-22T10:38:37Z ImplicitlyEnabledCapabilities=False AsExpected: Capabilities match configured spec
            2024-03-22T10:38:37Z ReleaseAccepted=True PayloadLoaded: Payload loaded version="4.16.0-0.test-2024-03-22-102827-ci-ln-l7i7flb-latest" image="registry.build03.ci.openshift.org/ci-ln-l7i7flb/release@sha256:178af045a10f4c90d4a17ba4b6d7c3ee92ff84741201bbd31bed900fab9d9060" architecture="amd64"
            2024-03-22T11:00:59Z Available=True : Done applying 4.16.0-0.test-2024-03-22-102827-ci-ln-l7i7flb-latest
            2024-03-22T11:00:59Z Failing=False : 
            2024-03-22T11:00:59Z Progressing=False : Cluster version is 4.16.0-0.test-2024-03-22-102827-ci-ln-l7i7flb-latest
            
            [root@preserve-dis016-centos9 cloud-user]# oc get -o json clusterversion version | jq -r '.status.conditions[] | .lastTransitionTime + " " + .type + "=" + .status + " " + .reason + ": " + .message'
            2024-03-22T10:38:37Z RetrievedUpdates=False NoChannel: The update channel has not been configured.
            2024-03-22T10:38:37Z ImplicitlyEnabledCapabilities=False AsExpected: Capabilities match configured spec
            2024-03-22T10:38:37Z ReleaseAccepted=True PayloadLoaded: Payload loaded version="4.16.0-0.test-2024-03-22-102827-ci-ln-l7i7flb-latest" image="registry.build03.ci.openshift.org/ci-ln-l7i7flb/release@sha256:178af045a10f4c90d4a17ba4b6d7c3ee92ff84741201bbd31bed900fab9d9060" architecture="amd64"
            2024-03-22T11:00:59Z Available=True : Done applying 4.16.0-0.test-2024-03-22-102827-ci-ln-l7i7flb-latest
            2024-03-22T11:00:59Z Failing=False : 
            2024-03-22T11:00:59Z Progressing=False : Cluster version is 4.16.0-0.test-2024-03-22-102827-ci-ln-l7i7flb-latest
            [root@preserve-dis016-centos9 cloud-user]# 

            Alerting gives:

              alertname                alertstate  name            Value
              ClusterOperatorDegraded  pending     machine-config  1
              ClusterOperatorDegraded  pending     version         1
              ClusterOperatorDown      pending     machine-config  1
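
            For reference, a hedged way to inspect the shipped alert definitions and confirm their expressions now cover the ClusterVersion ("version") conditions (the exact PrometheusRule object name is not assumed here, so this scans all of them):

            # Print the expressions of the ClusterOperatorDown and ClusterOperatorDegraded alerting rules:
            oc get prometheusrule --all-namespaces -o json \
              | jq -r '.items[].spec.groups[]?.rules[]? | select(.alert == "ClusterOperatorDown" or .alert == "ClusterOperatorDegraded") | .alert + ": " + .expr'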


            OpenShift Jira Bot added a comment -

            This bug is being closed because, while it may represent a valid problem, it has been reported against a version that is no longer in support. For support lifecycle dates, see https://access.redhat.com/support/policy/updates/openshift#dates.

