I1005 08:32:41.664693 1 start.go:59] Version: machine-config-daemon-4.6.0-202006240615.p0-2372-g8e2ca527-dirty (8e2ca527ec2990eee93be55b61eaa6825451b17f)
I1005 08:32:41.668038 1 leaderelection.go:122] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.
I1005 08:32:41.683882 1 leaderelection.go:245] attempting to acquire leader lease openshift-machine-config-operator/machine-config-controller...
I1005 08:32:41.690099 1 leaderelection.go:255] successfully acquired lease openshift-machine-config-operator/machine-config-controller
W1005 08:32:41.698465 1 controller_context.go:111] unable to get owner reference (falling back to namespace): replicasets.apps "machine-config-controller-5575bcf54f" is forbidden: User "system:serviceaccount:openshift-machine-config-operator:machine-config-controller" cannot get resource "replicasets" in API group "apps" in the namespace "openshift-machine-config-operator"
I1005 08:32:41.704799 1 simple_featuregate_reader.go:171] Starting feature-gate-detector
I1005 08:32:41.704836 1 metrics.go:74] Registering Prometheus metrics
I1005 08:32:41.704866 1 metrics.go:81] Starting metrics listener on 127.0.0.1:8797
I1005 08:32:41.718750 1 start.go:99] FeatureGates initialized: enabled=[AlibabaPlatform AzureWorkloadIdentity BuildCSIVolumes CloudDualStackNodeIPs ExternalCloudProviderAzure ExternalCloudProviderExternal PrivateHostedZoneAWS] disabled=[AdminNetworkPolicy AdmissionWebhookMatchConditions AutomatedEtcdBackup CSIDriverSharedResource DynamicResourceAllocation EventedPLEG ExternalCloudProvider ExternalCloudProviderGCP GCPLabelsTags GatewayAPI InsightsConfigAPI MachineAPIOperatorDisableMachineHealthCheckController MachineAPIProviderOpenStack MaxUnavailableStatefulSet NodeSwap OpenShiftPodSecurityAdmission RetroactiveDefaultStorageClass RouteExternalCertificate SigstoreImageVerification VSphereStaticIPs ValidatingAdmissionPolicy]
I1005 08:32:41.718816 1 event.go:298] Event(v1.ObjectReference{Kind:"Namespace", Namespace:"openshift-machine-config-operator", Name:"openshift-machine-config-operator", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'FeatureGatesInitialized' FeatureGates updated to featuregates.Features{Enabled:[]v1.FeatureGateName{"AlibabaPlatform", "AzureWorkloadIdentity", "BuildCSIVolumes", "CloudDualStackNodeIPs", "ExternalCloudProviderAzure", "ExternalCloudProviderExternal", "PrivateHostedZoneAWS"}, Disabled:[]v1.FeatureGateName{"AdminNetworkPolicy", "AdmissionWebhookMatchConditions", "AutomatedEtcdBackup", "CSIDriverSharedResource", "DynamicResourceAllocation", "EventedPLEG", "ExternalCloudProvider", "ExternalCloudProviderGCP", "GCPLabelsTags", "GatewayAPI", "InsightsConfigAPI", "MachineAPIOperatorDisableMachineHealthCheckController", "MachineAPIProviderOpenStack", "MaxUnavailableStatefulSet", "NodeSwap", "OpenShiftPodSecurityAdmission", "RetroactiveDefaultStorageClass", "RouteExternalCertificate", "SigstoreImageVerification", "VSphereStaticIPs", "ValidatingAdmissionPolicy"}}
I1005 08:32:41.718851 1 kubelet_config_controller.go:193] Starting MachineConfigController-KubeletConfigController
I1005 08:32:41.719041 1 container_runtime_config_controller.go:216] Starting MachineConfigController-ContainerRuntimeConfigController
I1005 08:32:41.719311 1 container_runtime_config_controller.go:417] Error syncing image config openshift-config: could not get ControllerConfig controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:41.719338 1 render_controller.go:124] Starting MachineConfigController-RenderController
I1005 08:32:41.719388 1 kubelet_config_controller.go:347] Error syncing kubeletconfig cluster: could not get ControllerConfig: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:41.720010 1 template_controller.go:235] Starting MachineConfigController-TemplateController
E1005 08:32:41.720062 1 template_controller.go:186] couldn't get ControllerConfig on dependency callback &%!w(errors.StatusError=errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"controllerconfig.machineconfiguration.openshift.io \"machine-config-controller\" not found", Reason:"NotFound", Details:(*v1.StatusDetails)(0xc000796d20), Code:404}})
I1005 08:32:41.720074 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change
I1005 08:32:41.720095 1 drain_controller.go:160] Starting MachineConfigController-DrainController
I1005 08:32:41.720115 1 node_controller.go:214] Starting MachineConfigController-NodeController
I1005 08:32:41.724999 1 kubelet_config_controller.go:347] Error syncing kubeletconfig cluster: could not get ControllerConfig: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:41.725107 1 container_runtime_config_controller.go:417] Error syncing image config openshift-config: could not get ControllerConfig controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:41.735807 1 container_runtime_config_controller.go:417] Error syncing image config openshift-config: could not get ControllerConfig controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:41.736070 1 kubelet_config_controller.go:347] Error syncing kubeletconfig cluster: could not get ControllerConfig: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:41.759091 1 container_runtime_config_controller.go:417] Error syncing image config openshift-config: could not get ControllerConfig controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:41.759120 1 kubelet_config_controller.go:347] Error syncing kubeletconfig cluster: could not get ControllerConfig: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:41.799828 1 kubelet_config_controller.go:347] Error syncing kubeletconfig cluster: could not get ControllerConfig: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:41.799888 1 container_runtime_config_controller.go:417] Error syncing image config openshift-config: could not get ControllerConfig controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:41.879904 1 kubelet_config_controller.go:347] Error syncing kubeletconfig cluster: could not get ControllerConfig: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:41.880035 1 container_runtime_config_controller.go:417] Error syncing image config openshift-config: could not get ControllerConfig controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:42.040412 1 kubelet_config_controller.go:347] Error syncing kubeletconfig cluster: could not get ControllerConfig: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:42.040531 1 container_runtime_config_controller.go:417] Error syncing image config openshift-config: could not get ControllerConfig controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:42.361566 1 kubelet_config_controller.go:347] Error syncing kubeletconfig cluster: could not get ControllerConfig: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:42.361715 1 container_runtime_config_controller.go:417] Error syncing image config openshift-config: could not get ControllerConfig controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:43.002813 1 kubelet_config_controller.go:347] Error syncing kubeletconfig cluster: could not get ControllerConfig: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:43.003013 1 container_runtime_config_controller.go:417] Error syncing image config openshift-config: could not get ControllerConfig controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:44.284624 1 container_runtime_config_controller.go:417] Error syncing image config openshift-config: could not get ControllerConfig controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:44.284744 1 kubelet_config_controller.go:347] Error syncing kubeletconfig cluster: could not get ControllerConfig: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.715463 1 render_controller.go:377] Error syncing machineconfigpool master: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.715560 1 render_controller.go:377] Error syncing machineconfigpool worker: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.715619 1 node_controller.go:882] Pool master is unconfigured, pausing 5s for renderer to initialize
I1005 08:32:46.715661 1 node_controller.go:882] Pool worker is unconfigured, pausing 5s for renderer to initialize
I1005 08:32:46.724302 1 render_controller.go:377] Error syncing machineconfigpool master: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.724382 1 render_controller.go:377] Error syncing machineconfigpool worker: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.735315 1 render_controller.go:377] Error syncing machineconfigpool master: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.735386 1 render_controller.go:377] Error syncing machineconfigpool worker: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.755690 1 render_controller.go:377] Error syncing machineconfigpool master: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.755751 1 render_controller.go:377] Error syncing machineconfigpool worker: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.796083 1 render_controller.go:377] Error syncing machineconfigpool master: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.796146 1 render_controller.go:377] Error syncing machineconfigpool worker: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.845584 1 kubelet_config_controller.go:347] Error syncing kubeletconfig cluster: could not get ControllerConfig: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.845736 1 container_runtime_config_controller.go:417] Error syncing image config openshift-config: could not get ControllerConfig controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.876288 1 render_controller.go:377] Error syncing machineconfigpool master: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:46.876375 1 render_controller.go:377] Error syncing machineconfigpool worker: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:47.036640 1 render_controller.go:377] Error syncing machineconfigpool worker: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:47.036720 1 render_controller.go:377] Error syncing machineconfigpool master: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:47.358495 1 render_controller.go:377] Error syncing machineconfigpool worker: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:47.358587 1 render_controller.go:377] Error syncing machineconfigpool master: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:47.998996 1 render_controller.go:377] Error syncing machineconfigpool worker: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
I1005 08:32:47.999086 1 render_controller.go:377] Error syncing machineconfigpool master: controllerconfig.machineconfiguration.openshift.io "machine-config-controller" not found
W1005 08:32:48.301120 1 warnings.go:70] unknown field "spec.dns.spec.platform"
W1005 08:32:48.320329 1 warnings.go:70] unknown field "spec.dns.spec.platform"
I1005 08:32:48.351165 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory
I1005 08:32:48.352703 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"7581", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-1-133.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=open /etc/docker/certs.d: no such file or directory
I1005 08:32:48.363544 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory
I1005 08:32:48.363839 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"7581", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-89-70.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=open /etc/docker/certs.d: no such file or directory
W1005 08:32:48.373464 1 warnings.go:70] unknown field "spec.dns.spec.platform"
I1005 08:32:48.393373 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory
I1005 08:32:48.393731 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"7581", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=open /etc/docker/certs.d: no such file or directory
I1005 08:32:49.280148 1 render_controller.go:377] Error syncing machineconfigpool worker: ControllerConfig has not completed: completed(false) running(true) failing(false)
I1005 08:32:49.280189 1 render_controller.go:377] Error syncing machineconfigpool master: ControllerConfig has not completed: completed(false) running(true) failing(false)
W1005 08:32:49.500267 1 warnings.go:70] unknown field "spec.dns.spec.platform"
I1005 08:32:51.967119 1 render_controller.go:510] Generated machineconfig rendered-worker-5e057f853f45f3601ca25f39bbbd5378 from 4 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 }]
I1005 08:32:51.967550 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"7582", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-5e057f853f45f3601ca25f39bbbd5378 successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f)
I1005 08:32:51.994354 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-5e057f853f45f3601ca25f39bbbd5378
I1005 08:32:51.997783 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 08:32:52.025061 1 render_controller.go:510] Generated machineconfig rendered-master-aff5bdae4669508edf17221e9ed66098 from 4 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-ssh machineconfiguration.openshift.io/v1 }]
I1005 08:32:52.025382 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"7581", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-aff5bdae4669508edf17221e9ed66098 successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f)
I1005 08:32:52.035449 1 render_controller.go:536] Pool master: now targeting: rendered-master-aff5bdae4669508edf17221e9ed66098
I1005 08:32:52.049899 1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I1005 08:32:52.395446 1 kubelet_config_nodes.go:165] Applied Node configuration 97-worker-generated-kubelet on MachineConfigPool worker
I1005 08:32:52.426334 1 kubelet_config_features.go:118] Applied FeatureSet cluster on MachineConfigPool master
I1005 08:32:52.432271 1 container_runtime_config_controller.go:889] Applied ImageConfig cluster on MachineConfigPool master
I1005 08:32:52.601997 1 container_runtime_config_controller.go:889] Applied ImageConfig cluster on MachineConfigPool worker
I1005 08:32:52.974361 1 kubelet_config_nodes.go:165] Applied Node configuration 97-master-generated-kubelet on MachineConfigPool master
I1005 08:32:53.361339 1 node_controller.go:1050] Updated controlPlaneTopology annotation of node ip-10-0-1-133.us-east-2.compute.internal from to
I1005 08:32:53.375544 1 node_controller.go:1050] Updated controlPlaneTopology annotation of node ip-10-0-60-194.us-east-2.compute.internal from to
I1005 08:32:53.411066 1 node_controller.go:1050] Updated controlPlaneTopology annotation of node ip-10-0-89-70.us-east-2.compute.internal from to
I1005 08:32:53.463992 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed taints
I1005 08:32:53.619351 1 kubelet_config_features.go:118] Applied FeatureSet cluster on MachineConfigPool worker
I1005 08:32:53.960736 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed taints
I1005 08:32:54.162592 1 node_controller.go:1096] No nodes available for updates
I1005 08:32:54.162695 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:32:54.162712 1 status.go:126] Degraded Machine: ip-10-0-1-133.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:32:54.162724 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:32:54.174177 1 kubelet_config_features.go:118] Applied FeatureSet cluster on MachineConfigPool master
I1005 08:32:54.561088 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed taints
I1005 08:32:54.771921 1 kubelet_config_features.go:118] Applied FeatureSet cluster on MachineConfigPool worker
I1005 08:32:56.747095 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = machineconfig.machineconfiguration.openshift.io "rendered-master-00b7ba71dbcfdf5565c619ede667bf5d" not found
I1005 08:32:56.747199 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"9499", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-89-70.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=machineconfig.machineconfiguration.openshift.io "rendered-master-00b7ba71dbcfdf5565c619ede667bf5d" not found
I1005 08:32:57.011491 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-5e057f853f45f3601ca25f39bbbd5378
I1005 08:32:57.097690 1 render_controller.go:510] Generated machineconfig rendered-worker-c09e64406f95cc75c7e4253c6900e173 from 7 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 }]
I1005 08:32:57.098100 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"9401", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-c09e64406f95cc75c7e4253c6900e173 successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f)
E1005 08:32:57.107576 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 08:32:57.107595 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 08:32:57.148434 1 render_controller.go:510] Generated machineconfig rendered-master-00b7ba71dbcfdf5565c619ede667bf5d from 7 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-ssh machineconfiguration.openshift.io/v1 }]
I1005 08:32:57.149105 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"9499", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-00b7ba71dbcfdf5565c619ede667bf5d successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f)
I1005 08:32:57.156841 1 render_controller.go:536] Pool master: now targeting: rendered-master-00b7ba71dbcfdf5565c619ede667bf5d
I1005 08:32:57.160768 1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I1005 08:32:58.490342 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed taints
I1005 08:32:58.506969 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed taints
I1005 08:32:58.507455 1 node_controller.go:1096] No nodes available for updates
I1005 08:32:58.507521 1 status.go:126] Degraded Machine: ip-10-0-1-133.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:32:58.507533 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:32:58.507541 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: machineconfig.machineconfiguration.openshift.io "rendered-master-00b7ba71dbcfdf5565c619ede667bf5d" not found
I1005 08:32:58.521257 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed taints
I1005 08:33:02.081009 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-c09e64406f95cc75c7e4253c6900e173
I1005 08:33:02.094711 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 08:33:03.493632 1 node_controller.go:1096] No nodes available for updates
I1005 08:33:03.493710 1 status.go:126] Degraded Machine: ip-10-0-1-133.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:03.493721 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:03.493731 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: machineconfig.machineconfiguration.openshift.io "rendered-master-00b7ba71dbcfdf5565c619ede667bf5d" not found
I1005 08:33:06.142323 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory
I1005 08:33:06.142482 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"9772", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-89-70.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=open /etc/docker/certs.d: no such file or directory
I1005 08:33:07.082635 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-c09e64406f95cc75c7e4253c6900e173
E1005 08:33:07.150539 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 08:33:07.150555 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 08:33:11.143479 1 node_controller.go:1096] No nodes available for updates
I1005 08:33:11.143545 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:11.143555 1 status.go:126] Degraded Machine: ip-10-0-1-133.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:11.143564 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:16.159735 1 node_controller.go:1096] No nodes available for updates
I1005 08:33:16.159824 1 status.go:126] Degraded Machine: ip-10-0-1-133.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:16.159834 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:16.159844 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
E1005 08:33:16.284171 1 render_controller.go:460] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I1005 08:33:16.284191 1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I1005 08:33:21.169774 1 node_controller.go:1096] No nodes available for updates
I1005 08:33:21.169919 1 status.go:126] Degraded Machine: ip-10-0-1-133.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:21.170037 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:21.170060 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:27.066826 1 drain_controller.go:173] node ip-10-0-1-133.us-east-2.compute.internal: uncordoning
I1005 08:33:27.066915 1 drain_controller.go:173] node ip-10-0-1-133.us-east-2.compute.internal: initiating uncordon (currently schedulable: true)
I1005 08:33:27.071395 1 drain_controller.go:173] node ip-10-0-1-133.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 08:33:27.071414 1 drain_controller.go:173] node ip-10-0-1-133.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 08:33:34.787476 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:33:34.787518 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason =
I1005 08:33:34.787568 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"10378", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-1-133.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Done
I1005 08:33:34.787598 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"10378", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-1-133.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=
I1005 08:33:39.787819 1 node_controller.go:1096] No nodes available for updates
I1005 08:33:39.787881 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:39.787890 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:44.865571 1 node_controller.go:1096] No nodes available for updates
I1005 08:33:44.865656 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:44.865689 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
E1005 08:33:44.942739 1 render_controller.go:460] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I1005 08:33:44.942753 1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I1005 08:33:49.848561 1 drain_controller.go:173] node ip-10-0-60-194.us-east-2.compute.internal: uncordoning
I1005 08:33:49.848591 1 drain_controller.go:173] node ip-10-0-60-194.us-east-2.compute.internal: initiating uncordon (currently schedulable: true)
I1005 08:33:49.852453 1 drain_controller.go:173] node ip-10-0-60-194.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 08:33:49.852465 1 drain_controller.go:173] node ip-10-0-60-194.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 08:33:49.882448 1 node_controller.go:1096] No nodes available for updates
I1005 08:33:49.882529 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:49.882542 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:54.893863 1 node_controller.go:1096] No nodes available for updates
I1005 08:33:54.893928 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:54.893956 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:33:55.314109 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:33:55.314141 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason =
I1005 08:33:55.314236 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"11621", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Done
I1005 08:33:55.314264 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"11621", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=
I1005 08:34:00.314601 1 node_controller.go:1096] No nodes available for updates
I1005 08:34:00.314667 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:34:05.325384 1 node_controller.go:1096] No nodes available for updates
I1005 08:34:05.325477 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:34:09.220848 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded
I1005 08:34:09.220880 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory
I1005 08:34:09.220974 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"11843", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-1-133.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Degraded
I1005 08:34:09.221019 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"11843", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-1-133.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=open /etc/docker/certs.d: no such file or directory
I1005 08:34:09.648084 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded
I1005 08:34:09.648136 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory
I1005 08:34:09.648209 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"11843", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Degraded
I1005 08:34:09.648259 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"11843", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=open /etc/docker/certs.d: no such file or directory
I1005 08:34:11.739267 1 drain_controller.go:173] node ip-10-0-89-70.us-east-2.compute.internal: uncordoning
I1005 08:34:11.739365 1 drain_controller.go:173] node ip-10-0-89-70.us-east-2.compute.internal: initiating uncordon (currently schedulable: true)
I1005 08:34:11.744693 1 drain_controller.go:173] node ip-10-0-89-70.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 08:34:11.744705 1 drain_controller.go:173] node ip-10-0-89-70.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 08:34:14.221518 1 node_controller.go:1096] No nodes available for updates
I1005 08:34:14.221567 1 status.go:126] Degraded Machine: ip-10-0-1-133.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:34:14.221573 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:34:14.221579 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:34:19.230523 1 node_controller.go:1096] No nodes available for updates
I1005 08:34:19.230686 1 status.go:126] Degraded Machine: ip-10-0-1-133.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:34:19.230717 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:34:19.230736 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:34:21.556709 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:34:21.556784 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason =
I1005 08:34:21.556878 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"12115", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-89-70.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Done
I1005 08:34:21.556924 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"12115", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-89-70.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=
I1005 08:34:23.109210 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:34:23.109301 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason =
I1005 08:34:23.109498 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"12115", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Done
I1005 08:34:23.109568 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"12115", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=
I1005 08:34:23.725358 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:34:23.725526 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason =
I1005 08:34:23.725833 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"12115", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-1-133.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Done
I1005 08:34:23.725876 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"12115", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-1-133.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=
I1005 08:34:26.558122 1 status.go:109] Pool master: All nodes are updated with MachineConfig rendered-master-00b7ba71dbcfdf5565c619ede667bf5d
I1005 08:35:29.619217 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded
I1005 08:35:29.619300 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory
I1005 08:35:29.619661 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"12382", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-89-70.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Degraded
I1005 08:35:29.619719 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"12382", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-89-70.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=open /etc/docker/certs.d: no such file or directory
I1005 08:35:31.163983 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded
I1005 08:35:31.164073 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory
I1005 08:35:31.164145 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"12382", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-1-133.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Degraded
I1005 08:35:31.164280 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"12382", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-1-133.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=open /etc/docker/certs.d: no such file or directory
I1005 08:35:31.602616 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded
I1005 08:35:31.602702 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory
I1005 08:35:31.602928 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"12382", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Degraded
I1005 08:35:31.603060 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"12382", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=open /etc/docker/certs.d: no such file or directory
I1005 08:35:34.619683 1 node_controller.go:1096] No nodes available for updates
I1005 08:35:34.619751 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:35:34.619761 1 status.go:126] Degraded Machine: ip-10-0-1-133.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:35:34.619768 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:35:39.630372 1 node_controller.go:1096] No nodes available for updates
I1005 08:35:39.630451 1 status.go:126] Degraded Machine: ip-10-0-1-133.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:35:39.630459 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:35:39.630465 1 status.go:126] Degraded Machine: ip-10-0-89-70.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
E1005 08:35:39.714962 1 render_controller.go:460] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I1005 08:35:39.714979 1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I1005 08:35:42.829123 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:35:42.829162 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason =
I1005 08:35:42.829236 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"13908", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-89-70.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Done
I1005 08:35:42.829257 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"13908", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-89-70.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=
I1005 08:35:44.641889 1 node_controller.go:1096] No nodes available for updates
I1005 08:35:44.641954 1 status.go:126] Degraded Machine: ip-10-0-1-133.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:35:44.641968 1 status.go:126] Degraded Machine: ip-10-0-60-194.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:35:45.789020 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:35:45.789051 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason =
I1005 08:35:45.789120 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"14346", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Done
I1005 08:35:45.789146 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"14346", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=
I1005 08:35:46.068155 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:35:46.068188 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason =
I1005 08:35:46.068270 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"14346", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-1-133.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Done
I1005 08:35:46.068292 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"14346", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-1-133.us-east-2.compute.internal now has machineconfiguration.openshift.io/reason=
I1005 08:35:47.618976 1 node_controller.go:1050] Updated controlPlaneTopology annotation of node ip-10-0-4-193.us-east-2.compute.internal from to
I1005 08:35:47.627293 1 node_controller.go:1050] Updated controlPlaneTopology annotation of node ip-10-0-75-39.us-east-2.compute.internal from to
I1005 08:35:47.635659 1 node_controller.go:1050] Updated controlPlaneTopology annotation of node ip-10-0-49-13.us-east-2.compute.internal from to
I1005 08:35:47.699383 1 node_controller.go:1096] No nodes available for updates
E1005 08:35:49.735387 1 render_controller.go:460] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I1005 08:35:49.735401 1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I1005 08:35:52.647963 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/currentConfig = rendered-worker-c09e64406f95cc75c7e4253c6900e173
I1005 08:35:52.647981 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-c09e64406f95cc75c7e4253c6900e173
I1005 08:35:52.647987 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:35:52.745987 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed taints
I1005 08:35:52.749514 1 node_controller.go:1096] No nodes available for updates
E1005 08:35:53.145341 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 08:35:53.145365 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 08:35:53.437182 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/currentConfig = rendered-worker-c09e64406f95cc75c7e4253c6900e173
I1005 08:35:53.437199 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-c09e64406f95cc75c7e4253c6900e173
I1005 08:35:53.437205 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:35:54.512490 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/currentConfig = rendered-worker-c09e64406f95cc75c7e4253c6900e173
I1005 08:35:54.512586 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-c09e64406f95cc75c7e4253c6900e173
I1005 08:35:54.512609 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:35:57.776359 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 08:35:57.805348 1 node_controller.go:1096] No nodes available for updates
I1005 08:35:57.809230 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 08:36:02.777000 1 node_controller.go:1096] No nodes available for updates
I1005 08:36:18.437044 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning
I1005 08:36:18.437060 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: true)
I1005 08:36:18.441184 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 08:36:18.441198 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 08:36:19.094924 1 drain_controller.go:173] node ip-10-0-75-39.us-east-2.compute.internal: uncordoning
I1005 08:36:19.094940 1 drain_controller.go:173] node ip-10-0-75-39.us-east-2.compute.internal: initiating uncordon (currently schedulable: true)
I1005 08:36:19.097721 1 drain_controller.go:173] node ip-10-0-75-39.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 08:36:19.097733 1 drain_controller.go:173] node ip-10-0-75-39.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 08:36:19.858711 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning
I1005 08:36:19.858727 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: true)
I1005 08:36:19.861545 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 08:36:19.861596 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 08:36:22.058203 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: Reporting ready
I1005 08:36:22.078310 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed taints
I1005 08:36:23.233204 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed taints
I1005 08:36:23.839877 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed labels
I1005 08:36:24.077172 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason =
I1005 08:36:24.256067 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting ready
I1005 08:36:24.270293 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 08:36:24.882105 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason =
I1005 08:36:25.236555 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason =
I1005 08:36:25.362774 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting ready
I1005 08:36:25.392017 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 08:36:25.581027 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed labels
I1005 08:36:26.700480 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed labels
I1005 08:36:28.240307 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 08:36:28.251926 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 08:36:51.863858 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded
I1005 08:36:51.863878 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory
I1005 08:36:52.503398 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded
I1005 08:36:52.503440 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory
I1005 08:36:53.683189 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded
I1005 08:36:53.683255 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory
I1005 08:36:56.864651 1 node_controller.go:1096] No nodes available for updates
I1005 08:36:56.864706 1 status.go:126] Degraded Machine: ip-10-0-4-193.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:36:56.864712 1 status.go:126] Degraded Machine: ip-10-0-75-39.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:36:56.864716 1 status.go:126] Degraded Machine: ip-10-0-49-13.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:37:01.874555 1 node_controller.go:1096] No nodes available for updates
I1005 08:37:01.874621 1 status.go:126] Degraded Machine: ip-10-0-4-193.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:37:01.874629 1 status.go:126] Degraded Machine: ip-10-0-75-39.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:37:01.874635 1 status.go:126] Degraded Machine: ip-10-0-49-13.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory
I1005 08:37:05.291051 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:37:05.291072 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason =
I1005 08:37:06.374206 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:37:06.374224 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason =
I1005 08:37:07.083663 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done
I1005 08:37:07.083682 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation
machineconfiguration.openshift.io/reason = I1005 08:37:13.288386 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded I1005 08:37:13.288572 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory I1005 08:37:13.451474 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded I1005 08:37:13.451547 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory I1005 08:37:14.438758 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded I1005 08:37:14.438852 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory I1005 08:37:15.301249 1 node_controller.go:1096] No nodes available for updates I1005 08:37:15.301308 1 status.go:126] Degraded Machine: ip-10-0-4-193.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 08:37:15.301315 1 status.go:126] Degraded Machine: ip-10-0-75-39.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 08:37:15.301319 1 status.go:126] Degraded Machine: ip-10-0-49-13.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory E1005 08:37:15.378324 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 08:37:15.378344 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 08:37:20.312217 1 node_controller.go:1096] No nodes available for updates I1005 08:37:20.312316 1 status.go:126] Degraded Machine: ip-10-0-49-13.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 08:37:20.312347 1 status.go:126] Degraded Machine: ip-10-0-4-193.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 08:37:20.312357 1 status.go:126] Degraded Machine: ip-10-0-75-39.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 08:37:25.351636 1 node_controller.go:1096] No nodes available for updates I1005 08:37:25.351771 1 status.go:126] Degraded Machine: ip-10-0-4-193.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 08:37:25.351796 1 status.go:126] Degraded Machine: ip-10-0-75-39.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 08:37:25.351816 1 status.go:126] 
Degraded Machine: ip-10-0-49-13.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory E1005 08:37:25.465027 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 08:37:25.465066 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 08:37:26.063928 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done I1005 08:37:26.064010 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = I1005 08:37:26.299674 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done I1005 08:37:26.299746 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = I1005 08:37:27.063741 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done I1005 08:37:27.063804 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = I1005 08:37:33.791032 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded I1005 08:37:33.791047 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory I1005 08:37:33.947808 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded I1005 08:37:33.947901 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory I1005 08:37:34.934186 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded I1005 08:37:34.934261 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory I1005 08:37:35.372239 1 node_controller.go:1096] No nodes available for updates I1005 08:37:35.372293 1 status.go:126] Degraded Machine: ip-10-0-4-193.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 08:37:35.372299 1 status.go:126] Degraded Machine: ip-10-0-75-39.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 
08:37:35.372304 1 status.go:126] Degraded Machine: ip-10-0-49-13.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory E1005 08:37:35.450021 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 08:37:35.450036 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 08:37:40.382650 1 node_controller.go:1096] No nodes available for updates I1005 08:37:40.382699 1 status.go:126] Degraded Machine: ip-10-0-4-193.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 08:37:40.382705 1 status.go:126] Degraded Machine: ip-10-0-75-39.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 08:37:40.382709 1 status.go:126] Degraded Machine: ip-10-0-49-13.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 08:37:46.526699 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done I1005 08:37:46.526797 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = I1005 08:37:46.816182 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done I1005 08:37:46.816285 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = I1005 08:37:47.552656 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done I1005 08:37:47.552716 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = W1005 08:39:14.011621 1 warnings.go:70] unknown field "spec.dns.spec.platform" W1005 08:39:14.022521 1 warnings.go:70] unknown field "spec.dns.spec.platform" W1005 08:39:14.046707 1 warnings.go:70] unknown field "spec.dns.spec.platform" W1005 08:39:15.211984 1 warnings.go:70] unknown field "spec.dns.spec.platform" E1005 08:39:19.867637 1 leaderelection.go:327] error retrieving resource lock openshift-machine-config-operator/machine-config-controller: the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io machine-config-controller) I1005 08:39:40.842579 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded I1005 08:39:40.843487 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory I1005 08:39:40.889119 1 template_controller.go:134] 
Re-syncing ControllerConfig due to secret pull-secret change I1005 08:39:41.000821 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Degraded I1005 08:39:41.002458 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = open /etc/docker/certs.d: no such file or directory I1005 08:39:45.843988 1 node_controller.go:1096] No nodes available for updates I1005 08:39:45.844123 1 status.go:126] Degraded Machine: ip-10-0-4-193.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 08:39:45.844150 1 status.go:126] Degraded Machine: ip-10-0-75-39.us-east-2.compute.internal and Degraded Reason: open /etc/docker/certs.d: no such file or directory I1005 08:39:46.577798 1 render_controller.go:510] Generated machineconfig rendered-worker-898581a84fcd701560513cb931ec6f1d from 7 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 }] I1005 08:39:46.590029 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23110", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-898581a84fcd701560513cb931ec6f1d successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f) I1005 08:39:46.590811 1 render_controller.go:510] Generated machineconfig rendered-master-a669adc0efde176f2e9d2626aeba36ce from 7 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-ssh machineconfiguration.openshift.io/v1 }] I1005 08:39:46.591072 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"14497", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-a669adc0efde176f2e9d2626aeba36ce successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f) I1005 08:39:46.599002 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-898581a84fcd701560513cb931ec6f1d I1005 08:39:46.601692 1 render_controller.go:536] Pool master: now 
targeting: rendered-master-a669adc0efde176f2e9d2626aeba36ce W1005 08:39:48.903908 1 warnings.go:70] unknown field "spec.dns.spec.platform" W1005 08:39:48.921333 1 warnings.go:70] unknown field "spec.dns.spec.platform" W1005 08:39:48.960320 1 warnings.go:70] unknown field "spec.dns.spec.platform" I1005 08:39:49.566315 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done I1005 08:39:49.566386 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = I1005 08:39:49.745520 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done I1005 08:39:49.745586 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/reason = W1005 08:39:50.108265 1 warnings.go:70] unknown field "spec.dns.spec.platform" I1005 08:39:50.882992 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 08:39:50.906532 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed taints I1005 08:39:50.938142 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 08:39:50.943588 1 node_controller.go:483] Pool worker: 3 candidate nodes in 3 zones for update, capacity: 1 I1005 08:39:50.943603 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-898581a84fcd701560513cb931ec6f1d I1005 08:39:50.970365 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-898581a84fcd701560513cb931ec6f1d I1005 08:39:50.970610 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23259", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-898581a84fcd701560513cb931ec6f1d I1005 08:39:51.668887 1 render_controller.go:510] Generated machineconfig rendered-worker-6bf803109332579a2637f8dc27f9f58f from 7 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 }] I1005 08:39:51.669233 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23666", FieldPath:""}): type: 'Normal' reason: 
'RenderedConfigGenerated' rendered-worker-6bf803109332579a2637f8dc27f9f58f successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f) I1005 08:39:51.670064 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed taints I1005 08:39:51.682471 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 08:39:51.696983 1 render_controller.go:510] Generated machineconfig rendered-master-76f1d53bfd4be696610794ea1e7d3803 from 7 configs: [{MachineConfig 00-master machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-master-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-master-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-master-ssh machineconfiguration.openshift.io/v1 }] I1005 08:39:51.697317 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23260", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-master-76f1d53bfd4be696610794ea1e7d3803 successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f) I1005 08:39:51.716484 1 render_controller.go:536] Pool master: now targeting: rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:39:52.265740 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed taints I1005 08:39:52.265795 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 08:39:52.470979 1 node_controller.go:483] Pool master: 3 candidate nodes in 3 zones for update, capacity: 1 I1005 08:39:52.471129 1 node_controller.go:1164] Deferring update of machine config operator node: ip-10-0-1-133.us-east-2.compute.internal I1005 08:39:52.471137 1 node_controller.go:483] Pool master: filtered to 2 candidate nodes for update, capacity: 1 I1005 08:39:52.471145 1 node_controller.go:483] Pool master: Setting node ip-10-0-60-194.us-east-2.compute.internal target to MachineConfig rendered-master-a669adc0efde176f2e9d2626aeba36ce I1005 08:39:52.471253 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23260", FieldPath:""}): type: 'Normal' reason: 'DeferringOperatorNodeUpdate' Deferring update of machine config operator node ip-10-0-1-133.us-east-2.compute.internal I1005 08:39:53.068180 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed taints I1005 08:39:53.268559 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", 
APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23260", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-60-194.us-east-2.compute.internal to MachineConfig rendered-master-a669adc0efde176f2e9d2626aeba36ce I1005 08:39:53.275207 1 node_controller.go:811] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again I1005 08:39:53.280775 1 node_controller.go:1096] No nodes available for updates I1005 08:39:53.482348 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-a669adc0efde176f2e9d2626aeba36ce I1005 08:39:53.482482 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23839", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-a669adc0efde176f2e9d2626aeba36ce I1005 08:39:54.761349 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 08:39:54.761556 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23839", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Working I1005 08:39:55.883633 1 node_controller.go:1096] No nodes available for updates I1005 08:39:58.302628 1 node_controller.go:1096] No nodes available for updates I1005 08:39:58.972534 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 08:39:58.972557 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: true) I1005 08:39:58.981261 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 08:39:58.981283 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 08:40:00.908854 1 node_controller.go:1096] No nodes available for updates I1005 08:40:03.048464 1 drain_controller.go:173] node ip-10-0-60-194.us-east-2.compute.internal: uncordoning I1005 08:40:03.048541 1 drain_controller.go:173] node ip-10-0-60-194.us-east-2.compute.internal: initiating uncordon (currently schedulable: true) I1005 08:40:03.055204 1 drain_controller.go:173] node ip-10-0-60-194.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 08:40:03.055273 1 drain_controller.go:173] node ip-10-0-60-194.us-east-2.compute.internal: operation successful; applying completion annotation I1005 08:40:04.242616 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-898581a84fcd701560513cb931ec6f1d I1005 08:40:04.547966 1 node_controller.go:493] Pool 
worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 08:40:08.077348 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: Completed update to rendered-master-a669adc0efde176f2e9d2626aeba36ce I1005 08:40:09.243658 1 node_controller.go:1096] No nodes available for updates I1005 08:40:13.077916 1 node_controller.go:483] Pool master: 3 candidate nodes in 3 zones for update, capacity: 1 I1005 08:40:13.078026 1 node_controller.go:1164] Deferring update of machine config operator node: ip-10-0-1-133.us-east-2.compute.internal I1005 08:40:13.078034 1 node_controller.go:483] Pool master: filtered to 2 candidate nodes for update, capacity: 1 I1005 08:40:13.078041 1 node_controller.go:483] Pool master: Setting node ip-10-0-60-194.us-east-2.compute.internal target to MachineConfig rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:13.078083 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23839", FieldPath:""}): type: 'Normal' reason: 'DeferringOperatorNodeUpdate' Deferring update of machine config operator node ip-10-0-1-133.us-east-2.compute.internal I1005 08:40:13.096730 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23839", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-60-194.us-east-2.compute.internal to MachineConfig rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:13.112033 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:13.112297 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23839", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:14.409994 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 08:40:14.410083 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23839", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-60-194.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Working I1005 08:40:16.265701 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Done I1005 08:40:18.096739 1 drain_controller.go:173] node ip-10-0-60-194.us-east-2.compute.internal: uncordoning I1005 08:40:18.096758 1 
drain_controller.go:173] node ip-10-0-60-194.us-east-2.compute.internal: initiating uncordon (currently schedulable: true) I1005 08:40:18.103495 1 drain_controller.go:173] node ip-10-0-60-194.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 08:40:18.103507 1 drain_controller.go:173] node ip-10-0-60-194.us-east-2.compute.internal: operation successful; applying completion annotation I1005 08:40:18.141125 1 node_controller.go:1096] No nodes available for updates I1005 08:40:18.152246 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: changed taints I1005 08:40:21.266329 1 node_controller.go:483] Pool worker: 3 candidate nodes in 3 zones for update, capacity: 1 I1005 08:40:21.266345 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 08:40:21.280738 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 08:40:21.280818 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23910", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 08:40:22.792747 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 08:40:23.152468 1 node_controller.go:1096] No nodes available for updates I1005 08:40:26.280925 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 08:40:26.280994 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: true) I1005 08:40:26.285022 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 08:40:26.285035 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 08:40:26.297294 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 08:40:26.300601 1 node_controller.go:1096] No nodes available for updates I1005 08:40:27.233575 1 node_controller.go:493] Pool master[zone=us-east-2b]: node ip-10-0-60-194.us-east-2.compute.internal: Completed update to rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:31.297566 1 node_controller.go:1096] No nodes available for updates I1005 08:40:32.234699 1 node_controller.go:483] Pool master: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 08:40:32.234806 1 node_controller.go:1164] Deferring update of machine config operator node: ip-10-0-1-133.us-east-2.compute.internal I1005 08:40:32.234815 1 node_controller.go:483] Pool master: filtered to 1 candidate nodes for update, capacity: 1 I1005 08:40:32.234821 1 node_controller.go:483] Pool master: Setting node ip-10-0-89-70.us-east-2.compute.internal target to MachineConfig rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:32.234906 1 event.go:298] 
Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23839", FieldPath:""}): type: 'Normal' reason: 'DeferringOperatorNodeUpdate' Deferring update of machine config operator node ip-10-0-1-133.us-east-2.compute.internal I1005 08:40:32.265017 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23839", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-89-70.us-east-2.compute.internal to MachineConfig rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:32.279662 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:32.279763 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"25032", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-89-70.us-east-2.compute.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:33.752575 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 08:40:33.752688 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"25032", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-89-70.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Working I1005 08:40:35.560842 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 08:40:37.270563 1 drain_controller.go:173] node ip-10-0-89-70.us-east-2.compute.internal: uncordoning I1005 08:40:37.270580 1 drain_controller.go:173] node ip-10-0-89-70.us-east-2.compute.internal: initiating uncordon (currently schedulable: true) I1005 08:40:37.275063 1 drain_controller.go:173] node ip-10-0-89-70.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 08:40:37.275073 1 drain_controller.go:173] node ip-10-0-89-70.us-east-2.compute.internal: operation successful; applying completion annotation I1005 08:40:37.310650 1 node_controller.go:1096] No nodes available for updates I1005 08:40:37.331576 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: changed taints E1005 08:40:37.386822 1 render_controller.go:460] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again I1005 08:40:37.386889 1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on 
machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again I1005 08:40:40.561997 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 08:40:40.562017 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 08:40:40.577283 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"23910", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 08:40:40.578383 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 08:40:42.002160 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 08:40:42.332256 1 node_controller.go:1096] No nodes available for updates I1005 08:40:45.578862 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 08:40:45.578897 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: true) I1005 08:40:45.597356 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 08:40:45.597372 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 08:40:45.643297 1 node_controller.go:1096] No nodes available for updates I1005 08:40:45.644318 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 08:40:45.745786 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 08:40:45.745804 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 08:40:47.162343 1 node_controller.go:493] Pool master[zone=us-east-2c]: node ip-10-0-89-70.us-east-2.compute.internal: Completed update to rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:50.645080 1 node_controller.go:1096] No nodes available for updates I1005 08:40:52.162585 1 node_controller.go:483] Pool master: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 08:40:52.162600 1 node_controller.go:483] Pool master: filtered to 1 candidate nodes for update, capacity: 1 I1005 08:40:52.162603 1 node_controller.go:483] Pool master: Setting node ip-10-0-1-133.us-east-2.compute.internal target to MachineConfig rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:52.182960 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", 
Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"25192", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-1-133.us-east-2.compute.internal to MachineConfig rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:52.201623 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:52.201746 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"25543", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-1-133.us-east-2.compute.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:40:53.640444 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 08:40:53.640664 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"master", UID:"b830dd47-3c30-45cf-b675-c4d210ec2b70", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"25543", FieldPath:""}): type: 'Normal' reason: 'AnnotationChange' Node ip-10-0-1-133.us-east-2.compute.internal now has machineconfiguration.openshift.io/state=Working I1005 08:40:53.821064 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 08:40:57.228202 1 node_controller.go:1096] No nodes available for updates I1005 08:40:57.248994 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: changed taints E1005 08:40:57.327328 1 render_controller.go:460] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again I1005 08:40:57.327344 1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again I1005 08:40:58.821333 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 08:40:58.821352 1 node_controller.go:483] Pool worker: Setting node ip-10-0-75-39.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 08:40:58.838753 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"25326", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-75-39.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 08:40:58.847808 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node 
ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 08:41:00.218845 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 08:41:02.232404 1 drain_controller.go:173] node ip-10-0-1-133.us-east-2.compute.internal: uncordoning I1005 08:41:02.232445 1 drain_controller.go:173] node ip-10-0-1-133.us-east-2.compute.internal: initiating uncordon (currently schedulable: true) I1005 08:41:02.241634 1 node_controller.go:1096] No nodes available for updates I1005 08:41:02.244691 1 drain_controller.go:173] node ip-10-0-1-133.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 08:41:02.244705 1 drain_controller.go:173] node ip-10-0-1-133.us-east-2.compute.internal: operation successful; applying completion annotation I1005 08:41:03.848579 1 drain_controller.go:173] node ip-10-0-75-39.us-east-2.compute.internal: uncordoning I1005 08:41:03.848598 1 drain_controller.go:173] node ip-10-0-75-39.us-east-2.compute.internal: initiating uncordon (currently schedulable: true) I1005 08:41:03.853764 1 drain_controller.go:173] node ip-10-0-75-39.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 08:41:03.853777 1 drain_controller.go:173] node ip-10-0-75-39.us-east-2.compute.internal: operation successful; applying completion annotation I1005 08:41:03.864376 1 node_controller.go:1096] No nodes available for updates I1005 08:41:03.866789 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed taints E1005 08:41:03.955130 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 08:41:03.955145 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 08:41:07.907965 1 node_controller.go:493] Pool master[zone=us-east-2a]: node ip-10-0-1-133.us-east-2.compute.internal: Completed update to rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:41:08.867341 1 node_controller.go:1096] No nodes available for updates I1005 08:41:12.118905 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 08:41:12.908719 1 status.go:109] Pool master: All nodes are updated with MachineConfig rendered-master-76f1d53bfd4be696610794ea1e7d3803 I1005 08:41:17.120242 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 09:05:23.406246 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: Reporting unready: node ip-10-0-75-39.us-east-2.compute.internal is reporting Unschedulable I1005 09:05:23.462001 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed taints I1005 09:05:28.407099 1 node_controller.go:1096] No nodes available for updates I1005 09:05:33.415713 1 node_controller.go:1096] No nodes 
available for updates I1005 09:06:04.081132 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change I1005 09:06:55.435496 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: Reporting unready: node ip-10-0-75-39.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 09:06:55.489469 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed taints I1005 09:07:00.436204 1 node_controller.go:1096] No nodes available for updates I1005 09:07:00.855943 1 node_controller.go:493] Pool worker[zone=us-east-2c]: node ip-10-0-75-39.us-east-2.compute.internal: changed taints I1005 09:07:05.856471 1 node_controller.go:1096] No nodes available for updates I1005 09:32:27.260894 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change I1005 09:58:50.440459 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change I1005 10:10:51.757025 1 container_runtime_config_controller.go:686] Applied ContainerRuntimeConfig change-ctr-cr-config on MachineConfigPool worker I1005 10:10:56.832309 1 render_controller.go:510] Generated machineconfig rendered-worker-a7c86cc4c3da8242c59a1ae1e3cfaa83 from 8 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-containerruntime machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 }] I1005 10:10:56.832530 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"38471", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-a7c86cc4c3da8242c59a1ae1e3cfaa83 successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f) I1005 10:10:56.842102 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-a7c86cc4c3da8242c59a1ae1e3cfaa83 I1005 10:11:01.856537 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:11:01.876601 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 10:11:01.876619 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-a7c86cc4c3da8242c59a1ae1e3cfaa83 I1005 10:11:01.876777 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:11:01.897718 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"60501", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node 
ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-a7c86cc4c3da8242c59a1ae1e3cfaa83 I1005 10:11:01.897966 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-a7c86cc4c3da8242c59a1ae1e3cfaa83 E1005 10:11:01.984905 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:11:01.984980 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:11:04.024328 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:11:06.855927 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 10:11:06.856013 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:11:06.878175 1 node_controller.go:1096] No nodes available for updates I1005 10:11:06.878900 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:11:06.883358 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 10:11:06.883311 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:11:06.947157 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 10:11:07.528333 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 10:11:07.530353 1 drain_controller.go:144] evicting pod openshift-operator-lifecycle-manager/collect-profiles-28275000-7sq6h I1005 10:11:07.530413 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-gjksw I1005 10:11:07.530441 1 drain_controller.go:144] evicting pod openshift-console/downloads-6565ffd4cd-lv6bq I1005 10:11:07.530448 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-zqlzz I1005 10:11:07.530462 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-tg4zs I1005 10:11:07.530469 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-q8wbg I1005 10:11:07.530469 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 10:11:07.530477 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 
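
Note: the sequence just above — cordon the node, skip DaemonSet-managed pods, then evict the remaining pods one by one — is the standard Kubernetes drain flow. As a rough illustration only (not the machine-config-controller's own code path), a minimal cordon-and-drain using the upstream k8s.io/kubectl/pkg/drain helper could look like the sketch below; the helper settings, the in-cluster config, and the timeout are assumptions, and the node name is simply copied from the log.

    // Illustrative cordon-and-drain sketch (assumed settings, not MCC code).
    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/kubectl/pkg/drain"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // assumption: running inside the cluster
        if err != nil {
            log.Fatal(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        helper := &drain.Helper{
            Ctx:                 context.Background(),
            Client:              client,
            Force:               true,
            IgnoreAllDaemonSets: true, // corresponds to the "ignoring DaemonSet-managed Pods" warning above
            DeleteEmptyDirData:  true,
            GracePeriodSeconds:  -1,               // use each pod's own grace period
            Timeout:             90 * time.Second, // assumed value
            Out:                 os.Stdout,
            ErrOut:              os.Stderr,
        }

        nodeName := "ip-10-0-4-193.us-east-2.compute.internal" // taken from the log above
        node, err := client.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }

        // Cordon first (marks the node unschedulable), then evict the pods.
        if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
            log.Fatal(err)
        }
        if err := drain.RunNodeDrain(helper, nodeName); err != nil {
            log.Fatal(err)
        }
    }

The "WARNING: ignoring DaemonSet-managed Pods" line and the per-pod "evicting pod" / "Evicted pod" lines above correspond to the DaemonSet filter and the eviction loop of such a drain helper, respectively.
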
I1005 10:11:07.530477 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-gcpr2 I1005 10:11:07.530485 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-vc24d I1005 10:11:07.530492 1 drain_controller.go:144] evicting pod openshift-operator-lifecycle-manager/collect-profiles-28274970-bg9n7 I1005 10:11:07.530493 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-gsbtw I1005 10:11:07.530502 1 drain_controller.go:144] evicting pod openshift-operator-lifecycle-manager/collect-profiles-28274985-p574x I1005 10:11:07.584705 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-operator-lifecycle-manager/collect-profiles-28275000-7sq6h I1005 10:11:07.586908 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-console/downloads-6565ffd4cd-lv6bq I1005 10:11:07.885617 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-operator-lifecycle-manager/collect-profiles-28274970-bg9n7 I1005 10:11:08.176895 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-operator-lifecycle-manager/collect-profiles-28274985-p574x I1005 10:11:09.363982 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-gcpr2 I1005 10:11:09.769530 1 request.go:696] Waited for 1.151392685s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-7c8dc7fcdb-vc24d I1005 10:11:09.776217 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-vc24d I1005 10:11:09.965188 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-gsbtw I1005 10:11:10.166746 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 10:11:10.363698 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-gjksw I1005 10:11:10.567371 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1 I1005 10:11:10.960850 1 request.go:696] Waited for 1.37213935s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-578654767d-q8wbg I1005 10:11:10.963863 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-q8wbg I1005 10:11:11.884304 1 node_controller.go:1096] No nodes available for updates I1005 10:11:35.598685 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-zqlzz I1005 10:11:55.590393 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-tg4zs I1005 10:11:55.590440 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:12:55.802740 1 node_controller.go:493] Pool 
worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 10:12:55.842999 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:12:56.504988 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:13:00.802928 1 node_controller.go:1096] No nodes available for updates I1005 10:13:05.937042 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 10:13:05.973573 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:13:06.495309 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:13:10.936495 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 10:13:10.936512 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 10:13:10.937378 1 node_controller.go:1096] No nodes available for updates I1005 10:13:10.958239 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:13:10.958515 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:13:10.976361 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:13:15.977468 1 node_controller.go:1096] No nodes available for updates I1005 10:13:16.712779 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-a7c86cc4c3da8242c59a1ae1e3cfaa83 I1005 10:13:21.713825 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 10:13:21.713842 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-a7c86cc4c3da8242c59a1ae1e3cfaa83 I1005 10:13:21.732285 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-a7c86cc4c3da8242c59a1ae1e3cfaa83 I1005 10:13:21.734099 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"60566", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-a7c86cc4c3da8242c59a1ae1e3cfaa83 I1005 10:13:23.190607 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:13:23.397866 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 10:13:23.397974 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:13:23.431042 1 drain_controller.go:173] 
node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:13:23.431110 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain I1005 10:13:23.465569 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 10:13:24.083558 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 10:13:24.085675 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-hdb6j I1005 10:13:24.085677 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-bqz7n I1005 10:13:24.085677 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-tstc5 I1005 10:13:24.085687 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-52ghd I1005 10:13:24.085689 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-7tns9 I1005 10:13:24.085689 1 drain_controller.go:144] evicting pod openshift-console/downloads-6565ffd4cd-vsctv I1005 10:13:24.085695 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:13:24.085697 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-k88tn I1005 10:13:24.085702 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-gnhhx I1005 10:13:24.085700 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-dnz75 I1005 10:13:24.085705 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 10:13:24.085710 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-n8g72 I1005 10:13:24.085712 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-ccvcz E1005 10:13:24.105046 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 10:13:24.125911 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-tstc5" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:13:24.185722 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-console/downloads-6565ffd4cd-vsctv E1005 10:13:24.305661 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
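The eviction failures above, "Cannot evict pod as it would violate the pod's disruption budget" with a 5s retry, come from the Eviction subresource: while a PodDisruptionBudget has no disruptions to spare, the API server rejects the request with 429 TooManyRequests, and the drain logic retries until a replacement replica is ready elsewhere. A minimal sketch of that loop follows; it assumes a kubeconfig at the default location and is not the drain controller's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	policyv1 "k8s.io/api/policy/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// evictWithRetry keeps requesting an eviction that is currently blocked by a
// PodDisruptionBudget. The API server answers such requests with
// 429 TooManyRequests, so the loop waits 5s and tries again, matching the
// retry cadence visible in the log.
func evictWithRetry(ctx context.Context, client kubernetes.Interface, namespace, pod string) error {
	for {
		err := client.PolicyV1().Evictions(namespace).Evict(ctx, &policyv1.Eviction{
			ObjectMeta: metav1.ObjectMeta{Name: pod, Namespace: namespace},
		})
		switch {
		case err == nil:
			return nil
		case apierrors.IsTooManyRequests(err): // the PDB has no disruptions to spare yet
			fmt.Printf("eviction of %s/%s blocked by a PDB, retrying in 5s\n", namespace, pod)
			select {
			case <-time.After(5 * time.Second):
			case <-ctx.Done():
				return ctx.Err()
			}
		default:
			return err
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := evictWithRetry(context.Background(), client,
		"openshift-monitoring", "alertmanager-main-0"); err != nil {
		panic(err)
	}
}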
I1005 10:13:25.136469 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-hdb6j I1005 10:13:25.170132 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-7tns9 I1005 10:13:25.913538 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-ccvcz I1005 10:13:26.149711 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-dnz75 I1005 10:13:26.314749 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-gnhhx I1005 10:13:26.715521 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-k88tn I1005 10:13:26.755251 1 node_controller.go:1096] No nodes available for updates I1005 10:13:26.763232 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:13:26.915323 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-bqz7n I1005 10:13:27.114691 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-n8g72 I1005 10:13:29.105928 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:13:29.113307 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:13:29.126367 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-tstc5 E1005 10:13:29.130992 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-tstc5" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:13:29.306259 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 10:13:30.347332 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 10:13:31.763767 1 node_controller.go:1096] No nodes available for updates I1005 10:13:34.113473 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:13:34.121075 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:13:34.131067 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-tstc5 I1005 10:13:39.122156 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:13:39.129687 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
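The "Waited for ... due to client-side throttling, not priority and fairness" entries earlier in the log are produced by client-go's own rate limiter, not by the API server: the controller's REST client has a fixed QPS/Burst budget and delays requests that exceed it. The fragment below only shows where that knob lives on a rest.Config; the values are illustrative and are not the operator's settings.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// These two fields set the client-side rate limit that produces the
	// "Waited for ... due to client-side throttling" messages; the numbers
	// below are illustrative only.
	cfg.QPS = 50
	cfg.Burst = 100

	client := kubernetes.NewForConfigOrDie(cfg)
	info, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("connected to", info.GitVersion)
}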
I1005 10:13:44.130118 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:13:46.161479 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0 I1005 10:14:00.164977 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-tstc5 I1005 10:14:11.165197 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-52ghd I1005 10:14:11.165228 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:14:11.177392 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 10:14:11.177483 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 10:14:11.185647 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:14:11.185704 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain E1005 10:14:11.813250 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 10:14:11.813275 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:15:04.718636 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False I1005 10:15:04.746084 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:15:06.540315 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:15:09.719573 1 node_controller.go:1096] No nodes available for updates I1005 10:15:14.920660 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable I1005 10:15:14.942653 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:15:16.524512 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:15:19.921607 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 10:15:19.921629 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 10:15:19.921649 1 node_controller.go:1096] No nodes available for updates I1005 10:15:19.940748 1 drain_controller.go:173] node 
ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:15:19.940766 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:15:19.958930 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:15:24.960073 1 node_controller.go:1096] No nodes available for updates I1005 10:15:25.577557 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-a7c86cc4c3da8242c59a1ae1e3cfaa83 I1005 10:15:30.582661 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-a7c86cc4c3da8242c59a1ae1e3cfaa83 I1005 10:16:08.889210 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:16:13.910086 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:16:13.936478 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 10:16:13.936542 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:16:13.936713 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:16:13.967837 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"64236", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:16:13.968721 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f E1005 10:16:14.027954 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:16:14.027971 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:16:15.922027 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:16:18.924248 1 node_controller.go:1096] No nodes available for updates I1005 10:16:18.937078 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:16:20.922634 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 10:16:20.922681 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:16:20.949974 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently 
schedulable: false) I1005 10:16:20.950068 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 10:16:20.965685 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 10:16:21.595818 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 10:16:21.597658 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-hszfq I1005 10:16:21.597727 1 drain_controller.go:144] evicting pod openshift-console/downloads-6565ffd4cd-bjq2l I1005 10:16:21.597658 1 drain_controller.go:144] evicting pod openshift-operator-lifecycle-manager/collect-profiles-28275015-pfdgn I1005 10:16:21.597668 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 I1005 10:16:21.597675 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-kjfqc I1005 10:16:21.597684 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-b7drp I1005 10:16:21.597686 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-rt46c I1005 10:16:21.597691 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-ll6xp I1005 10:16:21.597698 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-p9bdx I1005 10:16:21.597697 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-b2bml I1005 10:16:21.597709 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 10:16:21.597709 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-v65xn I1005 10:16:21.597715 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-rtnlm I1005 10:16:21.597717 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-chxjm I1005 10:16:21.639949 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-operator-lifecycle-manager/collect-profiles-28275015-pfdgn I1005 10:16:21.676411 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-console/downloads-6565ffd4cd-bjq2l I1005 10:16:23.040362 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-kjfqc I1005 10:16:23.433934 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-rt46c I1005 10:16:23.831392 1 request.go:696] Waited for 1.144613914s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-7ddd77864b-b2bml I1005 10:16:23.834717 1 
drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-b2bml I1005 10:16:23.937986 1 node_controller.go:1096] No nodes available for updates I1005 10:16:24.036904 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 10:16:24.236052 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1 I1005 10:16:24.432840 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-v65xn I1005 10:16:24.635851 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-rtnlm I1005 10:16:24.831491 1 request.go:696] Waited for 1.391182435s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/kube-state-metrics-794c8bd776-chxjm I1005 10:16:24.835353 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-chxjm I1005 10:16:25.234084 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-b7drp I1005 10:16:25.435986 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-ll6xp I1005 10:16:48.716948 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-hszfq I1005 10:17:08.668528 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-p9bdx I1005 10:17:08.668557 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:17:08.689049 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 10:17:08.689065 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 10:17:08.696685 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:17:08.696743 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain E1005 10:17:09.327153 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 10:17:09.327177 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:17:51.565878 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node 
ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 10:17:51.596479 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:17:56.566931 1 node_controller.go:1096] No nodes available for updates I1005 10:17:56.996589 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:18:01.997637 1 node_controller.go:1096] No nodes available for updates I1005 10:18:07.567295 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 10:18:07.592169 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:18:07.625341 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:18:11.999186 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:18:12.019935 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:18:12.570487 1 node_controller.go:1096] No nodes available for updates I1005 10:18:16.613447 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 10:18:16.639946 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:18:17.032160 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:18:21.613752 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 10:18:21.613770 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 10:18:21.613820 1 node_controller.go:1096] No nodes available for updates I1005 10:18:21.636745 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:18:21.636809 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:18:21.662464 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:18:26.663762 1 node_controller.go:1096] No nodes available for updates I1005 10:18:27.845688 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:18:32.845949 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 10:18:32.845965 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:18:32.863796 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"64301", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node 
ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:18:32.864981 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:18:34.313940 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:18:37.886891 1 node_controller.go:1096] No nodes available for updates I1005 10:18:37.887251 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 10:18:38.023965 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:18:38.023980 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:18:39.314568 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 10:18:39.314601 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:18:39.335369 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:18:39.335536 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain I1005 10:18:39.347452 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 10:18:39.983210 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 10:18:39.984978 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-dvnvk I1005 10:18:39.984982 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:18:39.984986 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-sknll I1005 10:18:39.984993 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-qfjbn I1005 10:18:39.984996 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-b6fsq I1005 10:18:39.984999 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-9tpm5 I1005 10:18:39.985008 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-fjc9l I1005 10:18:39.985009 1 drain_controller.go:144] evicting pod 
openshift-monitoring/kube-state-metrics-794c8bd776-5k9cf I1005 10:18:39.985014 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 10:18:39.985015 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-gsb7s I1005 10:18:39.985018 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-78zgn I1005 10:18:39.985024 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-sqd87 E1005 10:18:39.992677 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-qfjbn" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 10:18:40.005487 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:18:41.076964 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-dvnvk I1005 10:18:41.224734 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-b6fsq I1005 10:18:41.624805 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 10:18:41.821490 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-78zgn I1005 10:18:42.022216 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-sqd87 I1005 10:18:42.225504 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-gsb7s I1005 10:18:42.421046 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-5k9cf I1005 10:18:42.823133 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-fjc9l I1005 10:18:42.888303 1 node_controller.go:1096] No nodes available for updates I1005 10:18:43.023903 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-sknll I1005 10:18:44.993253 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-qfjbn I1005 10:18:45.006451 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:18:45.014356 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:18:50.014477 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:18:50.022971 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
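Each pass above follows the same per-node cycle: set the desired config, cordon, evict the non-DaemonSet pods, let the machine-config daemon apply the new rendered config (the node briefly reports unready in the log), then uncordon once it is schedulable again. At the API level, cordoning is just setting spec.unschedulable on the Node object, as in the hedged sketch below; this is not the drain controller's code, and the node name is taken from the log.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// setCordon toggles Node.Spec.Unschedulable, which is all that "cordon" and
// "uncordon" amount to at the API level; the controller additionally evicts
// pods and manages taints, which this sketch does not attempt.
func setCordon(ctx context.Context, client kubernetes.Interface, nodeName string, cordon bool) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		node.Spec.Unschedulable = cordon
		_, err = client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
		return err
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	nodeName := "ip-10-0-49-13.us-east-2.compute.internal"
	if err := setCordon(context.Background(), client, nodeName, true); err != nil {
		panic(err)
	}
	fmt.Println("cordoned", nodeName)
}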
I1005 10:18:55.024003 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:18:57.072056 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0 I1005 10:19:13.021994 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-qfjbn I1005 10:19:28.076938 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-9tpm5 I1005 10:19:28.076972 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:19:28.098907 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 10:19:28.099017 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 10:19:28.103041 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:19:28.103090 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain E1005 10:19:28.783203 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 10:19:28.783226 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:20:12.056724 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 10:20:12.092061 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:20:15.665866 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False I1005 10:20:15.701807 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:20:15.719778 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:20:17.057340 1 node_controller.go:1096] No nodes available for updates I1005 10:20:17.524282 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:20:22.524729 1 node_controller.go:1096] No nodes available for updates I1005 10:20:35.160618 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable I1005 10:20:35.178785 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node 
ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:20:37.493825 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:20:40.161673 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 10:20:40.161694 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 10:20:40.161675 1 node_controller.go:1096] No nodes available for updates I1005 10:20:40.178558 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:20:40.178627 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:20:40.201637 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:20:45.202508 1 node_controller.go:1096] No nodes available for updates I1005 10:20:46.447208 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:20:51.447642 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:21:14.473866 1 kubelet_config_controller.go:659] Applied KubeletConfig change-maxpods-kubelet-config on MachineConfigPool worker I1005 10:21:19.528327 1 render_controller.go:510] Generated machineconfig rendered-worker-b7d5e5e3f298ab96749a6979533c95ae from 8 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 }] I1005 10:21:19.529027 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"67815", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-b7d5e5e3f298ab96749a6979533c95ae successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f) I1005 10:21:19.537749 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-b7d5e5e3f298ab96749a6979533c95ae I1005 10:21:24.561894 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:21:24.586332 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 10:21:24.586404 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-b7d5e5e3f298ab96749a6979533c95ae I1005 10:21:24.596693 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:21:24.644754 
1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"68067", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-b7d5e5e3f298ab96749a6979533c95ae I1005 10:21:24.645030 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-b7d5e5e3f298ab96749a6979533c95ae E1005 10:21:24.700631 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:21:24.700649 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:21:26.066432 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:21:29.576138 1 node_controller.go:1096] No nodes available for updates I1005 10:21:29.583784 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:21:31.128500 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 10:21:31.128537 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:21:31.151645 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:21:31.152159 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 10:21:31.163605 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 10:21:31.795599 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 10:21:31.797931 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-6rgdl I1005 10:21:31.798091 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-27sqd I1005 10:21:31.798113 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-2xxnb I1005 10:21:31.798175 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 I1005 10:21:31.798235 1 drain_controller.go:144] evicting pod 
openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-rncdt I1005 10:21:31.798279 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-cgqt8 I1005 10:21:31.798330 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-djhv7 I1005 10:21:31.798389 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-66xrf I1005 10:21:31.798453 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 10:21:31.798491 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-gc9t6 I1005 10:21:31.798532 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-mckdv I1005 10:21:31.798556 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-5rsqk I1005 10:21:33.055165 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-6rgdl I1005 10:21:33.973736 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-rncdt I1005 10:21:34.171447 1 request.go:696] Waited for 1.013708046s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/kube-state-metrics-794c8bd776-mckdv I1005 10:21:34.375966 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1 I1005 10:21:34.576323 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 10:21:34.584748 1 node_controller.go:1096] No nodes available for updates I1005 10:21:34.774336 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-5rsqk I1005 10:21:34.974954 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-27sqd I1005 10:21:35.173643 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-66xrf I1005 10:21:35.373133 1 request.go:696] Waited for 1.278305754s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-6c99b68589-gc9t6 I1005 10:21:35.381706 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-gc9t6 I1005 10:21:35.774011 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-2xxnb I1005 10:21:36.174407 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-mckdv I1005 10:21:59.119493 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-cgqt8 I1005 10:22:22.124552 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-djhv7 I1005 10:22:22.124578 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 
10:22:22.139162 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 10:22:22.139178 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 10:22:22.149232 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:22:22.149247 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain E1005 10:22:22.781544 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 10:22:22.781565 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:23:02.544832 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 10:23:02.572543 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:23:07.276652 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 10:23:07.295094 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:23:07.316897 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:23:07.545351 1 node_controller.go:1096] No nodes available for updates I1005 10:23:08.003473 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:23:13.003651 1 node_controller.go:1096] No nodes available for updates I1005 10:23:25.850452 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 10:23:25.897925 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:23:27.938875 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:23:28.851592 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 10:23:28.851612 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 10:23:28.871883 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:23:28.871901 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 
10:23:29.086447 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:23:30.857611 1 node_controller.go:1096] No nodes available for updates I1005 10:23:38.105288 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-b7d5e5e3f298ab96749a6979533c95ae I1005 10:23:43.106378 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 10:23:43.106395 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-b7d5e5e3f298ab96749a6979533c95ae I1005 10:23:43.121885 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-b7d5e5e3f298ab96749a6979533c95ae I1005 10:23:43.123080 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"68133", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-b7d5e5e3f298ab96749a6979533c95ae I1005 10:23:45.164542 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:23:48.138050 1 node_controller.go:1096] No nodes available for updates I1005 10:23:48.139402 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:23:50.226746 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 10:23:50.226828 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:23:50.245247 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:23:50.245334 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain I1005 10:23:50.270187 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 10:23:50.903462 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 10:23:50.905379 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-59jdv I1005 10:23:50.905392 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-gb454 I1005 10:23:50.905389 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-5df95 I1005 10:23:50.905514 1 
drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-n2wsj I1005 10:23:50.905594 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-fb6lj I1005 10:23:50.905624 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 10:23:50.905381 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:23:50.905705 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-5fpxr I1005 10:23:50.905745 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-lcq7q I1005 10:23:50.905788 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-5gtbd I1005 10:23:50.905806 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-q7vb5 I1005 10:23:50.905870 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-l82rz E1005 10:23:50.918261 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:23:52.142008 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-59jdv I1005 10:23:52.344187 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-lcq7q I1005 10:23:52.547984 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-5df95 I1005 10:23:52.743211 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-fb6lj I1005 10:23:52.944385 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 10:23:53.144845 1 node_controller.go:1096] No nodes available for updates I1005 10:23:53.149719 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-q7vb5 I1005 10:23:53.343528 1 request.go:696] Waited for 1.007317493s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-adapter-65d7564d89-l82rz I1005 10:23:53.349476 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-l82rz I1005 10:23:53.550932 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-5gtbd I1005 10:23:54.142133 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-5fpxr I1005 10:23:55.918637 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:23:55.927120 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
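The entries above show the drain controller evicting the node's workload pods one by one and backing off for five seconds whenever an eviction would violate a PodDisruptionBudget ("Cannot evict pod as it would violate the pod's disruption budget", retried until alertmanager-main-0 finally goes). A minimal client-go sketch of that pattern is below; it is an illustration of the Kubernetes eviction API under the assumption of a kubeconfig in $KUBECONFIG, not the machine-config-controller's actual implementation.

```go
// evict.go: a minimal sketch of PDB-aware pod eviction with client-go.
// Assumes a kubeconfig in $KUBECONFIG; the pod/namespace below are taken from
// the log purely as an example target.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	policyv1 "k8s.io/api/policy/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// evictWithRetry issues an Eviction and, when the API server rejects it because
// a PodDisruptionBudget would be violated (HTTP 429), waits 5s and tries again,
// mirroring the "will retry after 5s" lines in the log above.
func evictWithRetry(ctx context.Context, cs kubernetes.Interface, ns, pod string) error {
	for {
		err := cs.PolicyV1().Evictions(ns).Evict(ctx, &policyv1.Eviction{
			ObjectMeta: metav1.ObjectMeta{Name: pod, Namespace: ns},
		})
		switch {
		case err == nil:
			return nil
		case apierrors.IsNotFound(err):
			return nil // pod already gone
		case apierrors.IsTooManyRequests(err):
			fmt.Fprintf(os.Stderr, "eviction of %s/%s blocked by PDB, retrying in 5s\n", ns, pod)
			time.Sleep(5 * time.Second)
		default:
			return err
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := evictWithRetry(context.Background(), cs, "openshift-monitoring", "alertmanager-main-0"); err != nil {
		panic(err)
	}
}
```

The evicted pods are controller-managed (Deployments, StatefulSets), so they are rescheduled onto other nodes; the PDB retry is what keeps at least the budgeted number of replicas available while that happens.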
I1005 10:24:00.927173 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:24:02.962847 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0 I1005 10:24:17.955650 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-gb454 I1005 10:24:38.957055 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-n2wsj I1005 10:24:38.957086 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:24:38.979949 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 10:24:38.980033 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 10:24:38.985106 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:24:38.985190 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain E1005 10:24:39.619115 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 10:24:39.619140 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:25:13.619460 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change I1005 10:25:23.009274 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 10:25:23.049407 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:25:28.009582 1 node_controller.go:1096] No nodes available for updates I1005 10:25:28.667306 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:25:33.668937 1 node_controller.go:1096] No nodes available for updates I1005 10:25:35.957590 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False I1005 10:25:35.982415 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:25:36.012081 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:25:38.591573 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:25:38.613932 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node 
ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:25:40.958561 1 node_controller.go:1096] No nodes available for updates I1005 10:25:45.022363 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable I1005 10:25:45.048951 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:25:48.635894 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:25:50.022754 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 10:25:50.022773 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 10:25:50.022794 1 node_controller.go:1096] No nodes available for updates I1005 10:25:50.045256 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:25:50.045337 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:25:50.066535 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:25:55.067612 1 node_controller.go:1096] No nodes available for updates I1005 10:25:56.689934 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-b7d5e5e3f298ab96749a6979533c95ae I1005 10:26:01.690813 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-b7d5e5e3f298ab96749a6979533c95ae I1005 10:26:24.851366 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:26:29.873694 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:26:29.898276 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 10:26:29.898346 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:26:29.906580 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:26:29.919377 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"71877", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:26:29.929154 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f E1005 10:26:30.001364 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:26:30.001383 1 render_controller.go:377] 
Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:26:31.613363 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:26:34.875889 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 10:26:34.875929 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:26:34.896473 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:26:34.896548 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 10:26:34.898932 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:26:34.899058 1 node_controller.go:1096] No nodes available for updates I1005 10:26:34.935315 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 10:26:35.557836 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 10:26:35.560023 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-wh7f7 I1005 10:26:35.560068 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 10:26:35.560096 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-grr54 I1005 10:26:35.560083 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-rnxf4 I1005 10:26:35.560229 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-ffcrh I1005 10:26:35.560039 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-9pnd8 I1005 10:26:35.560308 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-bj24c I1005 10:26:35.560050 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-z9466 I1005 10:26:35.560039 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-x64kr I1005 10:26:35.560051 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-wbs74 I1005 10:26:35.560061 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-7zq9c I1005 10:26:35.560622 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 I1005 10:26:36.636543 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-bj24c I1005 
10:26:36.637934 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-x64kr I1005 10:26:36.801377 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-wh7f7 I1005 10:26:37.603139 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-grr54 I1005 10:26:37.799718 1 request.go:696] Waited for 1.150408408s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/kube-state-metrics-794c8bd776-ffcrh I1005 10:26:37.806693 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-ffcrh I1005 10:26:38.202520 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1 I1005 10:26:38.401686 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 10:26:38.601341 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-rnxf4 I1005 10:26:38.800680 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-9pnd8 I1005 10:26:38.997772 1 request.go:696] Waited for 1.354337473s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-adapter-65d7564d89-wbs74 I1005 10:26:39.000717 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-wbs74 I1005 10:26:39.899509 1 node_controller.go:1096] No nodes available for updates I1005 10:27:02.651190 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-z9466 I1005 10:27:27.649177 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-7zq9c I1005 10:27:27.649213 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:28:23.101590 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 10:28:23.131256 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:28:23.766783 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:28:28.103347 1 node_controller.go:1096] No nodes available for updates I1005 10:28:33.391948 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 10:28:33.426020 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:28:33.684305 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node 
ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:28:38.392522 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 10:28:38.392612 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 10:28:38.392572 1 node_controller.go:1096] No nodes available for updates I1005 10:28:38.423505 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:28:38.423580 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:28:38.448853 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:28:43.449571 1 node_controller.go:1096] No nodes available for updates I1005 10:28:43.875209 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:28:48.876186 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 10:28:48.876203 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:28:48.888830 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"71935", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:28:48.902027 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:28:50.986259 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:28:53.902052 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 10:28:53.902091 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:28:53.923358 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:28:53.925704 1 node_controller.go:1096] No nodes available for updates I1005 10:28:53.967959 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:28:53.977339 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain I1005 10:28:53.988176 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 10:28:54.073829 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:28:54.073850 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled 
on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again E1005 10:28:54.617771 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 10:28:54.619567 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-5vtt9 I1005 10:28:54.619595 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-ptl9l I1005 10:28:54.619626 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-bdphf I1005 10:28:54.619711 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:28:54.619722 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-zdtz4 I1005 10:28:54.619783 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-pwsdf I1005 10:28:54.619795 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-8h2hc I1005 10:28:54.619784 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-l4s2r I1005 10:28:54.619852 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 10:28:54.619899 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-952hx I1005 10:28:54.619905 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-qp7ss I1005 10:28:54.619575 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-2qt2l E1005 10:28:54.626903 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-bdphf" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 10:28:54.627229 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
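Just above this drain, the render controller logs the usual optimistic-concurrency failure: "Operation cannot be fulfilled on machineconfigpools... the object has been modified; please apply your changes to the latest version and try again". That is an HTTP 409 Conflict, and the controller simply re-syncs and succeeds on a later attempt. The standard client-go idiom for this is retry.RetryOnConflict: re-read the object and re-apply the change whenever a Conflict comes back. The sketch below demonstrates the idiom on a Node annotation (the machineconfiguration.openshift.io/desiredConfig annotation that node_controller sets in this log); the node name and rendered-config value are copied from the log as examples, and this is not the MCO's own code path.

```go
// A sketch of conflict-safe updates with client-go's retry.RetryOnConflict.
// The annotation key matches the one visible in the log; node name and
// rendered-config value are example inputs only.
package main

import (
	"context"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

const desiredConfigAnnotation = "machineconfiguration.openshift.io/desiredConfig"

// setDesiredConfig re-reads the Node and re-applies the annotation until the
// Update stops failing with "the object has been modified" (409 Conflict).
func setDesiredConfig(ctx context.Context, cs kubernetes.Interface, node, rendered string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		n, err := cs.CoreV1().Nodes().Get(ctx, node, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if n.Annotations == nil {
			n.Annotations = map[string]string{}
		}
		n.Annotations[desiredConfigAnnotation] = rendered
		_, err = cs.CoreV1().Nodes().Update(ctx, n, metav1.UpdateOptions{})
		return err // a Conflict here triggers another Get+Update round
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := setDesiredConfig(context.Background(), cs,
		"ip-10-0-49-13.us-east-2.compute.internal",
		"rendered-worker-6bf803109332579a2637f8dc27f9f58f"); err != nil {
		panic(err)
	}
}
```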
I1005 10:28:56.046580 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-952hx I1005 10:28:56.446237 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-qp7ss I1005 10:28:56.645797 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-2qt2l I1005 10:28:56.850002 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-l4s2r I1005 10:28:57.049736 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-ptl9l I1005 10:28:57.448143 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-zdtz4 I1005 10:28:57.648871 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-8h2hc I1005 10:28:57.843495 1 request.go:696] Waited for 1.155069012s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/kube-state-metrics-794c8bd776-pwsdf I1005 10:28:57.847254 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-pwsdf I1005 10:28:58.049330 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 10:28:58.923970 1 node_controller.go:1096] No nodes available for updates I1005 10:28:59.627015 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-bdphf I1005 10:28:59.627252 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:28:59.633092 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:29:04.633209 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:29:04.642687 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
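The "Waited for 1.155069012s due to client-side throttling, not priority and fairness" entries (request.go:696) are informational: client-go's default client-side rate limiter (QPS 5, burst 10) is delaying the burst of per-pod GETs that follows each round of evictions, and the message explicitly distinguishes this from server-side API Priority and Fairness. If that delay mattered for a controller, the limits are raised on the rest.Config before building the clientset; a minimal sketch with illustrative values follows.

```go
// A minimal sketch of raising client-go's client-side rate limits, the source
// of the "Waited for ... due to client-side throttling" messages above.
// The QPS/Burst values are illustrative, not a recommendation.
package main

import (
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; a client issuing many requests in a burst
	// can raise them. Server-side API Priority and Fairness still applies.
	cfg.QPS = 50
	cfg.Burst = 100

	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = cs // build informers/clients from cs as usual
}
```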
I1005 10:29:09.643553 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:29:11.685302 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0 I1005 10:29:26.657331 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-bdphf I1005 10:29:45.676922 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-5vtt9 I1005 10:29:45.676956 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:29:45.696304 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 10:29:45.696438 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 10:29:45.703407 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:29:45.703445 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain E1005 10:29:46.348408 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 10:29:46.348446 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:30:42.614301 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False I1005 10:30:42.654614 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:30:43.826860 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:30:47.615491 1 node_controller.go:1096] No nodes available for updates I1005 10:30:52.681782 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable I1005 10:30:52.704956 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:30:53.714818 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:30:57.682176 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 10:30:57.682204 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 10:30:57.682231 1 node_controller.go:1096] No nodes available for updates I1005 10:30:57.701280 1 drain_controller.go:173] node 
ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:30:57.701353 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:30:57.716466 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:31:02.716921 1 node_controller.go:1096] No nodes available for updates I1005 10:31:03.515882 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:31:08.516450 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:31:36.030762 1 render_controller.go:510] Generated machineconfig rendered-worker-d0e019f3377d7efd6afac635e4e32be1 from 8 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 } {MachineConfig change-worker-kernel-argument-gqsguboy machineconfiguration.openshift.io/v1 }] I1005 10:31:36.031228 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"75442", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-d0e019f3377d7efd6afac635e4e32be1 successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f) I1005 10:31:36.050131 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-d0e019f3377d7efd6afac635e4e32be1 I1005 10:31:41.076228 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:31:41.096585 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 10:31:41.096654 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-d0e019f3377d7efd6afac635e4e32be1 I1005 10:31:41.099040 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:31:41.133301 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-d0e019f3377d7efd6afac635e4e32be1 I1005 10:31:41.141661 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"75685", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-d0e019f3377d7efd6afac635e4e32be1 E1005 10:31:41.267047 1 
render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:31:41.267118 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:31:43.156560 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:31:46.096255 1 node_controller.go:1096] No nodes available for updates I1005 10:31:46.097839 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:31:48.157188 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 10:31:48.157224 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:31:48.177975 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:31:48.177992 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 10:31:48.198333 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 10:31:48.833564 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 10:31:48.835714 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-r2bcr I1005 10:31:48.835746 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-k5m45 I1005 10:31:48.835747 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-7t62b I1005 10:31:48.835781 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 I1005 10:31:48.835850 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 10:31:48.835865 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-5tc5l I1005 10:31:48.835881 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-r8tbt I1005 10:31:48.835919 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-vl75v I1005 10:31:48.835857 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-w2q4n I1005 10:31:48.835976 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-https I1005 10:31:48.835999 1 drain_controller.go:144] evicting pod 
openshift-network-diagnostics/network-check-source-7ddd77864b-74wdv I1005 10:31:48.836024 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-d6frq I1005 10:31:48.835721 1 drain_controller.go:144] evicting pod openshift-operator-lifecycle-manager/collect-profiles-28275030-5r6lv I1005 10:31:49.497551 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-operator-lifecycle-manager/collect-profiles-28275030-5r6lv I1005 10:31:50.687012 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-w2q4n I1005 10:31:50.895387 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-r8tbt I1005 10:31:51.085494 1 request.go:696] Waited for 1.160678631s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-image-registry/pods/image-registry-5b6f4c84f4-7t62b I1005 10:31:51.098133 1 node_controller.go:1096] No nodes available for updates I1005 10:31:51.291539 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-https I1005 10:31:51.489722 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1 I1005 10:31:51.691089 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-74wdv I1005 10:31:51.894617 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-d6frq I1005 10:31:52.088342 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-5tc5l I1005 10:31:52.284702 1 request.go:696] Waited for 1.37517953s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-1 I1005 10:31:52.289274 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 10:31:52.490905 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-vl75v I1005 10:31:52.889810 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-k5m45 I1005 10:32:15.927035 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-7t62b I1005 10:32:38.920601 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-r2bcr I1005 10:32:38.920628 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:35:13.793006 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 10:35:13.810720 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:35:18.793577 1 node_controller.go:1096] No 
nodes available for updates I1005 10:35:19.302146 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:35:24.303794 1 node_controller.go:1096] No nodes available for updates I1005 10:35:29.317823 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 10:35:29.342986 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:35:29.371195 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:35:34.219648 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:35:34.318464 1 node_controller.go:1096] No nodes available for updates I1005 10:35:34.379173 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:35:37.685504 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 10:35:37.715119 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:35:39.385578 1 node_controller.go:1096] No nodes available for updates I1005 10:35:39.386059 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:35:44.386558 1 node_controller.go:1096] No nodes available for updates I1005 10:35:51.309500 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 10:35:51.309516 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 10:35:51.340572 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:35:51.340591 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:35:51.350570 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:35:56.354601 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-d0e019f3377d7efd6afac635e4e32be1 I1005 10:35:56.354704 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 10:35:56.354740 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-d0e019f3377d7efd6afac635e4e32be1 I1005 10:35:56.372755 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"75714", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-d0e019f3377d7efd6afac635e4e32be1 I1005 10:35:56.374985 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation 
machineconfiguration.openshift.io/desiredConfig = rendered-worker-d0e019f3377d7efd6afac635e4e32be1 I1005 10:35:57.772519 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:36:01.375736 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 10:36:01.375843 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:36:01.395968 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:36:01.396074 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain I1005 10:36:01.424253 1 node_controller.go:1096] No nodes available for updates I1005 10:36:01.424598 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 10:36:01.546266 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:36:01.546316 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:36:01.638648 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 10:36:02.060102 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 10:36:02.063453 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-5hf5v I1005 10:36:02.063756 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-q2znz I1005 10:36:02.063912 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-fn7np I1005 10:36:02.064081 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:36:02.064258 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-s4gj2 I1005 10:36:02.064443 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-wjpwn I1005 10:36:02.064528 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 10:36:02.064607 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-djwmw I1005 10:36:02.064634 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-q86hl I1005 10:36:02.064593 1 drain_controller.go:144] evicting pod 
openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-zv4zt I1005 10:36:02.064597 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-vz5zx I1005 10:36:02.064674 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-xnvtw E1005 10:36:02.073461 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-q2znz" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 10:36:02.073491 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 10:36:02.073642 1 drain_controller.go:144] error when evicting pods/"prometheus-adapter-65d7564d89-djwmw" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 10:36:02.073701 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:36:04.106910 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-5hf5v I1005 10:36:04.296679 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-wjpwn I1005 10:36:04.496952 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-zv4zt I1005 10:36:04.697513 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-vz5zx I1005 10:36:04.896203 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-xnvtw I1005 10:36:05.113563 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-s4gj2 I1005 10:36:05.495125 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-q86hl I1005 10:36:06.425793 1 node_controller.go:1096] No nodes available for updates I1005 10:36:07.074558 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-q2znz I1005 10:36:07.074561 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-djwmw I1005 10:36:07.074566 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 10:36:07.074570 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:36:07.079977 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-q2znz" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 10:36:07.079977 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 10:36:07.081626 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
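The per-node sequence repeated throughout this log (cordon, "WARNING: ignoring DaemonSet-managed Pods: ...", a batch of "evicting pod ..." lines, then "Evicted pod ..." confirmations) matches the output of the kubectl drain helper library, whose writer output the drain controller appears to relay at drain_controller.go:144/173. The sketch below cordons and drains a node with that library (k8s.io/kubectl/pkg/drain) as a standalone program; the node name, timeout, and helper settings are assumptions for illustration, not the controller's actual configuration.

```go
// A sketch of cordoning and draining a node with the kubectl drain helper,
// whose "evicting pod ..." / "ignoring DaemonSet-managed Pods" messages match
// the drain_controller output above. Node name and settings are placeholders.
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/kubectl/pkg/drain"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	helper := &drain.Helper{
		Ctx:                 context.Background(),
		Client:              cs,
		Force:               true, // also evict pods without a controller, if any
		IgnoreAllDaemonSets: true, // source of the "ignoring DaemonSet-managed Pods" warning
		DeleteEmptyDirData:  true, // allow evicting pods that use emptyDir volumes
		GracePeriodSeconds:  -1,   // honor each pod's own terminationGracePeriodSeconds
		Timeout:             5 * time.Minute,
		Out:                 os.Stdout, // "evicting pod ..." lines are written here
		ErrOut:              os.Stderr, // DaemonSet warnings and eviction errors go here
	}

	nodeName := "ip-10-0-49-13.us-east-2.compute.internal"
	node, err := cs.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Cordon first (mark the node unschedulable), then evict its pods.
	if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
		panic(err)
	}
	if err := drain.RunNodeDrain(helper, nodeName); err != nil {
		panic(err)
	}
}
```

Uncordoning afterwards, as the log's "initiating uncordon / uncordon succeeded" entries show, is the same RunCordonOrUncordon call with the desired value set to false once the node has rebooted into the new rendered MachineConfig.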
I1005 10:36:10.105399 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-djwmw I1005 10:36:12.080884 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:36:12.080897 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-q2znz I1005 10:36:12.081967 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 E1005 10:36:12.089117 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:36:14.130340 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 10:36:17.090009 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:36:17.096239 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:36:22.097199 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:36:24.143478 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0 I1005 10:36:39.121054 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-q2znz I1005 10:36:50.120087 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-fn7np I1005 10:36:50.120130 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:36:50.141673 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 10:36:50.141689 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 10:36:50.145785 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:36:50.145800 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain E1005 10:36:50.777204 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 10:36:50.777225 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:39:24.432178 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 10:39:24.491572 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node 
ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:39:29.432597 1 node_controller.go:1096] No nodes available for updates I1005 10:39:30.009267 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:39:35.010168 1 node_controller.go:1096] No nodes available for updates I1005 10:39:40.281449 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False I1005 10:39:40.302934 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:39:40.341446 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:39:44.996167 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:39:45.013561 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:39:45.282633 1 node_controller.go:1096] No nodes available for updates I1005 10:39:49.225683 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable I1005 10:39:49.254003 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:39:50.026785 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:39:54.226266 1 node_controller.go:1096] No nodes available for updates I1005 10:40:02.385258 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 10:40:02.385278 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 10:40:02.411056 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:40:02.411134 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:40:02.428784 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:40:07.402886 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-d0e019f3377d7efd6afac635e4e32be1 I1005 10:40:07.429253 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-d0e019f3377d7efd6afac635e4e32be1 I1005 10:40:42.793624 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:40:47.816333 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:40:47.835508 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:40:47.858032 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 10:40:47.858054 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:40:47.879493 1 
node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:40:47.886595 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"80808", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f E1005 10:40:47.934364 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:40:47.934382 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:40:49.988502 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:40:51.255379 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 10:40:51.255415 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:40:51.279896 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:40:51.279970 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 10:40:51.300590 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 10:40:51.925361 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 10:40:51.927442 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 10:40:51.927495 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-gms9g I1005 10:40:51.927442 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-pm96h I1005 10:40:51.927496 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-cf248 I1005 10:40:51.927459 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-b92d4 I1005 10:40:51.927471 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-vsbh2 I1005 10:40:51.927461 1 drain_controller.go:144] evicting pod 
openshift-network-diagnostics/network-check-source-7ddd77864b-rm6sk I1005 10:40:51.927474 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 I1005 10:40:51.927482 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-ntdvh I1005 10:40:51.927487 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-rxhgs I1005 10:40:51.927479 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-hzzjc I1005 10:40:51.927508 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-qvq86 I1005 10:40:52.839349 1 node_controller.go:1096] No nodes available for updates I1005 10:40:52.840982 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:40:53.174051 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-ntdvh I1005 10:40:53.378985 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-rxhgs I1005 10:40:53.973346 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-cf248 I1005 10:40:54.171369 1 request.go:696] Waited for 1.133735813s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-1 I1005 10:40:54.176225 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 10:40:54.374265 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-rm6sk I1005 10:40:54.572948 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-hzzjc I1005 10:40:54.775252 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-qvq86 I1005 10:40:54.974391 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-gms9g I1005 10:40:55.174395 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-vsbh2 I1005 10:40:55.371550 1 request.go:696] Waited for 1.353298549s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-1 I1005 10:40:55.375304 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1 I1005 10:40:57.842031 1 node_controller.go:1096] No nodes available for updates I1005 10:41:20.029603 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-pm96h I1005 10:41:40.023566 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-b92d4 I1005 10:41:40.023596 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:41:40.037751 1 
drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 10:41:40.037770 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 10:41:40.046100 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:41:40.046167 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain E1005 10:41:40.688298 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 10:41:40.688320 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:42:30.067407 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 10:42:30.109964 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:42:35.068310 1 node_controller.go:1096] No nodes available for updates I1005 10:42:35.606543 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:42:38.353208 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 10:42:38.376132 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:42:38.581877 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:42:40.546023 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:42:40.564741 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:42:40.607101 1 node_controller.go:1096] No nodes available for updates I1005 10:42:57.864526 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 10:42:57.923817 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:43:00.581541 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:43:02.865180 1 node_controller.go:1096] No nodes available for updates I1005 10:43:09.576660 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 10:43:09.576675 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon 
(currently schedulable: false) I1005 10:43:09.611083 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:43:09.611152 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:43:09.621949 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:43:14.612021 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:43:14.622876 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 10:43:14.622890 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:43:14.646268 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:43:14.646580 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"80855", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:43:16.109288 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:43:19.645872 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 10:43:19.645893 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:43:19.679261 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:43:19.679340 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain I1005 10:43:19.680816 1 node_controller.go:1096] No nodes available for updates I1005 10:43:19.681005 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:43:19.699848 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 10:43:20.332308 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 10:43:20.334407 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-7swhv I1005 10:43:20.334438 1 
drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-x72dw I1005 10:43:20.334451 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:43:20.334531 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-9knqx I1005 10:43:20.334572 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-2h8rj I1005 10:43:20.334551 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-ztzmv I1005 10:43:20.334676 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-fss8l I1005 10:43:20.334686 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-6hbx8 I1005 10:43:20.334563 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-hjh49 I1005 10:43:20.334733 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-lq69w I1005 10:43:20.334407 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-c9fcm I1005 10:43:20.334804 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 E1005 10:43:20.342929 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-x72dw" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 10:43:20.344896 1 drain_controller.go:144] error when evicting pods/"prometheus-adapter-65d7564d89-lq69w" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 10:43:20.346216 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 10:43:20.742473 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
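[Editor's note] The repeated "Cannot evict pod as it would violate the pod's disruption budget ... (will retry after 5s)" entries above are the Eviction API rejecting the request with 429 TooManyRequests while the matching PodDisruptionBudget has no disruptions left; the drain controller simply waits and asks again until the budget frees up. As a rough illustration only (not the controller's actual code), a client-go eviction loop with the same 5s retry could look like the sketch below; the kubeconfig path is an assumption and the namespace/pod name are taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	policyv1 "k8s.io/api/policy/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// evictWithRetry keeps requesting an eviction until the PodDisruptionBudget
// allows it; PDB-blocked evictions come back as 429 TooManyRequests, which is
// what the "(will retry after 5s)" log lines correspond to.
func evictWithRetry(ctx context.Context, cs kubernetes.Interface, ns, pod string) error {
	for {
		err := cs.PolicyV1().Evictions(ns).Evict(ctx, &policyv1.Eviction{
			ObjectMeta: metav1.ObjectMeta{Name: pod, Namespace: ns},
		})
		if err == nil || apierrors.IsNotFound(err) {
			return nil // evicted, or already gone
		}
		if !apierrors.IsTooManyRequests(err) {
			return err // a real failure, not a PDB rejection
		}
		select {
		case <-time.After(5 * time.Second): // retry interval seen in the log
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}

func main() {
	// Assumption: credentials come from the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := evictWithRetry(context.Background(), cs, "openshift-monitoring", "alertmanager-main-0"); err != nil {
		fmt.Println("eviction failed:", err)
	}
}

[End editor's note]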
I1005 10:43:21.372368 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-7swhv I1005 10:43:22.410609 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-6hbx8 I1005 10:43:22.427344 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-2h8rj I1005 10:43:22.968594 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-c9fcm I1005 10:43:23.425393 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-hjh49 I1005 10:43:23.573224 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-9knqx I1005 10:43:23.769303 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-fss8l I1005 10:43:24.681450 1 node_controller.go:1096] No nodes available for updates I1005 10:43:25.342975 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-x72dw I1005 10:43:25.345070 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-lq69w I1005 10:43:25.347206 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:43:25.349907 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-x72dw" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 10:43:25.354569 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:43:25.743633 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 E1005 10:43:25.750775 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:43:27.379942 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-lq69w I1005 10:43:30.350535 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-x72dw I1005 10:43:30.355387 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:43:30.362598 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:43:30.751531 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 10:43:33.796221 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 10:43:35.362798 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:43:35.378308 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
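[Editor's note] alertmanager-main-0 stays blocked here for about 20 seconds, most plausibly until the alertmanager replica evicted from the other worker is rescheduled and becomes ready again, at which point its budget once more allows one disruption and the eviction at 10:43:40 succeeds. While such a stall is in progress, one way to see which budget is exhausted is to dump the PodDisruptionBudgets in the namespace; the standalone sketch below is illustrative only (kubeconfig path assumed) and prints each budget's currently allowed disruptions.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: credentials come from the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// A budget whose DisruptionsAllowed is 0 is the one producing the
	// "would violate the pod's disruption budget" rejections above.
	pdbs, err := cs.PolicyV1().PodDisruptionBudgets("openshift-monitoring").
		List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pdb := range pdbs.Items {
		fmt.Printf("%-45s allowed=%d healthy=%d/%d\n",
			pdb.Name, pdb.Status.DisruptionsAllowed,
			pdb.Status.CurrentHealthy, pdb.Status.DesiredHealthy)
	}
}

[End editor's note]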
I1005 10:43:40.379233 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:43:42.439313 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0 I1005 10:43:57.386433 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-x72dw I1005 10:44:07.427663 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-ztzmv I1005 10:44:07.427697 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:44:07.445108 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 10:44:07.445184 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 10:44:07.457800 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:44:07.457888 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain E1005 10:44:08.091007 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 10:44:08.091038 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:45:10.612622 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 10:45:10.658353 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:45:13.900208 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False I1005 10:45:13.921353 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:45:13.939589 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:45:15.613239 1 node_controller.go:1096] No nodes available for updates I1005 10:45:16.116200 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:45:21.116651 1 node_controller.go:1096] No nodes available for updates I1005 10:45:32.767626 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable I1005 10:45:32.796840 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node 
ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:45:36.079757 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:45:37.768620 1 node_controller.go:1096] No nodes available for updates I1005 10:45:44.045257 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 10:45:44.045280 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 10:45:44.071803 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:45:44.071888 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:45:44.261597 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:45:49.068403 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:45:49.262392 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:46:53.230906 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 I1005 10:46:53.259739 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 I1005 10:46:53.293689 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 I1005 10:46:53.321999 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 I1005 10:46:53.384239 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 I1005 10:46:53.486133 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 I1005 10:46:53.655168 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 I1005 10:46:53.997532 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 I1005 10:46:54.658861 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 I1005 10:46:55.950059 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 I1005 10:46:58.517654 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. 
Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 I1005 10:47:03.662808 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 I1005 10:47:13.924701 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 I1005 10:47:34.422386 1 render_controller.go:377] Error syncing machineconfigpool worker: parsing Ignition config failed: invalid version. Supported spec versions: 2.2, 3.0, 3.1, 3.2, 3.3, 3.4 E1005 10:48:04.774574 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:48:04.774593 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:49:02.173068 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed labels I1005 10:49:10.642330 1 node_controller.go:882] Pool infra is unconfigured, pausing 5s for renderer to initialize I1005 10:49:10.709580 1 render_controller.go:510] Generated machineconfig rendered-infra-6bf803109332579a2637f8dc27f9f58f from 7 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 }] I1005 10:49:10.710088 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"infra", UID:"05a6fca8-8d2d-43c0-afa2-cb3587b456f8", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"86001", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-infra-6bf803109332579a2637f8dc27f9f58f successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f) I1005 10:49:10.716687 1 render_controller.go:536] Pool infra: now targeting: rendered-infra-6bf803109332579a2637f8dc27f9f58f I1005 10:49:10.720067 1 render_controller.go:377] Error syncing machineconfigpool infra: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "infra": the object has been modified; please apply your changes to the latest version and try again I1005 10:49:15.739955 1 node_controller.go:483] Pool infra: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 10:49:15.740051 1 node_controller.go:483] Pool infra: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-infra-6bf803109332579a2637f8dc27f9f58f I1005 10:49:15.741989 1 node_controller.go:493] Pool infra[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:49:15.769312 1 node_controller.go:493] Pool 
infra[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-infra-6bf803109332579a2637f8dc27f9f58f I1005 10:49:15.769693 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"infra", UID:"05a6fca8-8d2d-43c0-afa2-cb3587b456f8", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"86037", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-infra-6bf803109332579a2637f8dc27f9f58f I1005 10:49:17.791662 1 node_controller.go:493] Pool infra[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:49:20.742569 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 10:49:20.742588 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: true) I1005 10:49:20.746948 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:49:20.746960 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:49:20.765677 1 node_controller.go:493] Pool infra[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:49:20.766368 1 node_controller.go:1096] No nodes available for updates I1005 10:49:25.766716 1 node_controller.go:1096] No nodes available for updates I1005 10:49:30.001415 1 node_controller.go:493] Pool infra[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-infra-6bf803109332579a2637f8dc27f9f58f I1005 10:49:35.001823 1 status.go:109] Pool infra: All nodes are updated with MachineConfig rendered-infra-6bf803109332579a2637f8dc27f9f58f I1005 10:50:42.097821 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed from pool infra I1005 10:50:42.097848 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed labels I1005 10:50:47.115983 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 10:50:47.116062 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:50:47.116318 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:50:47.147715 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:50:47.148214 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"86013", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:50:49.278656 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node 
ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:50:52.113777 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 10:50:52.113795 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: true) I1005 10:50:52.117094 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:50:52.117152 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:50:52.161210 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:50:52.163082 1 node_controller.go:1096] No nodes available for updates I1005 10:50:57.162168 1 node_controller.go:1096] No nodes available for updates I1005 10:51:01.410019 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f E1005 10:51:33.441079 1 render_controller.go:254] error finding pools for machineconfig: no MachineConfigPool found for MachineConfig rendered-infra-6bf803109332579a2637f8dc27f9f58f because it has no labels I1005 10:51:36.799050 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change I1005 10:53:57.593798 1 render_controller.go:510] Generated machineconfig rendered-worker-e915108afc02e9627e4ad5f37d175e21 from 8 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 } {MachineConfig change-worker-jrnl-configuration-1d6dlyr6 machineconfiguration.openshift.io/v1 }] I1005 10:53:57.594296 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"86727", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-e915108afc02e9627e4ad5f37d175e21 successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f) I1005 10:53:57.607547 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-e915108afc02e9627e4ad5f37d175e21 I1005 10:54:02.624291 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:54:02.643705 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 10:54:02.643777 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-e915108afc02e9627e4ad5f37d175e21 I1005 10:54:02.645597 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:54:02.690486 1 node_controller.go:493] Pool 
worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-e915108afc02e9627e4ad5f37d175e21 I1005 10:54:02.690706 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"87859", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-e915108afc02e9627e4ad5f37d175e21 E1005 10:54:02.754306 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:54:02.754390 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:54:04.685822 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:54:07.623683 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 10:54:07.623726 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:54:07.661581 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:54:07.661664 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 10:54:07.662522 1 node_controller.go:1096] No nodes available for updates I1005 10:54:07.662794 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:54:07.854842 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 10:54:08.303038 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 10:54:08.305187 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-2kk6l I1005 10:54:08.305187 1 drain_controller.go:144] evicting pod openshift-operator-lifecycle-manager/collect-profiles-28275045-f78s7 I1005 10:54:08.305195 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 10:54:08.305203 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-smfs9 I1005 10:54:08.305210 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-8tl6l I1005 
10:54:08.305207 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-2skm6 I1005 10:54:08.305214 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 I1005 10:54:08.305219 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-vdfsd I1005 10:54:08.305224 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-xf7gk I1005 10:54:08.305222 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-45qtx I1005 10:54:08.305226 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-lvp2k I1005 10:54:08.305233 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-mzlsz I1005 10:54:08.305232 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-dqzzb I1005 10:54:08.377432 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-operator-lifecycle-manager/collect-profiles-28275045-f78s7 I1005 10:54:09.380587 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-8tl6l I1005 10:54:10.563412 1 request.go:696] Waited for 1.160357146s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-1 I1005 10:54:10.572496 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 10:54:10.768358 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-smfs9 I1005 10:54:10.966811 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-lvp2k I1005 10:54:11.166821 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-mzlsz I1005 10:54:11.367609 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-dqzzb I1005 10:54:11.566963 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-vdfsd I1005 10:54:11.763545 1 request.go:696] Waited for 1.383976824s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/openshift-state-metrics-547dffdc-xf7gk I1005 10:54:11.767374 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-xf7gk I1005 10:54:11.966848 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-45qtx I1005 10:54:12.367773 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1 I1005 10:54:12.663164 1 node_controller.go:1096] No nodes available for updates I1005 10:54:35.403381 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-2skm6 I1005 10:54:55.398648 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: 
Evicted pod openshift-ingress/router-default-d49dc89bd-2kk6l I1005 10:54:55.398680 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:54:55.413563 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 10:54:55.413578 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 10:54:55.420237 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:54:55.420820 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain E1005 10:54:56.076723 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 10:54:56.076799 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:55:55.714626 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 10:55:55.750086 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:55:56.266200 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:56:00.715292 1 node_controller.go:1096] No nodes available for updates I1005 10:56:05.965296 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 10:56:05.995040 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:56:06.190838 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:56:10.965496 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 10:56:10.965515 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 10:56:10.965579 1 node_controller.go:1096] No nodes available for updates I1005 10:56:10.996492 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:56:10.996522 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:56:11.002349 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:56:16.005500 1 node_controller.go:1096] No nodes available for updates I1005 10:56:16.646791 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node 
ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-e915108afc02e9627e4ad5f37d175e21 I1005 10:56:21.647915 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 10:56:21.648007 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-e915108afc02e9627e4ad5f37d175e21 I1005 10:56:21.668923 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-e915108afc02e9627e4ad5f37d175e21 I1005 10:56:21.669114 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"87938", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-e915108afc02e9627e4ad5f37d175e21 I1005 10:56:23.713041 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:56:26.689959 1 node_controller.go:1096] No nodes available for updates I1005 10:56:26.690541 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:56:28.713953 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 10:56:28.713989 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:56:28.740707 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:56:28.740790 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain I1005 10:56:28.757002 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 10:56:29.393842 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 10:56:29.397129 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-njsgc I1005 10:56:29.397168 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-62m27 I1005 10:56:29.397333 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-d99gx I1005 10:56:29.397403 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-x6tfq I1005 10:56:29.397480 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-88fbl I1005 10:56:29.397545 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 
10:56:29.397579 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:56:29.397631 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-blt6b I1005 10:56:29.397682 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-j2znm I1005 10:56:29.397713 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-44qf4 I1005 10:56:29.397752 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-hsqd2 I1005 10:56:29.397798 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-tw9gh E1005 10:56:29.406866 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-x6tfq" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 10:56:29.417840 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:56:30.459851 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-blt6b I1005 10:56:30.460248 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-njsgc I1005 10:56:30.471394 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-j2znm I1005 10:56:30.471621 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-62m27 I1005 10:56:31.231030 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-hsqd2 I1005 10:56:31.430093 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-tw9gh I1005 10:56:31.691039 1 node_controller.go:1096] No nodes available for updates I1005 10:56:31.832457 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-d99gx I1005 10:56:32.037238 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 10:56:32.230291 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-44qf4 I1005 10:56:34.407707 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-x6tfq I1005 10:56:34.418830 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:56:34.429265 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 10:56:39.429468 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 10:56:39.435830 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
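[Editor's note] Every update pass in this log follows the same cordon, drain, reboot, uncordon shape, and the "evicting pod", "error when evicting pods/..." and "WARNING: ignoring DaemonSet-managed Pods" messages match the upstream kubectl drain helper that the drain controller appears to wrap. A minimal standalone sketch of that cordon-then-drain sequence using k8s.io/kubectl/pkg/drain follows; the helper settings are illustrative guesses, not the controller's actual configuration, and the node name is taken from the log.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/kubectl/pkg/drain"
)

func main() {
	// Assumption: credentials come from the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodeName := "ip-10-0-49-13.us-east-2.compute.internal"
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	helper := &drain.Helper{
		Ctx:                 context.TODO(),
		Client:              cs,
		IgnoreAllDaemonSets: true, // DaemonSet pods are skipped, hence the warning list in the log
		GracePeriodSeconds:  -1,   // use each pod's own terminationGracePeriodSeconds
		Timeout:             10 * time.Minute,
		Out:                 os.Stdout,
		ErrOut:              os.Stderr,
	}

	// Cordon first (mark the node unschedulable), then evict everything that
	// is not DaemonSet-managed; PDB-blocked evictions surface as errors that
	// the caller retries, as shown throughout the log.
	if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
		panic(err)
	}
	if err := drain.RunNodeDrain(helper, nodeName); err != nil {
		fmt.Println("drain did not complete:", err)
	}
}

The controller re-runs this sequence per node, one node at a time (capacity: 1), which is why the same DaemonSet warning list reappears on every pass. [End editor's note]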
I1005 10:56:44.436568 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 10:56:46.485618 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0 I1005 10:57:00.445201 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-x6tfq I1005 10:57:18.469059 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-88fbl I1005 10:57:18.469095 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:58:01.225517 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 10:58:01.271024 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:58:06.226166 1 node_controller.go:1096] No nodes available for updates I1005 10:58:06.817817 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:58:11.818980 1 node_controller.go:1096] No nodes available for updates I1005 10:58:14.707022 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False I1005 10:58:14.724790 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:58:14.751624 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:58:16.728785 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:58:16.774326 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:58:19.710170 1 node_controller.go:1096] No nodes available for updates I1005 10:58:23.683214 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable I1005 10:58:23.723152 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:58:26.783323 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:58:28.683900 1 node_controller.go:1096] No nodes available for updates I1005 10:58:28.683900 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 10:58:28.684008 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 10:58:28.723917 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 10:58:28.724001 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 10:58:28.763605 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:58:33.764505 1 
node_controller.go:1096] No nodes available for updates I1005 10:58:34.836707 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-e915108afc02e9627e4ad5f37d175e21 I1005 10:58:39.837302 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-e915108afc02e9627e4ad5f37d175e21 I1005 10:59:05.789623 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:59:10.805817 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:59:10.847229 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 10:59:10.851096 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 10:59:10.851123 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:59:10.878931 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 10:59:10.878978 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"91652", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f E1005 10:59:10.923759 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:59:10.923813 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 10:59:12.845346 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 10:59:13.914750 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 10:59:13.914785 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 10:59:13.934712 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 10:59:13.934786 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 10:59:13.980009 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 10:59:14.574285 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, 
openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 10:59:14.576811 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-hktmg I1005 10:59:14.576861 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 I1005 10:59:14.576831 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-7b7sj I1005 10:59:14.576863 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-czhj2 I1005 10:59:14.576841 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-4clk9 I1005 10:59:14.576849 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-n74vn I1005 10:59:14.576848 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-xj4bj I1005 10:59:14.576854 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 10:59:14.576869 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-w7kc7 I1005 10:59:14.576878 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-zmznt I1005 10:59:14.576859 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-w4xp2 I1005 10:59:14.576881 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-9ftkh I1005 10:59:15.661173 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-hktmg I1005 10:59:15.829326 1 node_controller.go:1096] No nodes available for updates I1005 10:59:15.834252 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 10:59:16.431575 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-n74vn I1005 10:59:16.822196 1 request.go:696] Waited for 1.143785572s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/monitoring-plugin-6d8f5944dc-w7kc7 I1005 10:59:16.824740 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-w7kc7 I1005 10:59:17.027401 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1 I1005 10:59:17.423656 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-9ftkh I1005 10:59:17.625658 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-zmznt I1005 10:59:18.021555 1 request.go:696] Waited for 1.36256421s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-1 I1005 10:59:18.026758 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 
10:59:18.225403 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-czhj2 I1005 10:59:18.426313 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-7b7sj I1005 10:59:18.825137 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-w4xp2 I1005 10:59:20.835467 1 node_controller.go:1096] No nodes available for updates I1005 10:59:41.681668 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-4clk9 I1005 11:00:02.662920 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-xj4bj I1005 11:00:02.662957 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:00:02.678333 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 11:00:02.679526 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 11:00:02.687608 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:00:02.687668 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain E1005 11:00:03.324562 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 11:00:03.324583 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:00:41.826519 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 11:00:41.864628 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:00:43.810654 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 11:00:43.810792 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:00:43.832180 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:00:46.827530 1 node_controller.go:1096] No nodes available for updates I1005 11:00:47.409436 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:00:52.410589 1 node_controller.go:1096] No nodes available for updates I1005 
11:01:02.708783 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 11:01:02.747358 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:01:07.322212 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:01:07.709813 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 11:01:07.709869 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 11:01:07.749974 1 node_controller.go:1096] No nodes available for updates I1005 11:01:07.767900 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 11:01:07.767916 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:01:07.801669 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:01:12.802526 1 node_controller.go:1096] No nodes available for updates I1005 11:01:14.066708 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:01:19.067800 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 11:01:19.067814 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:01:19.084184 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"91960", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:01:19.089325 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:01:20.508322 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 11:01:24.089563 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 11:01:24.089611 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 11:01:24.113822 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:01:24.113898 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain I1005 11:01:24.123778 1 node_controller.go:1096] No nodes available for updates I1005 11:01:24.175493 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 11:01:24.308100 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation 
cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:01:24.308179 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:01:24.343637 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 11:01:24.802030 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 11:01:24.804385 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-9cxtf I1005 11:01:24.804386 1 drain_controller.go:144] evicting pod openshift-operator-lifecycle-manager/collect-profiles-28275060-8zqkq I1005 11:01:24.804395 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-65d7564d89-ctjjx I1005 11:01:24.804399 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-kk5nf I1005 11:01:24.804403 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-j6n2j I1005 11:01:24.804410 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-fgz8r I1005 11:01:24.804410 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-x42gk I1005 11:01:24.804412 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-8npx2 I1005 11:01:24.804430 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:01:24.804437 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-27j28 I1005 11:01:24.804446 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-hf688 I1005 11:01:24.804448 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-h8dhs I1005 11:01:24.804449 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 E1005 11:01:24.817506 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-9cxtf" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:01:24.819883 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
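The repeated "Operation cannot be fulfilled ... the object has been modified; please apply your changes to the latest version and try again" errors on the worker MachineConfigPool are ordinary optimistic-concurrency conflicts: another writer bumped the resourceVersion between the controller's read and its update, and the sync simply runs again. A common way to handle this is client-go's RetryOnConflict, sketched below against a Node annotation purely for brevity; setNodeAnnotation is a hypothetical helper, not part of the MCO.

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// setNodeAnnotation updates an annotation under optimistic concurrency,
// re-reading the object on every attempt so the write always targets the
// latest resourceVersion.
func setNodeAnnotation(ctx context.Context, client kubernetes.Interface, nodeName, key, value string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if node.Annotations == nil {
			node.Annotations = map[string]string{}
		}
		node.Annotations[key] = value
		_, err = client.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{})
		return err
	})
}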
I1005 11:01:24.873715 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-operator-lifecycle-manager/collect-profiles-28275060-8zqkq I1005 11:01:26.436863 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-27j28 I1005 11:01:26.635957 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-hf688 I1005 11:01:26.835667 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-h8dhs I1005 11:01:27.036339 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 11:01:27.235710 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-kk5nf I1005 11:01:27.436232 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-x42gk I1005 11:01:27.636943 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-65d7564d89-ctjjx I1005 11:01:27.837234 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-8npx2 I1005 11:01:28.032568 1 request.go:696] Waited for 1.157428778s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-ingress/pods/router-default-d49dc89bd-fgz8r I1005 11:01:28.244249 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-j6n2j I1005 11:01:29.138009 1 node_controller.go:1096] No nodes available for updates I1005 11:01:29.818401 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-9cxtf I1005 11:01:29.820528 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 11:01:29.827459 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:01:34.828437 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 11:01:34.834373 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
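The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's local rate limiter (default QPS 5, burst 10), not from API Priority and Fairness on the server. A controller that needs to issue bursts of pod GETs during a drain can raise those limits on its rest.Config, as in the sketch below; the function name newFastClient and the specific QPS/burst values are illustrative assumptions.

package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newFastClient builds an in-cluster clientset with a higher client-side
// rate limit than the client-go defaults (QPS 5, Burst 10).
func newFastClient() (kubernetes.Interface, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // default is 5
	cfg.Burst = 100 // default is 10
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	return client, nil
}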
I1005 11:01:39.834854 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:01:42.876886 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0 I1005 11:01:56.849322 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-9cxtf I1005 11:02:11.878154 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-fgz8r I1005 11:02:11.878187 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:02:11.891256 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 11:02:11.891282 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 11:02:11.896472 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:02:11.896526 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain E1005 11:02:12.535296 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 11:02:12.535318 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:03:12.341497 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False I1005 11:03:12.380468 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:03:12.470841 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:03:17.342622 1 node_controller.go:1096] No nodes available for updates I1005 11:03:22.455046 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable I1005 11:03:22.489808 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:03:27.364238 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:03:27.455353 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 11:03:27.455373 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 11:03:27.455484 1 node_controller.go:1096] No nodes available for updates I1005 11:03:27.478436 1 drain_controller.go:173] node 
ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 11:03:27.478476 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:03:27.490113 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:03:32.490666 1 node_controller.go:1096] No nodes available for updates I1005 11:03:32.865081 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:03:37.865928 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f W1005 11:04:29.640851 1 warnings.go:70] unknown field "spec.dns.spec.platform" W1005 11:04:29.685109 1 warnings.go:70] unknown field "spec.dns.spec.platform" W1005 11:04:29.760166 1 warnings.go:70] unknown field "spec.dns.spec.platform" W1005 11:04:30.840913 1 warnings.go:70] unknown field "spec.dns.spec.platform" I1005 11:08:37.697515 1 render_controller.go:510] Generated machineconfig rendered-worker-ae467f0e9a17ca0045c7c74e53e2027f from 8 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 } {MachineConfig ztc-42361-change-workers-chrony-configuration-7vvu0wnd machineconfiguration.openshift.io/v1 }] I1005 11:08:37.698052 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"95208", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-ae467f0e9a17ca0045c7c74e53e2027f successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f) I1005 11:08:37.709131 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-ae467f0e9a17ca0045c7c74e53e2027f I1005 11:08:42.736254 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:08:42.752184 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:08:42.752879 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 11:08:42.752961 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-ae467f0e9a17ca0045c7c74e53e2027f I1005 11:08:42.776549 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-ae467f0e9a17ca0045c7c74e53e2027f I1005 11:08:42.777847 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", 
UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"97969", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-ae467f0e9a17ca0045c7c74e53e2027f E1005 11:08:42.843953 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:08:42.843972 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:08:44.807022 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 11:08:47.735919 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 11:08:47.736020 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 11:08:47.762829 1 node_controller.go:1096] No nodes available for updates I1005 11:08:47.763834 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:08:47.763899 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 11:08:47.764566 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:08:47.909064 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 11:08:48.407278 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 11:08:48.409148 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-qwtzl I1005 11:08:48.409165 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-4qr2w I1005 11:08:48.409203 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-nw2tc I1005 11:08:48.409285 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 I1005 11:08:48.409331 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-6pbgq I1005 11:08:48.409387 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-lm9gv I1005 11:08:48.409314 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 11:08:48.409478 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-6qqc2 I1005 11:08:48.409323 1 
drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-qcm5k I1005 11:08:48.409550 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-jrj2b I1005 11:08:48.409578 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-d8lxl I1005 11:08:48.409622 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-gs8fz I1005 11:08:50.245230 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-6qqc2 I1005 11:08:50.642020 1 request.go:696] Waited for 1.150793959s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/monitoring-plugin-6d8f5944dc-6pbgq I1005 11:08:50.646605 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-6pbgq I1005 11:08:51.244332 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-gs8fz I1005 11:08:51.444694 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-qcm5k I1005 11:08:51.841545 1 request.go:696] Waited for 1.369772719s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-7c8dc7fcdb-4qr2w I1005 11:08:51.844404 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-4qr2w I1005 11:08:52.047201 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1 I1005 11:08:52.245154 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-nw2tc I1005 11:08:52.447400 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-jrj2b I1005 11:08:52.765373 1 node_controller.go:1096] No nodes available for updates I1005 11:08:52.846680 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 11:08:53.042171 1 request.go:696] Waited for 1.370094891s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/openshift-state-metrics-547dffdc-d8lxl I1005 11:08:53.046226 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-d8lxl I1005 11:09:15.474023 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-lm9gv I1005 11:09:40.486126 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-qwtzl I1005 11:09:40.486155 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:10:36.035992 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node 
ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 11:10:36.081292 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:10:37.589274 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:10:41.037176 1 node_controller.go:1096] No nodes available for updates I1005 11:10:44.693471 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 11:10:44.722146 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:10:47.465076 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:10:49.694574 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 11:10:49.694675 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 11:10:49.694648 1 node_controller.go:1096] No nodes available for updates I1005 11:10:49.714124 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 11:10:49.714140 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:10:49.736402 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:10:54.760147 1 node_controller.go:1096] No nodes available for updates I1005 11:10:56.961222 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-ae467f0e9a17ca0045c7c74e53e2027f I1005 11:11:01.961648 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 11:11:01.961664 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-ae467f0e9a17ca0045c7c74e53e2027f I1005 11:11:01.983570 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"98004", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-ae467f0e9a17ca0045c7c74e53e2027f I1005 11:11:01.984084 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-ae467f0e9a17ca0045c7c74e53e2027f I1005 11:11:04.057209 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 11:11:06.985088 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 11:11:06.985138 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 11:11:07.007330 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) 
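The update handshake visible here is annotation-driven: the node controller writes machineconfiguration.openshift.io/desiredConfig and the daemon reports progress through machineconfiguration.openshift.io/state (Working in the log), with the pool declaring "Completed update to rendered-worker-..." once the node's applied config matches the desired one. The helper below illustrates that convention; the currentConfig annotation name and the "Done" state value do not appear in this excerpt and are assumptions based on the usual MCO annotation scheme.

package sketch

import corev1 "k8s.io/api/core/v1"

const (
	desiredConfigAnnotation = "machineconfiguration.openshift.io/desiredConfig"
	currentConfigAnnotation = "machineconfiguration.openshift.io/currentConfig" // assumed counterpart annotation
	stateAnnotation         = "machineconfiguration.openshift.io/state"
)

// nodeUpdateComplete reports whether a node has finished applying its desired
// rendered MachineConfig, i.e. current == desired and the state is no longer
// "Working".
func nodeUpdateComplete(node *corev1.Node) bool {
	a := node.Annotations
	return a[currentConfigAnnotation] != "" &&
		a[currentConfigAnnotation] == a[desiredConfigAnnotation] &&
		a[stateAnnotation] == "Done" // assumed terminal state value
}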
I1005 11:11:07.007457 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain I1005 11:11:07.023680 1 node_controller.go:1096] No nodes available for updates I1005 11:11:07.024048 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 11:11:07.134789 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:11:07.134806 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:11:07.240699 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 11:11:07.669207 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 11:11:07.671305 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-q8g85 I1005 11:11:07.671308 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-ddt9x I1005 11:11:07.671318 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:11:07.671320 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-tldqn I1005 11:11:07.671327 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-thq2q I1005 11:11:07.671330 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 11:11:07.671337 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-sk2h7 I1005 11:11:07.671339 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-w74vk I1005 11:11:07.671343 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-5jnrn I1005 11:11:07.671346 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-dtfl6 I1005 11:11:07.671347 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-f7lbn I1005 11:11:07.671353 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-zpkgz E1005 11:11:07.680787 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-5jnrn" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
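The "WARNING: ignoring DaemonSet-managed Pods" entries reflect the standard drain rule: pods owned by a DaemonSet are skipped, because the DaemonSet controller would immediately recreate them on the cordoned node anyway. A minimal sketch of that filter is below; splitDaemonSetPods is a hypothetical helper shown only to make the rule concrete.

package sketch

import corev1 "k8s.io/api/core/v1"

// splitDaemonSetPods separates pods controlled by a DaemonSet (skipped with a
// warning during drain) from pods that should actually be evicted.
func splitDaemonSetPods(pods []corev1.Pod) (skip, evict []corev1.Pod) {
	for _, p := range pods {
		owned := false
		for _, ref := range p.OwnerReferences {
			if ref.Kind == "DaemonSet" && ref.Controller != nil && *ref.Controller {
				owned = true
				break
			}
		}
		if owned {
			skip = append(skip, p)
		} else {
			evict = append(evict, p)
		}
	}
	return skip, evict
}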
E1005 11:11:07.680883 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:11:08.727160 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-q8g85 I1005 11:11:08.741203 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-ddt9x I1005 11:11:08.915182 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-dtfl6 I1005 11:11:09.111526 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 11:11:09.511281 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-f7lbn I1005 11:11:09.709792 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-zpkgz I1005 11:11:09.910602 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-w74vk I1005 11:11:10.308785 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-sk2h7 I1005 11:11:10.510074 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-thq2q I1005 11:11:12.024657 1 node_controller.go:1096] No nodes available for updates I1005 11:11:12.681228 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-5jnrn I1005 11:11:12.681228 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 11:11:12.687115 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:11:17.687696 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 11:11:17.693367 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
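When an eviction keeps being refused, the PodDisruptionBudget status tells you why: disruptionsAllowed stays at 0 until enough replicas are healthy on other nodes, which is exactly why alertmanager-main-0 is rejected a few times and then evicts successfully. The sketch below lists the PDBs in a namespace and prints their status; printPDBStatus is a hypothetical helper, and the client wiring is assumed.

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printPDBStatus prints how many voluntary disruptions each PodDisruptionBudget
// in the namespace currently allows.
func printPDBStatus(ctx context.Context, client kubernetes.Interface, namespace string) error {
	pdbs, err := client.PolicyV1().PodDisruptionBudgets(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, pdb := range pdbs.Items {
		fmt.Printf("%s/%s: disruptionsAllowed=%d currentHealthy=%d desiredHealthy=%d\n",
			pdb.Namespace, pdb.Name,
			pdb.Status.DisruptionsAllowed, pdb.Status.CurrentHealthy, pdb.Status.DesiredHealthy)
	}
	return nil
}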
I1005 11:11:22.694164 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:11:24.745385 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0 I1005 11:11:39.724762 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-5jnrn I1005 11:11:57.742762 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-tldqn I1005 11:11:57.742808 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:11:57.768369 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 11:11:57.768463 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 11:11:57.779411 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:11:57.779459 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain E1005 11:11:58.414816 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 11:11:58.414840 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:12:45.380490 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False I1005 11:12:45.405607 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:12:47.511335 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:12:50.381389 1 node_controller.go:1096] No nodes available for updates I1005 11:13:04.068316 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable I1005 11:13:04.102093 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:13:07.500587 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:13:09.068765 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 11:13:09.068787 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 11:13:09.068852 1 node_controller.go:1096] No nodes available for updates I1005 11:13:09.094614 1 drain_controller.go:173] node 
ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 11:13:09.094714 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:13:09.108876 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:13:14.109117 1 node_controller.go:1096] No nodes available for updates I1005 11:13:16.333292 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-ae467f0e9a17ca0045c7c74e53e2027f I1005 11:13:21.334282 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-ae467f0e9a17ca0045c7c74e53e2027f I1005 11:13:51.913694 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:13:56.940153 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:13:56.947857 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 11:13:56.947915 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:13:56.956503 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:13:56.976014 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:13:56.976309 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"101669", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f E1005 11:13:57.040356 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:13:57.040383 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:13:58.375729 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 11:14:01.957254 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 11:14:01.957339 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 11:14:01.958834 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting ready I1005 11:14:01.958900 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:14:01.962900 1 node_controller.go:1096] No nodes available for 
updates I1005 11:14:01.973789 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 11:14:01.976990 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:14:01.977054 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 11:14:01.997079 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 11:14:02.643312 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 11:14:02.645119 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-7szgp I1005 11:14:02.645125 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-p28gq I1005 11:14:02.645130 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-5t24s I1005 11:14:02.645133 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 I1005 11:14:02.645139 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-dcqzl I1005 11:14:02.645141 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-5w624 I1005 11:14:02.645143 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-hnm25 I1005 11:14:02.645148 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-42hgt I1005 11:14:02.645148 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-zlj94 I1005 11:14:02.645151 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 11:14:02.645155 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-sdn4z I1005 11:14:02.645157 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-7w4bl I1005 11:14:03.693073 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-p28gq I1005 11:14:03.702813 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-dcqzl I1005 11:14:03.877955 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-5w624 I1005 11:14:04.874718 1 request.go:696] Waited for 1.134120132s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-1 I1005 11:14:04.897179 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: 
Evicted pod openshift-monitoring/alertmanager-main-1 I1005 11:14:05.279367 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-sdn4z I1005 11:14:05.679567 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-zlj94 I1005 11:14:05.877174 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-42hgt I1005 11:14:06.074413 1 request.go:696] Waited for 1.361326567s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-adapter-5f5bfcdcb5-5t24s I1005 11:14:06.076661 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-5t24s I1005 11:14:06.278894 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-hnm25 I1005 11:14:06.478209 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 11:14:06.960087 1 node_controller.go:1096] No nodes available for updates E1005 11:14:10.950112 1 render_controller.go:460] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again I1005 11:14:10.950127 1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again I1005 11:14:30.119967 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-7w4bl I1005 11:14:49.757566 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-7szgp I1005 11:14:49.757598 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:15:32.533724 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 11:15:32.566692 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:15:37.534625 1 node_controller.go:1096] No nodes available for updates I1005 11:15:38.049953 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:15:39.316669 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 11:15:39.337381 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:15:39.357030 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:15:43.008862 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node 
ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:15:43.028072 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:15:43.050350 1 node_controller.go:1096] No nodes available for updates I1005 11:15:58.664398 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 11:15:58.699479 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:16:03.049827 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:16:03.664406 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 11:16:03.664436 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 11:16:03.665402 1 node_controller.go:1096] No nodes available for updates I1005 11:16:03.688303 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 11:16:03.688319 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:16:03.710360 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:16:08.710633 1 node_controller.go:1096] No nodes available for updates I1005 11:16:10.453629 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:16:15.453851 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 11:16:15.453866 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:16:15.470225 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"101712", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:16:15.472369 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:16:16.885886 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 11:16:20.473565 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 11:16:20.473607 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 11:16:20.488758 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:16:20.488829 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain I1005 11:16:20.500847 1 
node_controller.go:1096] No nodes available for updates I1005 11:16:20.502025 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:16:20.533045 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 11:16:20.643985 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:16:20.644001 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again E1005 11:16:21.183857 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 11:16:21.185574 1 drain_controller.go:144] evicting pod openshift-operator-lifecycle-manager/collect-profiles-28275075-trtl2 I1005 11:16:21.185584 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-gbjsd I1005 11:16:21.185595 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-kz2gr I1005 11:16:21.185633 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-pbqgc I1005 11:16:21.185682 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-8zlwx I1005 11:16:21.185744 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-n44dq I1005 11:16:21.185747 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:16:21.185718 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-s8dr8 I1005 11:16:21.185724 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 11:16:21.185732 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-scj2c I1005 11:16:21.185733 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-m6lwg I1005 11:16:21.185758 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-fpxtr I1005 11:16:21.185574 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-925v2 E1005 11:16:21.196466 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-gbjsd" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:16:21.197351 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
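The cordon/drain sequence above (cordon succeeded, initiating drain, "WARNING: ignoring DaemonSet-managed Pods", then per-pod evictions) follows the same mechanics as kubectl drain. The following is a minimal Go sketch of that flow using the upstream k8s.io/kubectl/pkg/drain helpers; the node name is taken from the log as an example, the in-cluster config and timeout are placeholder choices, and this illustrates the general drain pattern rather than the controller's actual code.

    package main

    import (
        "context"
        "log"
        "os"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/kubectl/pkg/drain"
    )

    func main() {
        nodeName := "ip-10-0-4-193.us-east-2.compute.internal" // example node from the log

        cfg, err := rest.InClusterConfig() // assumes this runs inside the cluster
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        helper := &drain.Helper{
            Ctx:                 context.Background(),
            Client:              client,
            IgnoreAllDaemonSets: true, // matches the "ignoring DaemonSet-managed Pods" warning
            DeleteEmptyDirData:  true,
            GracePeriodSeconds:  -1, // use each pod's own terminationGracePeriodSeconds
            Timeout:             10 * time.Minute,
            Out:                 os.Stdout,
            ErrOut:              os.Stderr,
        }

        node, err := client.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }

        // Cordon first (node.Spec.Unschedulable = true), then evict everything that is
        // not DaemonSet-managed, respecting PodDisruptionBudgets.
        if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
            log.Fatal(err)
        }
        if err := drain.RunNodeDrain(helper, node.Name); err != nil {
            log.Fatal(err)
        }
    }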
E1005 11:16:21.197555 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:16:21.233953 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-operator-lifecycle-manager/collect-profiles-28275075-trtl2 I1005 11:16:22.237665 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-n44dq I1005 11:16:23.236764 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-s8dr8 I1005 11:16:23.630888 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-kz2gr I1005 11:16:23.825273 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-scj2c I1005 11:16:24.023319 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-pbqgc I1005 11:16:24.226443 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-m6lwg I1005 11:16:24.428070 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-fpxtr I1005 11:16:24.823044 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-8zlwx I1005 11:16:25.502964 1 node_controller.go:1096] No nodes available for updates I1005 11:16:26.197232 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-gbjsd I1005 11:16:26.197488 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 11:16:26.198300 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 11:16:26.204032 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:16:26.204350 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:16:31.205131 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 11:16:31.205153 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 11:16:31.211948 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
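Each "error when evicting pods/... Cannot evict pod as it would violate the pod's disruption budget" entry is the API server rejecting an Eviction with HTTP 429 because the pod's PodDisruptionBudget currently allows zero disruptions; the drain controller simply retries after 5s until the budget frees up. A rough sketch of that evict-and-retry pattern is below; the pod name, namespace, and 5-second interval are taken from the log as examples, and the loop structure is illustrative rather than the controller's own code.

    package main

    import (
        "context"
        "log"
        "time"

        policyv1 "k8s.io/api/policy/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        eviction := &policyv1.Eviction{
            ObjectMeta: metav1.ObjectMeta{
                Name:      "alertmanager-main-0", // example pod from the log
                Namespace: "openshift-monitoring",
            },
        }

        for {
            err := client.PolicyV1().Evictions(eviction.Namespace).Evict(context.Background(), eviction)
            switch {
            case err == nil:
                log.Printf("eviction accepted for %s/%s", eviction.Namespace, eviction.Name)
                return
            case apierrors.IsTooManyRequests(err):
                // PDB allows no disruptions right now; back off and retry.
                log.Printf("blocked by PodDisruptionBudget, retrying in 5s: %v", err)
                time.Sleep(5 * time.Second)
            case apierrors.IsNotFound(err):
                // Pod already gone; nothing left to do.
                return
            default:
                log.Fatal(err)
            }
        }
    }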
I1005 11:16:33.272448 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 11:16:36.212039 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:16:38.245155 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0 I1005 11:16:52.227696 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-gbjsd I1005 11:17:08.816583 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-925v2 I1005 11:17:08.816615 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:17:53.082048 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 11:17:53.101995 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:17:58.082636 1 node_controller.go:1096] No nodes available for updates I1005 11:17:58.108542 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False I1005 11:17:58.134096 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:17:58.156281 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:17:58.668135 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:17:59.978073 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change I1005 11:18:03.109648 1 node_controller.go:1096] No nodes available for updates I1005 11:18:17.071555 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable I1005 11:18:17.108074 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:18:18.663314 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:18:22.072567 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 11:18:22.072588 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 11:18:22.072659 1 node_controller.go:1096] No nodes available for updates I1005 11:18:22.098860 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 11:18:22.098947 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:18:22.132151 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:18:27.132856 1 node_controller.go:1096] No nodes available for updates I1005 11:18:29.443953 1 node_controller.go:493] Pool 
worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:18:34.445047 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:19:09.901063 1 render_controller.go:510] Generated machineconfig rendered-worker-3842b3f61f31818448216b91fdb6a62c from 8 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 } {MachineConfig change-worker-all-extensions-vc8xmwbt machineconfiguration.openshift.io/v1 }] I1005 11:19:09.903015 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"105261", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-3842b3f61f31818448216b91fdb6a62c successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f) I1005 11:19:09.915911 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-3842b3f61f31818448216b91fdb6a62c I1005 11:19:14.933485 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:19:14.959171 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 11:19:14.959260 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-3842b3f61f31818448216b91fdb6a62c I1005 11:19:14.966217 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:19:14.976524 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"105727", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-3842b3f61f31818448216b91fdb6a62c I1005 11:19:14.977170 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-3842b3f61f31818448216b91fdb6a62c E1005 11:19:14.993112 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:19:14.993130 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try 
again I1005 11:19:17.027189 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 11:19:18.556979 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 11:19:18.557023 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 11:19:18.576327 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:19:18.576408 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 11:19:18.596844 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 11:19:19.239770 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 11:19:19.241792 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-jtqfb I1005 11:19:19.241838 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-gt22c I1005 11:19:19.241791 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 11:19:19.241801 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-9mgjh I1005 11:19:19.241810 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-k47rd I1005 11:19:19.241815 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-c846b I1005 11:19:19.241818 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-gxhvj I1005 11:19:19.241819 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-2fj6s I1005 11:19:19.241829 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-z7jkq I1005 11:19:19.241830 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-55cxn I1005 11:19:19.241835 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 I1005 11:19:19.241827 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-8vhxl I1005 11:19:19.967915 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:19:19.968695 1 node_controller.go:1096] No nodes available for updates I1005 11:19:20.321027 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-jtqfb I1005 11:19:21.309772 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-k47rd I1005 11:19:21.506763 1 request.go:696] Waited for 
1.17806674s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-578654767d-z7jkq I1005 11:19:21.511674 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-z7jkq I1005 11:19:21.913067 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 11:19:22.110075 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-8vhxl I1005 11:19:22.309671 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-55cxn I1005 11:19:22.508927 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-gxhvj I1005 11:19:22.706468 1 request.go:696] Waited for 1.38713587s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-adapter-5f5bfcdcb5-2fj6s I1005 11:19:22.711748 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-2fj6s I1005 11:19:23.110362 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-gt22c I1005 11:19:23.309994 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1 I1005 11:19:24.968689 1 node_controller.go:1096] No nodes available for updates I1005 11:19:46.341989 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-c846b I1005 11:20:09.326087 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-9mgjh I1005 11:20:09.326117 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:22:08.721274 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 11:22:08.753461 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:22:13.722281 1 node_controller.go:1096] No nodes available for updates I1005 11:22:14.231783 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:22:19.232018 1 node_controller.go:1096] No nodes available for updates I1005 11:22:21.778809 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 11:22:21.802653 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:22:21.823258 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:22:24.188030 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node 
ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:22:24.232040 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:22:26.779354 1 node_controller.go:1096] No nodes available for updates I1005 11:22:30.821573 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 11:22:30.851113 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:22:34.247830 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:22:35.822179 1 node_controller.go:1096] No nodes available for updates I1005 11:22:46.037549 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 11:22:46.037565 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 11:22:46.063059 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 11:22:46.063142 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:22:46.090302 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:22:51.055140 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-3842b3f61f31818448216b91fdb6a62c I1005 11:22:51.091442 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 11:22:51.091461 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-3842b3f61f31818448216b91fdb6a62c I1005 11:22:51.111947 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"106020", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-3842b3f61f31818448216b91fdb6a62c I1005 11:22:51.114806 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-3842b3f61f31818448216b91fdb6a62c I1005 11:22:53.120935 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 11:22:56.115721 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 11:22:56.115759 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 11:22:56.135811 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:22:56.135894 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain I1005 11:22:56.136759 1 node_controller.go:1096] No nodes available for updates I1005 11:22:56.138294 1 
node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:22:56.165684 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 11:22:56.264902 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:22:56.264921 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again E1005 11:22:56.793635 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 11:22:56.795789 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-6nbqg I1005 11:22:56.795806 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:22:56.795818 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-lh4z4 I1005 11:22:56.795909 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-h7zhh I1005 11:22:56.795922 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-gcssf I1005 11:22:56.795968 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-njvc2 I1005 11:22:56.795978 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-gn2bx I1005 11:22:56.796027 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-lr5nw I1005 11:22:56.796030 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-l6wm7 I1005 11:22:56.795793 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-krnb6 I1005 11:22:56.796081 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 11:22:56.796156 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-phl9v E1005 11:22:56.803880 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:22:56.804231 1 drain_controller.go:144] error when evicting pods/"prometheus-adapter-5f5bfcdcb5-l6wm7" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
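The recurring "Error updating MachineConfigPool worker: Operation cannot be fulfilled ... the object has been modified" entries are ordinary optimistic-concurrency conflicts: another writer bumped the pool's resourceVersion between the controller's read and write, so the update is rejected and the sync is re-queued. The standard client-go pattern for this is to re-read and retry inside retry.RetryOnConflict. A sketch against the machineconfigpools resource via the dynamic client follows; the annotation written here is purely illustrative and not something the operator sets.

    package main

    import (
        "context"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        dyn, err := dynamic.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // MachineConfigPool is a cluster-scoped resource in this API group.
        mcpGVR := schema.GroupVersionResource{
            Group:    "machineconfiguration.openshift.io",
            Version:  "v1",
            Resource: "machineconfigpools",
        }

        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Re-read on every attempt so the write carries the latest resourceVersion.
            pool, err := dyn.Resource(mcpGVR).Get(context.Background(), "worker", metav1.GetOptions{})
            if err != nil {
                return err
            }
            annotations := pool.GetAnnotations()
            if annotations == nil {
                annotations = map[string]string{}
            }
            annotations["example.com/touched"] = "true" // illustrative mutation only
            pool.SetAnnotations(annotations)

            _, err = dyn.Resource(mcpGVR).Update(context.Background(), pool, metav1.UpdateOptions{})
            return err // a Conflict error here triggers another Get+Update attempt
        })
        if err != nil {
            log.Fatal(err)
        }
    }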
E1005 11:22:56.805919 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-lh4z4" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:22:57.018764 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:22:58.845123 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-6nbqg I1005 11:22:58.852601 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-krnb6 I1005 11:22:58.852800 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-gcssf I1005 11:22:59.431000 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-njvc2 I1005 11:22:59.840550 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-gn2bx I1005 11:23:00.032065 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-lr5nw I1005 11:23:00.232001 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-h7zhh I1005 11:23:01.139054 1 node_controller.go:1096] No nodes available for updates I1005 11:23:01.804280 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-l6wm7 I1005 11:23:01.804294 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:23:01.806388 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-lh4z4 E1005 11:23:01.810772 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:23:01.812088 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-lh4z4" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:23:02.019334 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 11:23:03.833995 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-l6wm7 I1005 11:23:06.810890 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:23:06.813013 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-lh4z4 E1005 11:23:06.817117 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:23:11.817724 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 11:23:11.822981 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
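The "Waited for ...s due to client-side throttling, not priority and fairness" messages earlier in the log come from client-go's local rate limiter, not from the API server: the burst of per-pod GETs during a drain exceeds the client's configured QPS/Burst, so requests queue briefly on the client side. If that delay mattered, the knobs live on rest.Config, as in the sketch below; the values shown are arbitrary examples, not recommended settings.

    package main

    import (
        "log"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }

        // client-go defaults are QPS=5, Burst=10; raising them reduces the
        // "Waited for ... due to client-side throttling" queuing seen in the log.
        cfg.QPS = 50
        cfg.Burst = 100

        client := kubernetes.NewForConfigOrDie(cfg)
        _ = client // use the clientset as usual
    }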
I1005 11:23:12.059542 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 11:23:16.823939 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:23:18.860279 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0 I1005 11:23:32.842662 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-lh4z4 I1005 11:23:45.231820 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-phl9v I1005 11:23:45.231854 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:25:39.298395 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 11:25:39.331458 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:25:44.311799 1 node_controller.go:1096] No nodes available for updates I1005 11:25:44.772350 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:25:49.772620 1 node_controller.go:1096] No nodes available for updates I1005 11:25:55.590507 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False I1005 11:25:55.615523 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:25:55.644045 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:25:59.771088 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:25:59.782149 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:26:00.591272 1 node_controller.go:1096] No nodes available for updates I1005 11:26:04.474449 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable I1005 11:26:04.489962 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:26:04.797460 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:26:09.475681 1 node_controller.go:1096] No nodes available for updates I1005 11:26:20.572545 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 11:26:20.572636 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 11:26:20.596091 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 11:26:20.596108 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:26:20.748612 1 
node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:26:25.583349 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-3842b3f61f31818448216b91fdb6a62c I1005 11:26:25.749916 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-3842b3f61f31818448216b91fdb6a62c I1005 11:27:16.717493 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:27:21.740123 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:27:21.755398 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 11:27:21.755436 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:27:21.769932 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:27:21.791311 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"110437", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:27:21.791369 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f E1005 11:27:21.879545 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:27:21.879623 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:27:23.822324 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 11:27:26.760414 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:27:26.767625 1 node_controller.go:1096] No nodes available for updates I1005 11:27:28.822587 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 11:27:28.822624 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 11:27:28.842391 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:27:28.842449 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 11:27:28.857358 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 11:27:29.485493 1 drain_controller.go:144] WARNING: ignoring 
DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 11:27:29.487878 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-vbnhf I1005 11:27:29.488089 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-5qjjp I1005 11:27:29.488233 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-pnhkp I1005 11:27:29.488343 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 11:27:29.488470 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-dd2zs I1005 11:27:29.488586 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-mztrm I1005 11:27:29.488699 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-k7pzh I1005 11:27:29.488719 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-njq4r I1005 11:27:29.488808 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 I1005 11:27:29.488848 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-xlcml I1005 11:27:29.488870 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-b66dg I1005 11:27:29.488889 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-8zhkh I1005 11:27:30.565357 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-vbnhf I1005 11:27:30.921891 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-njq4r I1005 11:27:31.523632 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-mztrm I1005 11:27:31.718296 1 request.go:696] Waited for 1.110446231s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-1 I1005 11:27:31.728869 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 11:27:31.760953 1 node_controller.go:1096] No nodes available for updates I1005 11:27:32.121118 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-b66dg I1005 11:27:32.322296 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-8zhkh I1005 11:27:32.520665 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-dd2zs I1005 11:27:32.718838 1 request.go:696] Waited for 1.174003505s due to 
client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/openshift-state-metrics-547dffdc-k7pzh I1005 11:27:32.727891 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-k7pzh I1005 11:27:32.930016 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-xlcml I1005 11:27:33.321790 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1 I1005 11:27:56.599223 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-5qjjp I1005 11:28:16.634871 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-pnhkp I1005 11:28:16.634904 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:28:16.657629 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 11:28:16.657650 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 11:28:16.667415 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:28:16.667460 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain E1005 11:28:17.301456 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 11:28:17.301481 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:29:09.854403 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 11:29:09.881147 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:29:14.855664 1 node_controller.go:1096] No nodes available for updates I1005 11:29:15.336082 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:29:20.336589 1 node_controller.go:1096] No nodes available for updates I1005 11:29:21.672220 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 11:29:21.696811 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:29:21.723679 1 node_controller.go:493] Pool worker[zone=us-east-2a]: 
node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:29:25.295079 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:29:25.315768 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:29:26.672714 1 node_controller.go:1096] No nodes available for updates I1005 11:29:40.421282 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 11:29:40.447728 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:29:45.346691 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:29:45.421857 1 node_controller.go:1096] No nodes available for updates I1005 11:29:54.324414 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 11:29:54.324458 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 11:29:54.351725 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 11:29:54.351755 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:29:54.367129 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:29:59.349650 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:29:59.368266 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 11:29:59.368286 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:29:59.386538 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"110513", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:29:59.395770 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:30:00.808386 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 11:30:04.395981 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 11:30:04.396027 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 11:30:04.432324 1 node_controller.go:1096] No nodes available for updates I1005 11:30:04.449101 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 
11:30:04.456181 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:30:04.463449 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain E1005 11:30:04.566491 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:30:04.566511 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:30:04.589028 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 11:30:05.120024 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 11:30:05.121823 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-zgd9r I1005 11:30:05.121884 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-vxtl5 I1005 11:30:05.121886 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-dcnd8 I1005 11:30:05.121912 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-ghbj9 I1005 11:30:05.121843 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-pzrbl I1005 11:30:05.121834 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-p6xrg I1005 11:30:05.121850 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-l4dwn I1005 11:30:05.121845 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-brkkb I1005 11:30:05.121859 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-w8cbv I1005 11:30:05.121867 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:30:05.121877 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 11:30:05.122363 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-9lzp4 E1005 11:30:05.130216 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:30:05.130974 1 drain_controller.go:144] error when evicting pods/"thanos-querier-6c99b68589-ghbj9" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
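In the entries above, the node controller hands work to each node by setting the machineconfiguration.openshift.io/desiredConfig annotation, and progress is reported back via machineconfiguration.openshift.io/state (Working, then the "Completed update to ..." events). A small sketch for following one node's progress by reading those annotations; the currentConfig key is the commonly paired annotation and is assumed here rather than taken from this log.

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        node, err := client.CoreV1().Nodes().Get(context.Background(),
            "ip-10-0-49-13.us-east-2.compute.internal", metav1.GetOptions{}) // example node from the log
        if err != nil {
            log.Fatal(err)
        }

        a := node.Annotations
        fmt.Println("desiredConfig:", a["machineconfiguration.openshift.io/desiredConfig"])
        fmt.Println("currentConfig:", a["machineconfiguration.openshift.io/currentConfig"]) // assumed companion key
        fmt.Println("state:        ", a["machineconfiguration.openshift.io/state"])
    }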
E1005 11:30:05.132993 1 drain_controller.go:144] error when evicting pods/"prometheus-adapter-5f5bfcdcb5-p6xrg" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:30:05.134992 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-w8cbv" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:30:05.336068 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:30:06.176936 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-zgd9r I1005 11:30:06.182566 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-pzrbl I1005 11:30:06.184869 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-vxtl5 I1005 11:30:07.159314 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-l4dwn I1005 11:30:07.178291 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-dcnd8 I1005 11:30:07.180579 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-brkkb I1005 11:30:09.449644 1 node_controller.go:1096] No nodes available for updates I1005 11:30:10.130946 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:30:10.131096 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-ghbj9 I1005 11:30:10.133075 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-p6xrg I1005 11:30:10.135203 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-w8cbv E1005 11:30:10.138497 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:30:10.140193 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-w8cbv" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:30:10.336248 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 E1005 11:30:10.342767 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
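When a single pod (alertmanager-main-0 here) keeps bouncing off its budget, the quickest explanation is the PDB status itself: disruptionsAllowed typically stays at 0 until the replica evicted from the other node is Ready again, which is why each retry eventually succeeds. A sketch that lists the PodDisruptionBudgets in the namespace and prints the relevant status fields; the namespace is taken from the log as an example.

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        pdbs, err := client.PolicyV1().PodDisruptionBudgets("openshift-monitoring").List(
            context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, pdb := range pdbs.Items {
            // disruptionsAllowed == 0 is exactly the condition that produces the
            // "would violate the pod's disruption budget" eviction errors above.
            fmt.Printf("%s: allowed=%d currentHealthy=%d desiredHealthy=%d\n",
                pdb.Name, pdb.Status.DisruptionsAllowed, pdb.Status.CurrentHealthy, pdb.Status.DesiredHealthy)
        }
    }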
I1005 11:30:12.173610 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-p6xrg I1005 11:30:12.174388 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-ghbj9 I1005 11:30:15.139366 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:30:15.140458 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-w8cbv E1005 11:30:15.145275 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-w8cbv" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:30:15.147050 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:30:15.343167 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 11:30:16.380101 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 11:30:20.145492 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-w8cbv I1005 11:30:20.147615 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 11:30:20.153518 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:30:25.154013 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 11:30:25.164343 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
I1005 11:30:30.165153 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:30:31.205224 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0 I1005 11:30:46.176724 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-w8cbv I1005 11:30:52.560984 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-9lzp4 I1005 11:30:52.561015 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:30:52.577161 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 11:30:52.577833 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 11:30:52.584042 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:30:52.584123 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain E1005 11:30:53.219337 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 11:30:53.219365 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:31:55.380774 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 11:31:55.419135 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:31:59.492323 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False I1005 11:31:59.541300 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:31:59.560640 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:32:00.381516 1 node_controller.go:1096] No nodes available for updates I1005 11:32:00.874159 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:32:05.874559 1 node_controller.go:1096] No nodes available for updates I1005 11:32:17.957211 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable I1005 11:32:17.994920 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node 
ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:32:20.828212 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:32:22.958524 1 node_controller.go:1096] No nodes available for updates I1005 11:32:31.903079 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 11:32:31.903095 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 11:32:31.923377 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 11:32:31.923392 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:32:32.116265 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:32:36.919451 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:32:37.117065 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:35:33.549682 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-66309e062b0f26aeefa1af0c4c330426", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-66309e062b0f26aeefa1af0c4c330426 (was: is: quay.io/mcoqe/layering@sha256:71d824675db3a7783d79edfe78e9bf1a18df33baf687db60d765bb149e623234) I1005 11:35:33.595580 1 render_controller.go:510] Generated machineconfig rendered-worker-66309e062b0f26aeefa1af0c4c330426 from 8 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 } {MachineConfig layering-mc-vpwxqjan machineconfiguration.openshift.io/v1 }] I1005 11:35:33.595876 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"114242", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-66309e062b0f26aeefa1af0c4c330426 successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f) I1005 11:35:33.611044 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-66309e062b0f26aeefa1af0c4c330426 I1005 11:35:38.635728 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:35:38.666107 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 11:35:38.666135 1 
node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-66309e062b0f26aeefa1af0c4c330426 I1005 11:35:38.666894 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:35:38.704051 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-66309e062b0f26aeefa1af0c4c330426", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-66309e062b0f26aeefa1af0c4c330426 (was: is: quay.io/mcoqe/layering@sha256:71d824675db3a7783d79edfe78e9bf1a18df33baf687db60d765bb149e623234) I1005 11:35:38.705188 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"115776", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-66309e062b0f26aeefa1af0c4c330426 I1005 11:35:38.734542 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-66309e062b0f26aeefa1af0c4c330426 E1005 11:35:38.771826 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:35:38.771847 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:35:38.829082 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-66309e062b0f26aeefa1af0c4c330426", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-66309e062b0f26aeefa1af0c4c330426 (was: is: quay.io/mcoqe/layering@sha256:71d824675db3a7783d79edfe78e9bf1a18df33baf687db60d765bb149e623234) I1005 11:35:40.914779 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 11:35:43.629565 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 11:35:43.629615 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 11:35:43.655629 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:35:43.655714 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 11:35:43.667662 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:35:43.683505 1 node_controller.go:1096] No nodes available for updates I1005 
11:35:43.688991 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:35:43.705088 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 11:35:44.321521 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 11:35:44.324033 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 11:35:44.324095 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 I1005 11:35:44.324251 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-9jjnn I1005 11:35:44.324333 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-x4sgd I1005 11:35:44.324444 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-94l78 I1005 11:35:44.324531 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-qbws7 I1005 11:35:44.324619 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-l98hg I1005 11:35:44.324687 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-6cwql I1005 11:35:44.324764 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-cwtm9 I1005 11:35:44.324834 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-wcspc I1005 11:35:44.324883 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-brckt I1005 11:35:44.324938 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-xxbx9 I1005 11:35:44.324960 1 drain_controller.go:144] evicting pod openshift-operator-lifecycle-manager/collect-profiles-28275090-6dp4k I1005 11:35:45.002346 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-operator-lifecycle-manager/collect-profiles-28275090-6dp4k I1005 11:35:46.598552 1 request.go:696] Waited for 1.164031567s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-7ddd77864b-wcspc I1005 11:35:46.615240 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-wcspc I1005 11:35:46.800439 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-94l78 I1005 11:35:47.003658 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-cwtm9 I1005 11:35:47.202256 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod 
openshift-monitoring/prometheus-adapter-5f5bfcdcb5-brckt I1005 11:35:47.603041 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-qbws7 I1005 11:35:47.798274 1 request.go:696] Waited for 1.384166647s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/kube-state-metrics-794c8bd776-9jjnn I1005 11:35:47.801350 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-9jjnn I1005 11:35:48.202050 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-l98hg I1005 11:35:48.403451 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 11:35:48.604006 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1 I1005 11:35:48.668149 1 node_controller.go:1096] No nodes available for updates I1005 11:35:48.730925 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-66309e062b0f26aeefa1af0c4c330426", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-66309e062b0f26aeefa1af0c4c330426 (was: is: quay.io/mcoqe/layering@sha256:71d824675db3a7783d79edfe78e9bf1a18df33baf687db60d765bb149e623234) I1005 11:35:48.798629 1 request.go:696] Waited for 2.370134544s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-6c99b68589-6cwql I1005 11:35:48.802265 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-6cwql I1005 11:36:12.805486 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-xxbx9 I1005 11:36:31.418376 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-x4sgd I1005 11:36:31.418407 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:36:31.435545 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 11:36:31.435617 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 11:36:31.440605 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:36:31.440954 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain E1005 11:36:32.082049 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, 
openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 11:36:32.082070 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:38:17.189811 1 node_controller.go:1096] No nodes available for updates I1005 11:38:17.231460 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-66309e062b0f26aeefa1af0c4c330426", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-66309e062b0f26aeefa1af0c4c330426 (was: is: quay.io/mcoqe/layering@sha256:71d824675db3a7783d79edfe78e9bf1a18df33baf687db60d765bb149e623234) I1005 11:39:05.910232 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 11:39:05.951637 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:39:10.910588 1 node_controller.go:1096] No nodes available for updates I1005 11:39:11.647983 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:39:16.648633 1 node_controller.go:1096] No nodes available for updates I1005 11:39:22.110619 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 11:39:22.137075 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:39:22.161137 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:39:26.623553 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:39:26.650871 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:39:27.111084 1 node_controller.go:1096] No nodes available for updates I1005 11:39:31.065261 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 11:39:31.114177 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:39:31.670487 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:39:36.065865 1 node_controller.go:1096] No nodes available for updates I1005 11:39:43.246510 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 11:39:43.246608 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 11:39:43.277616 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 11:39:43.277864 1 drain_controller.go:173] node 
ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:39:43.297559 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:39:48.279013 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-66309e062b0f26aeefa1af0c4c330426 I1005 11:39:48.302538 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 11:39:48.302555 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-66309e062b0f26aeefa1af0c4c330426 I1005 11:39:48.326537 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-66309e062b0f26aeefa1af0c4c330426 I1005 11:39:48.332375 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"115868", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-66309e062b0f26aeefa1af0c4c330426 I1005 11:39:49.753305 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 11:39:53.327530 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 11:39:53.327567 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 11:39:53.345925 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:39:53.345942 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain I1005 11:39:53.360958 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:39:53.369926 1 node_controller.go:1096] No nodes available for updates I1005 11:39:53.418487 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-66309e062b0f26aeefa1af0c4c330426", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-66309e062b0f26aeefa1af0c4c330426 (was: is: quay.io/mcoqe/layering@sha256:71d824675db3a7783d79edfe78e9bf1a18df33baf687db60d765bb149e623234) I1005 11:39:53.589789 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 11:39:53.998743 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, 
openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 11:39:54.001185 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-gg8sn I1005 11:39:54.001200 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-gkt6b I1005 11:39:54.001187 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-p5ljz I1005 11:39:54.001271 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-dnthl I1005 11:39:54.001319 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 11:39:54.001377 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-v8tmb I1005 11:39:54.001443 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-pb6cc I1005 11:39:54.001439 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:39:54.001499 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-mjflk I1005 11:39:54.001492 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-zv5ch I1005 11:39:54.001190 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-9nk2z I1005 11:39:54.001596 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-qlg6n E1005 11:39:54.011164 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:39:54.012106 1 drain_controller.go:144] error when evicting pods/"prometheus-adapter-5f5bfcdcb5-gkt6b" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:39:54.012151 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-p5ljz" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:39:54.013219 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
I1005 11:39:55.053205 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-v8tmb I1005 11:39:55.444351 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-qlg6n I1005 11:39:56.038851 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-gg8sn I1005 11:39:56.236600 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-pb6cc I1005 11:39:57.052795 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-mjflk I1005 11:39:57.058300 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-zv5ch I1005 11:39:57.074517 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-dnthl I1005 11:39:58.361548 1 node_controller.go:1096] No nodes available for updates I1005 11:39:59.018084 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-gkt6b I1005 11:39:59.018276 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 11:39:59.018283 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:39:59.018808 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-p5ljz E1005 11:39:59.030175 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:39:59.030479 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-p5ljz" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:39:59.030539 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:40:02.059989 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-gkt6b I1005 11:40:04.030701 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:40:04.030701 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 11:40:04.030722 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-p5ljz E1005 11:40:04.039702 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:40:07.106123 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 11:40:09.040774 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 11:40:09.047738 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
I1005 11:40:14.048860 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:40:16.100501 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0 I1005 11:40:32.113845 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-p5ljz I1005 11:40:41.267276 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-9nk2z I1005 11:40:41.267305 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:40:41.280490 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 11:40:41.280570 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: false) I1005 11:40:41.288705 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:40:41.288770 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain E1005 11:40:41.922355 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 11:40:41.922416 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:42:31.708980 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting OutOfDisk=Unknown I1005 11:42:31.763324 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:42:36.709573 1 node_controller.go:1096] No nodes available for updates I1005 11:42:37.245877 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:42:42.246168 1 node_controller.go:1096] No nodes available for updates I1005 11:42:42.279123 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False I1005 11:42:42.303253 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:42:42.366040 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:42:47.165144 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:42:47.183778 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:42:47.279915 1 node_controller.go:1096] No nodes 
available for updates I1005 11:43:01.854125 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable I1005 11:43:01.885859 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:43:02.219256 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:43:06.855629 1 node_controller.go:1096] No nodes available for updates I1005 11:43:14.232208 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning I1005 11:43:14.232227 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 11:43:14.249088 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 11:43:14.249103 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:43:14.428262 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:43:19.246208 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-66309e062b0f26aeefa1af0c4c330426 I1005 11:43:19.428606 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-66309e062b0f26aeefa1af0c4c330426 I1005 11:43:24.472589 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-66309e062b0f26aeefa1af0c4c330426", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-66309e062b0f26aeefa1af0c4c330426 (was: is: quay.io/mcoqe/layering@sha256:71d824675db3a7783d79edfe78e9bf1a18df33baf687db60d765bb149e623234) I1005 11:44:23.157580 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change I1005 11:44:51.232865 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:44:56.269479 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:44:56.286254 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1 I1005 11:44:56.286449 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:44:56.286411 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:44:56.311579 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"121106", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:44:56.312850 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: 
changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f E1005 11:44:56.379181 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:44:56.379200 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again I1005 11:44:58.170842 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 11:45:01.269748 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning I1005 11:45:01.269800 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 11:45:01.294765 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:45:01.294775 1 node_controller.go:1096] No nodes available for updates I1005 11:45:01.295385 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:45:01.295529 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain I1005 11:45:01.331006 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints E1005 11:45:01.965362 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz I1005 11:45:01.967770 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-vp7hl I1005 11:45:01.967837 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-gbxbq I1005 11:45:01.967847 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1 I1005 11:45:01.967773 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-qpz4v I1005 11:45:01.967782 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-dlnmc I1005 11:45:01.967789 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-6wzlv I1005 11:45:01.968011 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-bq6cl I1005 11:45:01.967795 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1 I1005 11:45:01.967802 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-9pq9f I1005 11:45:01.967808 1 drain_controller.go:144] evicting pod 
openshift-monitoring/monitoring-plugin-6d8f5944dc-kzdjm I1005 11:45:01.967818 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-g7k6v I1005 11:45:01.967825 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-wtjhz I1005 11:45:03.045755 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-vp7hl I1005 11:45:03.405678 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-kzdjm I1005 11:45:04.198269 1 request.go:696] Waited for 1.141377955s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/openshift-state-metrics-547dffdc-qpz4v I1005 11:45:04.200452 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-qpz4v I1005 11:45:04.406742 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1 I1005 11:45:04.602599 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-g7k6v I1005 11:45:04.803371 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-wtjhz I1005 11:45:05.198375 1 request.go:696] Waited for 1.148710226s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-578654767d-gbxbq I1005 11:45:05.201508 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-gbxbq I1005 11:45:05.601246 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-bq6cl I1005 11:45:05.809395 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1 I1005 11:45:06.004979 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-9pq9f I1005 11:45:06.199228 1 request.go:696] Waited for 1.156837012s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-ingress/pods/router-default-d49dc89bd-6wzlv I1005 11:45:06.295293 1 node_controller.go:1096] No nodes available for updates I1005 11:45:29.053939 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-dlnmc I1005 11:45:52.044192 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-6wzlv I1005 11:45:52.044274 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:47:59.919729 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False I1005 11:47:59.967087 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: 
changed taints I1005 11:48:02.403798 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:48:04.920511 1 node_controller.go:1096] No nodes available for updates I1005 11:48:10.001300 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable I1005 11:48:10.036780 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:48:12.314199 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:48:15.001626 1 node_controller.go:1096] No nodes available for updates I1005 11:48:20.537234 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning I1005 11:48:20.537250 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false) I1005 11:48:20.562758 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true) I1005 11:48:20.562823 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation I1005 11:48:20.578402 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints I1005 11:48:25.555163 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:48:25.590615 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1 I1005 11:48:25.590715 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:48:25.632508 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:48:25.632966 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"121173", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f I1005 11:48:27.448924 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working I1005 11:48:30.632105 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning I1005 11:48:30.632195 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true) I1005 11:48:30.670364 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints I1005 11:48:30.670919 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false) I1005 11:48:30.670937 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain I1005 11:48:30.671023 
1 node_controller.go:1096] No nodes available for updates I1005 11:48:30.838664 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints E1005 11:48:31.324785 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl I1005 11:48:31.326871 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:48:31.326880 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-t2jrk I1005 11:48:31.326920 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-sjmds I1005 11:48:31.326975 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-v9nqk I1005 11:48:31.327027 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-26rm6 I1005 11:48:31.327088 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-n7f6p I1005 11:48:31.327137 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-m9gnc I1005 11:48:31.326871 1 drain_controller.go:144] evicting pod openshift-operator-lifecycle-manager/collect-profiles-28275105-fq44l I1005 11:48:31.327253 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 11:48:31.327366 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-gf8pj I1005 11:48:31.327498 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-p8k49 I1005 11:48:31.327606 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-jqb7g I1005 11:48:31.327655 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-hbpcm E1005 11:48:31.334900 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:48:31.335842 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:48:31.336472 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-26rm6" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. E1005 11:48:31.342179 1 drain_controller.go:144] error when evicting pods/"prometheus-adapter-5f5bfcdcb5-m9gnc" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
I1005 11:48:31.375854 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-operator-lifecycle-manager/collect-profiles-28275105-fq44l I1005 11:48:32.757491 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-hbpcm I1005 11:48:33.378820 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-gf8pj I1005 11:48:33.570555 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-sjmds I1005 11:48:33.765650 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-p8k49 I1005 11:48:33.967597 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-jqb7g I1005 11:48:34.166029 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-t2jrk I1005 11:48:34.374989 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-v9nqk I1005 11:48:35.671082 1 node_controller.go:1096] No nodes available for updates I1005 11:48:36.335757 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0 I1005 11:48:36.335894 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 I1005 11:48:36.336801 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-26rm6 E1005 11:48:36.341986 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-26rm6" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:48:36.342402 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-m9gnc E1005 11:48:36.342459 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:48:38.378780 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0 I1005 11:48:39.388495 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-5f5bfcdcb5-m9gnc I1005 11:48:41.342062 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-26rm6 I1005 11:48:41.343121 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 11:48:41.348819 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. I1005 11:48:46.349475 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0 E1005 11:48:46.356582 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget. 
I1005 11:48:51.357292 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 11:48:54.395301 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0
I1005 11:49:08.376848 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-26rm6
I1005 11:49:18.378970 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-n7f6p
I1005 11:49:18.379003 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 11:51:02.237637 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False
I1005 11:51:02.277442 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 11:51:02.397961 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 11:51:07.238638 1 node_controller.go:1096] No nodes available for updates
I1005 11:51:21.232612 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable
I1005 11:51:21.275167 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 11:51:22.361315 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 11:51:26.233295 1 node_controller.go:1096] No nodes available for updates
I1005 11:51:32.569131 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning
I1005 11:51:32.569150 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false)
I1005 11:51:32.597282 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 11:51:32.597369 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 11:51:32.621739 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 11:51:37.581647 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 11:51:37.622801 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
E1005 11:54:18.358915 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 11:54:18.358935 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
E1005 11:54:18.738295 1 render_controller.go:460] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I1005 11:54:18.738385 1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
W1005 11:54:22.034413 1 warnings.go:70] unknown field "spec.dns.spec.platform"
W1005 11:54:22.088108 1 warnings.go:70] unknown field "spec.dns.spec.platform"
W1005 11:54:22.125877 1 warnings.go:70] unknown field "spec.dns.spec.platform"
W1005 11:54:23.233159 1 warnings.go:70] unknown field "spec.dns.spec.platform"
W1005 11:54:29.250913 1 warnings.go:70] unknown field "spec.dns.spec.platform"
W1005 11:54:29.295360 1 warnings.go:70] unknown field "spec.dns.spec.platform"
W1005 11:54:29.324855 1 warnings.go:70] unknown field "spec.dns.spec.platform"
W1005 11:54:29.397067 1 warnings.go:70] unknown field "spec.dns.spec.platform"
W1005 11:54:31.432923 1 warnings.go:70] unknown field "spec.dns.spec.platform"
E1005 11:54:54.919415 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 11:54:54.919445 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
E1005 11:54:55.118384 1 render_controller.go:460] Error updating MachineConfigPool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I1005 11:54:55.118398 1 render_controller.go:377] Error syncing machineconfigpool master: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "master": the object has been modified; please apply your changes to the latest version and try again
I1005 11:59:07.462895 1 render_controller.go:510] Generated machineconfig rendered-worker-6215d68f0c95e43c8bafe78e889d4a61 from 8 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 } {MachineConfig change-worker-kernel-selinux-idlz5h28 machineconfiguration.openshift.io/v1 }]
I1005 11:59:07.463517 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"127521", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-6215d68f0c95e43c8bafe78e889d4a61 successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f)
I1005 11:59:07.471935 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-6215d68f0c95e43c8bafe78e889d4a61
I1005 11:59:12.501830 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 11:59:12.519613 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1
I1005 11:59:12.519687 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-6215d68f0c95e43c8bafe78e889d4a61
I1005 11:59:12.520280 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 11:59:12.551599 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"129074", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-6215d68f0c95e43c8bafe78e889d4a61
I1005 11:59:12.558169 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6215d68f0c95e43c8bafe78e889d4a61
E1005 11:59:12.604709 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 11:59:12.604794 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 11:59:13.889956 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working
I1005 11:59:17.502570 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning
I1005 11:59:17.502653 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true)
I1005 11:59:17.524286 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 11:59:17.524362 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain
I1005 11:59:17.534514 1 node_controller.go:1096] No nodes available for updates
I1005 11:59:17.534670 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 11:59:17.715529 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
E1005 11:59:18.186900 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz
I1005 11:59:18.189118 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-6p5x8
I1005 11:59:18.189127 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-xchmr
I1005 11:59:18.189137 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-n8xhj
I1005 11:59:18.189190 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1
I1005 11:59:18.189127 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-2h5qs
I1005 11:59:18.189245 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-6m7sx
I1005 11:59:18.189236 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1
I1005 11:59:18.189313 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-q5dwp
I1005 11:59:18.189326 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-dvmnk
I1005 11:59:18.189339 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-mfkxs
I1005 11:59:18.189346 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-gjh4v
I1005 11:59:18.189348 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-849mf
I1005 11:59:20.032684 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-q5dwp
I1005 11:59:20.429497 1 request.go:696] Waited for 1.154930993s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-1
I1005 11:59:20.835655 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-gjh4v
I1005 11:59:21.034366 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-849mf
I1005 11:59:21.233727 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-6p5x8
I1005 11:59:21.429557 1 request.go:696] Waited for 1.172072539s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/openshift-state-metrics-547dffdc-6m7sx
I1005 11:59:21.433047 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-6m7sx
I1005 11:59:22.031823 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-xchmr
I1005 11:59:22.233201 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-mfkxs
I1005 11:59:22.432114 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-699c557b9-dvmnk
I1005 11:59:22.535493 1 node_controller.go:1096] No nodes available for updates
I1005 11:59:22.628829 1 request.go:696] Waited for 1.354185798s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-1
I1005 11:59:22.634098 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1
I1005 11:59:22.833738 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1
I1005 11:59:45.272454 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-n8xhj
I1005 12:00:08.265231 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-2h5qs
I1005 12:00:08.265263 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:00:08.282058 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning
I1005 12:00:08.290581 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: false)
I1005 12:00:08.299391 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:00:08.299492 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain
E1005 12:00:08.927387 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz
I1005 12:00:08.927444 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:00:57.480822 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown
I1005 12:00:57.511775 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:01:02.481524 1 node_controller.go:1096] No nodes available for updates
I1005 12:01:02.997100 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:01:07.997633 1 node_controller.go:1096] No nodes available for updates
I1005 12:01:12.152530 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False
I1005 12:01:12.180899 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:01:12.224353 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:01:12.986482 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:01:13.062323 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:01:17.154939 1 node_controller.go:1096] No nodes available for updates
I1005 12:01:20.583049 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable
I1005 12:01:20.630062 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:01:23.215332 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:01:25.584072 1 node_controller.go:1096] No nodes available for updates
I1005 12:01:33.529072 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning
I1005 12:01:33.529097 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false)
I1005 12:01:33.546620 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 12:01:33.546703 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:01:33.576924 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:01:38.552288 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-6215d68f0c95e43c8bafe78e889d4a61
I1005 12:01:38.578264 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1
I1005 12:01:38.578348 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-6215d68f0c95e43c8bafe78e889d4a61
I1005 12:01:38.593369 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6215d68f0c95e43c8bafe78e889d4a61
I1005 12:01:38.595534 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"129146", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-6215d68f0c95e43c8bafe78e889d4a61
I1005 12:01:39.978703 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working
I1005 12:01:40.575695 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning
I1005 12:01:40.575737 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true)
I1005 12:01:40.593714 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:01:40.593813 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain
I1005 12:01:40.612255 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
E1005 12:01:41.249854 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl
I1005 12:01:41.252151 1 drain_controller.go:144] evicting pod openshift-operator-lifecycle-manager/collect-profiles-28275120-gqlbs
I1005 12:01:41.252211 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-7nlcc
I1005 12:01:41.252238 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-kzqg8
I1005 12:01:41.252276 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-cl8vq
I1005 12:01:41.252369 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-nbjqc
I1005 12:01:41.252394 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-zz5dp
I1005 12:01:41.252250 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:01:41.252511 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
I1005 12:01:41.252533 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-5q6kg
I1005 12:01:41.252546 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-qfntd
I1005 12:01:41.252260 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-zjq76
I1005 12:01:41.252267 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-kn24f
I1005 12:01:41.252605 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-7zfnk
E1005 12:01:41.269665 1 drain_controller.go:144] error when evicting pods/"prometheus-adapter-699c557b9-zz5dp" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:01:41.276623 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:01:41.279203 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:01:41.336962 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-operator-lifecycle-manager/collect-profiles-28275120-gqlbs
E1005 12:01:41.464887 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-zjq76" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:01:42.335587 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-kzqg8
I1005 12:01:42.891689 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-7zfnk
I1005 12:01:43.330709 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-nbjqc
I1005 12:01:43.624055 1 node_controller.go:1096] No nodes available for updates
I1005 12:01:43.625984 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:01:43.705042 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-qfntd
I1005 12:01:44.309579 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-5q6kg
I1005 12:01:44.319780 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-cl8vq
I1005 12:01:44.501008 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-7nlcc
I1005 12:01:46.270540 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-zz5dp
I1005 12:01:46.277675 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
I1005 12:01:46.279815 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:01:46.283959 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:01:46.287862 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:01:46.465211 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-zjq76
E1005 12:01:46.472750 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-zjq76" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:01:48.299953 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-699c557b9-zz5dp
I1005 12:01:48.627207 1 node_controller.go:1096] No nodes available for updates
I1005 12:01:51.284903 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
I1005 12:01:51.288034 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:01:51.295304 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:01:51.473178 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-zjq76
E1005 12:01:51.483913 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-zjq76" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:01:53.325815 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0
I1005 12:01:56.295880 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:01:56.303358 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:01:56.484541 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-zjq76
I1005 12:02:01.304464 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:02:01.310387 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:02:06.311092 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:02:08.346520 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0
I1005 12:02:23.526204 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-zjq76
I1005 12:02:23.528325 1 node_controller.go:1096] No nodes available for updates
I1005 12:02:29.688174 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-kn24f
I1005 12:02:29.688207 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:03:08.240632 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting OutOfDisk=Unknown
I1005 12:03:08.279753 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:03:13.241647 1 node_controller.go:1096] No nodes available for updates
I1005 12:03:13.737998 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:03:18.738873 1 node_controller.go:1096] No nodes available for updates
I1005 12:03:29.304835 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False
I1005 12:03:29.319443 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:03:29.344891 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:03:33.713997 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:03:33.744408 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:03:34.305019 1 node_controller.go:1096] No nodes available for updates
I1005 12:03:38.115994 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable
I1005 12:03:38.148533 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:03:38.767414 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:03:43.116839 1 node_controller.go:1096] No nodes available for updates
I1005 12:03:49.991536 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning
I1005 12:03:49.991587 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false)
I1005 12:03:50.021630 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 12:03:50.021704 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:03:50.051100 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:03:55.010407 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-6215d68f0c95e43c8bafe78e889d4a61
I1005 12:03:55.052041 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-6215d68f0c95e43c8bafe78e889d4a61
I1005 12:04:14.249913 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:04:19.268766 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:04:19.279957 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1
I1005 12:04:19.279975 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:04:19.281576 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:04:19.383740 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:04:19.383788 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"132945", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
E1005 12:04:19.471703 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:04:19.471722 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:04:21.348246 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working
I1005 12:04:24.289092 1 node_controller.go:1096] No nodes available for updates
I1005 12:04:24.289251 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:04:26.349174 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning
I1005 12:04:26.349222 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true)
I1005 12:04:26.373105 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:04:26.374186 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain
I1005 12:04:26.404652 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
E1005 12:04:27.044146 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz
I1005 12:04:27.046552 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-92fjp
I1005 12:04:27.046553 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-cq9b5
I1005 12:04:27.046561 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1
I1005 12:04:27.046562 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-ns7n8
I1005 12:04:27.046567 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-4wr8m
I1005 12:04:27.046571 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-jv7n8
I1005 12:04:27.046574 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-75jzc
I1005 12:04:27.046579 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1
I1005 12:04:27.046580 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-gpwk9
I1005 12:04:27.046583 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-vq5sq
I1005 12:04:27.046590 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-l45f5
I1005 12:04:27.046596 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-4r9qf
I1005 12:04:28.098214 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-cq9b5
I1005 12:04:29.282533 1 request.go:696] Waited for 1.156667431s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/monitoring-plugin-6d8f5944dc-vq5sq
I1005 12:04:29.286545 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-vq5sq
I1005 12:04:29.290725 1 node_controller.go:1096] No nodes available for updates
I1005 12:04:29.685361 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-l45f5
I1005 12:04:29.886247 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-4r9qf
I1005 12:04:30.084054 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-699c557b9-4wr8m
I1005 12:04:30.285115 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-75jzc
I1005 12:04:30.481403 1 request.go:696] Waited for 1.360862195s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-1
I1005 12:04:30.486691 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1
I1005 12:04:30.686019 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-gpwk9
I1005 12:04:30.887815 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1
I1005 12:04:31.086795 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-jv7n8
I1005 12:04:54.128674 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-ns7n8
I1005 12:05:14.132308 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-92fjp
I1005 12:05:14.132340 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:05:14.147244 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning
I1005 12:05:14.147261 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: false)
I1005 12:05:14.151064 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:05:14.151073 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain
E1005 12:05:14.784831 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz
I1005 12:05:14.784868 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:05:58.802370 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown
I1005 12:05:58.871709 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:06:03.803438 1 node_controller.go:1096] No nodes available for updates
I1005 12:06:04.275156 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:06:09.276213 1 node_controller.go:1096] No nodes available for updates
I1005 12:06:18.742844 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False
I1005 12:06:18.770017 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:06:18.807915 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:06:19.274185 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:06:19.301957 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:06:23.743314 1 node_controller.go:1096] No nodes available for updates
I1005 12:06:27.878462 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable
I1005 12:06:27.910470 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:06:29.334441 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:06:32.879127 1 node_controller.go:1096] No nodes available for updates
I1005 12:06:39.344437 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning
I1005 12:06:39.344523 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false)
I1005 12:06:39.375569 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 12:06:39.375638 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:06:39.397108 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:06:44.370483 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:06:44.397970 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1
I1005 12:06:44.398040 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:06:44.419103 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:06:44.426643 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"133045", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:06:45.979053 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working
I1005 12:06:48.962815 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning
I1005 12:06:48.962841 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true)
I1005 12:06:48.981777 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:06:48.981861 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain
I1005 12:06:49.003180 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:06:49.440934 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:06:49.458445 1 node_controller.go:1096] No nodes available for updates
E1005 12:06:49.568343 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:06:49.568364 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
E1005 12:06:49.623354 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl
I1005 12:06:49.625143 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-6vdxg
I1005 12:06:49.625185 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-2rbp2
I1005 12:06:49.625144 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-rj654
I1005 12:06:49.625192 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-wxwrf
I1005 12:06:49.625149 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
I1005 12:06:49.625155 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-2xs4j
I1005 12:06:49.625164 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-zqp4n
I1005 12:06:49.625170 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-ln5p4
I1005 12:06:49.625158 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-v2kkk
I1005 12:06:49.625175 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-9ng8x
I1005 12:06:49.625215 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-wjkkl
I1005 12:06:49.625183 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:06:49.635890 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-2xs4j" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:06:49.656510 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:06:49.656669 1 drain_controller.go:144] error when evicting pods/"prometheus-adapter-699c557b9-ln5p4" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:06:49.835000 1 drain_controller.go:144] error when evicting pods/"thanos-querier-6c99b68589-wjkkl" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:06:50.032121 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:06:51.736988 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-6vdxg
I1005 12:06:51.737079 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-v2kkk
I1005 12:06:51.741887 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-wxwrf
I1005 12:06:52.680780 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-zqp4n
I1005 12:06:52.725822 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-rj654
I1005 12:06:52.741663 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-2rbp2
I1005 12:06:54.441772 1 node_controller.go:1096] No nodes available for updates
I1005 12:06:54.636321 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-2xs4j
E1005 12:06:54.645868 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-2xs4j" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:06:54.656921 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-ln5p4
I1005 12:06:54.656941 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
E1005 12:06:54.664413 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:06:54.835882 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-wjkkl
I1005 12:06:55.032281 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:06:55.037800 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:06:56.872721 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-wjkkl
I1005 12:06:57.692169 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-699c557b9-ln5p4
I1005 12:06:59.646551 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-2xs4j
E1005 12:06:59.653296 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-2xs4j" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:06:59.665351 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
I1005 12:07:00.039475 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:07:00.046127 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:07:02.704279 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0
I1005 12:07:04.654298 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-2xs4j
I1005 12:07:05.047244 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:07:05.056331 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:07:10.056483 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:07:10.066078 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:07:15.066995 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:07:17.110224 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0
I1005 12:07:31.686662 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-2xs4j
I1005 12:07:37.746112 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-9ng8x
I1005 12:07:37.746149 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:08:42.969446 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False
I1005 12:08:43.026151 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:08:44.436027 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:08:47.970630 1 node_controller.go:1096] No nodes available for updates
I1005 12:08:51.814633 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable
I1005 12:08:51.851737 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:08:54.357107 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:08:56.814886 1 node_controller.go:1096] No nodes available for updates
I1005 12:09:02.893074 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning
I1005 12:09:02.893090 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false)
I1005 12:09:02.914605 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 12:09:02.914677 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:09:02.939544 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:09:07.916835 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:09:07.940410 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:10:34.385926 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown
I1005 12:10:34.424849 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:10:39.387038 1 node_controller.go:1096] No nodes available for updates
I1005 12:10:40.405462 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:10:44.398683 1 node_controller.go:1096] No nodes available for updates
I1005 12:10:46.337062 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change
I1005 12:11:22.161743 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False
I1005 12:11:22.196024 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:11:22.225235 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:11:25.329286 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:11:25.348692 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:11:27.162844 1 node_controller.go:1096] No nodes available for updates
I1005 12:11:30.927799 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting ready
I1005 12:11:30.964092 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:11:35.367635 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:12:20.102185 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-151d08e153748960bef7ab372848795d", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-151d08e153748960bef7ab372848795d (was: is: quay.io/mcoqe/layering@sha256:c7da7781723035cfb9b671f828e890761bc71467099c63318bde8f67f93e2f3f)
I1005 12:12:20.131811 1 render_controller.go:510] Generated machineconfig rendered-worker-151d08e153748960bef7ab372848795d from 8 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 } {MachineConfig layering-mc-54159-3mq5xg2e machineconfiguration.openshift.io/v1 }]
I1005 12:12:20.132349 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"138459", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-151d08e153748960bef7ab372848795d successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f)
I1005 12:12:20.142142 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-151d08e153748960bef7ab372848795d
I1005 12:12:25.173293 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:12:25.218360 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:12:25.219024 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1
I1005 12:12:25.219070 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-151d08e153748960bef7ab372848795d
I1005 12:12:25.260110 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"139087", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-151d08e153748960bef7ab372848795d
I1005 12:12:25.261122 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-151d08e153748960bef7ab372848795d
I1005 12:12:25.466459 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-151d08e153748960bef7ab372848795d", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-151d08e153748960bef7ab372848795d (was: is: quay.io/mcoqe/layering@sha256:c7da7781723035cfb9b671f828e890761bc71467099c63318bde8f67f93e2f3f)
E1005 12:12:25.518400 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:12:25.518538 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:12:25.565332 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-151d08e153748960bef7ab372848795d", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-151d08e153748960bef7ab372848795d (was: is: quay.io/mcoqe/layering@sha256:c7da7781723035cfb9b671f828e890761bc71467099c63318bde8f67f93e2f3f)
I1005 12:12:26.504459 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working
I1005 12:12:28.256579 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning
I1005 12:12:28.256615 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true)
I1005 12:12:28.278196 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:12:28.278213 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain
I1005 12:12:28.295168 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
E1005 12:12:28.923063 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz
I1005 12:12:28.924666 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-zkzpz
I1005 12:12:28.924720 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-kt5xg
I1005 12:12:28.924732 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-62knn
I1005 12:12:28.924667 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-275g8
I1005 12:12:28.924722 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-r8xmc
I1005 12:12:28.924691 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-n962h
I1005 12:12:28.924695 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-dhf6r
I1005 12:12:28.924704 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1
I1005 12:12:28.924676 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1
I1005 12:12:28.924701 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-srdpw
I1005 12:12:28.924711 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-tz2s5
I1005 12:12:28.924712 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-kqd9f
I1005 12:12:30.188481 1 node_controller.go:1096] No nodes available for updates
I1005 12:12:30.189773 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:12:31.149568 1 request.go:696] Waited for 1.130441922s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/monitoring-plugin-6d8f5944dc-kt5xg
I1005 12:12:31.157832 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-kt5xg
I1005 12:12:31.752729 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-kqd9f
I1005 12:12:31.952476 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-275g8
I1005 12:12:32.149811 1 request.go:696] Waited for 1.177925363s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/openshift-state-metrics-547dffdc-r8xmc
I1005 12:12:32.154113 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-r8xmc
I1005 12:12:32.553383 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1
I1005 12:12:32.755229 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-srdpw
I1005 12:12:33.153650 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-n962h
I1005 12:12:33.348871 1 request.go:696] Waited for 2.357437523s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-1
I1005 12:12:33.353076 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1
I1005 12:12:33.553332 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-699c557b9-dhf6r
I1005 12:12:33.752142 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-tz2s5
I1005 12:12:35.190227 1 node_controller.go:1096] No nodes available for updates
I1005 12:12:35.238536 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-151d08e153748960bef7ab372848795d", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-151d08e153748960bef7ab372848795d (was: is: quay.io/mcoqe/layering@sha256:c7da7781723035cfb9b671f828e890761bc71467099c63318bde8f67f93e2f3f)
I1005 12:12:56.989669 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-62knn
I1005 12:13:16.976608 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-zkzpz
I1005 12:13:16.976709 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:15:20.432344 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown
I1005 12:15:20.464690 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:15:23.070399 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False
I1005 12:15:23.086476 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:15:23.113081 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:15:25.434587 1 node_controller.go:1096] No nodes available for updates
I1005 12:15:25.915524 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:15:30.916008 1 node_controller.go:1096] No nodes available for updates
I1005 12:15:41.784245 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable
I1005 12:15:41.824305 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:15:45.863361 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:15:46.785301 1 node_controller.go:1096] No nodes available for updates
I1005 12:15:54.759745 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning
I1005 12:15:54.759763 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false)
I1005 12:15:54.777187 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 12:15:54.777295 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:15:55.024029 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:15:59.795356 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-151d08e153748960bef7ab372848795d
I1005 12:16:00.025088 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1
I1005 12:16:00.025103 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-151d08e153748960bef7ab372848795d
I1005 12:16:00.041534 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"139359", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-151d08e153748960bef7ab372848795d
I1005 12:16:00.041716 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-151d08e153748960bef7ab372848795d
I1005 12:16:02.155862 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working
I1005 12:16:05.042098 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning
I1005 12:16:05.042191 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true)
I1005 12:16:05.060289 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:16:05.060309 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain
I1005 12:16:05.085977 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:16:05.086957 1 node_controller.go:1096] No nodes available for updates
I1005 12:16:05.147321 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-151d08e153748960bef7ab372848795d", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-151d08e153748960bef7ab372848795d (was: is: quay.io/mcoqe/layering@sha256:c7da7781723035cfb9b671f828e890761bc71467099c63318bde8f67f93e2f3f)
E1005 12:16:05.196744 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:16:05.196763 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:16:05.233522 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:16:05.265853 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-151d08e153748960bef7ab372848795d", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-151d08e153748960bef7ab372848795d (was: is: quay.io/mcoqe/layering@sha256:c7da7781723035cfb9b671f828e890761bc71467099c63318bde8f67f93e2f3f)
E1005 12:16:05.726821 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl
I1005 12:16:05.728560 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-t5gq2
I1005 12:16:05.728599 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-5txst
I1005 12:16:05.728560 1 drain_controller.go:144] evicting pod openshift-operator-lifecycle-manager/collect-profiles-28275135-g6qg9
I1005 12:16:05.728598 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-4qt5j
I1005 12:16:05.728571 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-fdmtp
I1005 12:16:05.728570 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
I1005 12:16:05.728579 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-d98qv
I1005 12:16:05.728570 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-hhqng
I1005 12:16:05.728580 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:16:05.728590 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-srsgx
I1005 12:16:05.728589 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-4fwds
I1005 12:16:05.728602 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-mqvzh
I1005 12:16:05.728591 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-chbcq
E1005 12:16:05.736963 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:16:05.738875 1 drain_controller.go:144] error when evicting pods/"prometheus-adapter-699c557b9-fdmtp" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:16:05.740438 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-hhqng" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:16:05.741723 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:16:05.788902 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-operator-lifecycle-manager/collect-profiles-28275135-g6qg9
I1005 12:16:06.806254 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-4qt5j
I1005 12:16:06.814899 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-d98qv
I1005 12:16:06.986659 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-4fwds
I1005 12:16:07.783342 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-5txst
I1005 12:16:07.783343 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-srsgx
I1005 12:16:08.175838 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-mqvzh
I1005 12:16:09.392246 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-chbcq
I1005 12:16:10.087099 1 node_controller.go:1096] No nodes available for updates
I1005 12:16:10.737853 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:16:10.738928 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-fdmtp
I1005 12:16:10.741057 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-hhqng
I1005 12:16:10.742131 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
E1005 12:16:10.743981 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:16:10.747244 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:16:10.748495 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-hhqng" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:16:12.765694 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-699c557b9-fdmtp
I1005 12:16:15.744938 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:16:15.748073 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
I1005 12:16:15.749141 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-hhqng
E1005 12:16:15.750970 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:16:16.807214 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0
I1005 12:16:20.751857 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:16:20.758952 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:16:25.759353 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:16:27.791337 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0
I1005 12:16:41.797541 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-hhqng
I1005 12:16:43.523875 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-151d08e153748960bef7ab372848795d", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-151d08e153748960bef7ab372848795d (was: is: quay.io/mcoqe/layering@sha256:c7da7781723035cfb9b671f828e890761bc71467099c63318bde8f67f93e2f3f)
I1005 12:16:52.795614 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-t5gq2
I1005 12:16:52.795647 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:16:52.809996 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning
I1005 12:16:52.810085 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: false)
I1005 12:16:52.823318 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:16:52.823398 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain
E1005 12:16:53.477510 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl
I1005 12:16:53.477532 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:18:35.905089 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting OutOfDisk=Unknown
I1005 12:18:35.946464 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:18:40.906155 1 node_controller.go:1096] No nodes available for updates
I1005 12:18:41.391588 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:18:46.393487 1 node_controller.go:1096] No nodes available for updates
I1005 12:18:51.341317 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False
I1005 12:18:51.365569 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:18:51.396754 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:18:56.342374 1 node_controller.go:1096] No nodes available for updates
I1005 12:18:56.343834 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:18:56.355813 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:19:00.106195 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable
I1005 12:19:00.106381 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:19:01.344907 1 node_controller.go:1096] No nodes available for updates
I1005 12:19:01.370350 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:19:06.370608 1 node_controller.go:1096] No nodes available for updates
I1005 12:19:13.152669 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning
I1005 12:19:13.152720 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false)
I1005 12:19:13.176158 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 12:19:13.176235 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:19:13.404391 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:19:18.178392 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-151d08e153748960bef7ab372848795d
I1005 12:19:18.405609 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-151d08e153748960bef7ab372848795d
I1005 12:19:23.469228 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfig", Namespace:"openshift-machine-config-operator", Name:"rendered-worker-151d08e153748960bef7ab372848795d", UID:"", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OSImageURLOverridden' OSImageURL was overridden via machineconfig in rendered-worker-151d08e153748960bef7ab372848795d (was: is: quay.io/mcoqe/layering@sha256:c7da7781723035cfb9b671f828e890761bc71467099c63318bde8f67f93e2f3f)
I1005 12:20:32.860966 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:20:37.916150 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:20:37.979104 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:20:37.981889 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1
I1005 12:20:37.981958 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:20:38.014395 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"143967", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:20:38.014481 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:20:39.634612 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working
I1005 12:20:42.917046 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning
I1005 12:20:42.917162 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true)
I1005 12:20:42.950389 1 node_controller.go:1096] No nodes available for updates
I1005 12:20:42.956588 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:20:42.972342 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:20:42.972403 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain
I1005 12:20:43.135768 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:20:43.149537 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
E1005 12:20:43.627573 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz
I1005 12:20:43.630658 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-chzl8
I1005 12:20:43.630838 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-kzcst
I1005 12:20:43.630947 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-5hgg2
I1005 12:20:43.631042 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1
I1005 12:20:43.631149 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-66v9s
I1005 12:20:43.631247 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-gz7z2
I1005 12:20:43.631331 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-hkfqv
I1005 12:20:43.631456 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-cf5s8
I1005 12:20:43.631555 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1
I1005 12:20:43.631667 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-prdzx
I1005 12:20:43.631763 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-lldkn
I1005 12:20:43.631779 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-dtnkg
I1005 12:20:44.700314 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-gz7z2
I1005 12:20:44.702217 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-hkfqv
I1005 12:20:45.077894 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-chzl8
I1005 12:20:45.476102 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-prdzx
I1005 12:20:45.874327 1 request.go:696] Waited for 1.160387627s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-adapter-699c557b9-cf5s8
I1005 12:20:45.877901 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-699c557b9-cf5s8
I1005 12:20:46.277063 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-lldkn
I1005 12:20:46.476877 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-dtnkg
I1005 12:20:46.879540 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1
I1005 12:20:47.073820 1 request.go:696] Waited for 1.368271639s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/kube-state-metrics-794c8bd776-66v9s
I1005 12:20:47.076777 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-66v9s
I1005 12:20:47.278474 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1
I1005 12:20:47.957565 1 node_controller.go:1096] No nodes available for updates
I1005 12:21:10.723245 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-kzcst
I1005 12:21:35.705633 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-5hgg2
I1005 12:21:35.705669 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:21:35.731751 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning
I1005 12:21:35.731766 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: false)
I1005 12:21:35.735974 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:21:35.735988 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain
E1005 12:21:36.405584 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz
I1005 12:21:36.405607 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:23:41.449231 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown
I1005 12:23:41.472123 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:23:43.129742 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False
I1005 12:23:43.160847 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:23:43.182336 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:23:46.449929 1 node_controller.go:1096] No nodes available for updates
I1005 12:23:47.018256 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:23:52.018554 1 node_controller.go:1096] No nodes available for updates
I1005 12:24:01.802809 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable
I1005 12:24:01.831814 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:24:01.932816 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:24:06.803324 1 node_controller.go:1096] No nodes available for updates
I1005 12:24:14.016491 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning
I1005 12:24:14.016508 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false)
I1005 12:24:14.032483 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 12:24:14.032542 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:24:14.286541 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:24:19.040492 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:24:19.287239 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1
I1005 12:24:19.287326 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:24:19.302277 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"144056", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:24:19.302348 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:24:20.254551 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working
I1005 12:24:23.451183 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning
I1005 12:24:23.451276 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true)
I1005 12:24:23.465949 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:24:23.466028 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain
I1005 12:24:23.496517 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
E1005 12:24:24.117151 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl
I1005 12:24:24.119059 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-m9zvt
I1005 12:24:24.119114 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-rt9bx
I1005 12:24:24.119081 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-6wqsn
I1005 12:24:24.119118 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
I1005 12:24:24.119088 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-xrzft
I1005 12:24:24.119094 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-lwcfv
I1005 12:24:24.119097 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-74hqc
I1005 12:24:24.119102 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:24:24.119106 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-dbp9g
I1005 12:24:24.119111 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-t2skn
I1005 12:24:24.119128 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-qc4fj
I1005 12:24:24.119147 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-2zqnx
E1005 12:24:24.128816 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:24:24.129913 1 drain_controller.go:144] error when evicting pods/"prometheus-adapter-699c557b9-xrzft" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:24:24.130054 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:24:24.137791 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-6wqsn" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:24:24.361901 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:24:24.369640 1 node_controller.go:1096] No nodes available for updates
E1005 12:24:24.370218 1 drain_controller.go:144] error when evicting pods/"thanos-querier-6c99b68589-qc4fj" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:24:24.505857 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:24:24.505939 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:24:25.194943 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-m9zvt
I1005 12:24:25.195706 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-t2skn
I1005 12:24:25.209747 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-74hqc
I1005 12:24:26.184579 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-rt9bx
I1005 12:24:26.185872 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-lwcfv
I1005 12:24:26.208673 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-dbp9g
I1005 12:24:29.130474 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-xrzft
I1005 12:24:29.130566 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
I1005 12:24:29.130578 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:24:29.137159 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:24:29.138107 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-6wqsn
E1005 12:24:29.138805 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:24:29.144186 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-6wqsn" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:24:29.362233 1 node_controller.go:1096] No nodes available for updates
I1005 12:24:29.371310 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-qc4fj
I1005 12:24:31.161535 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-699c557b9-xrzft
I1005 12:24:31.403753 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-qc4fj
I1005 12:24:34.138513 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
I1005 12:24:34.139317 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:24:34.144466 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-6wqsn
E1005 12:24:34.192365 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-6wqsn" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:24:34.192861 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:24:38.205856 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0
I1005 12:24:39.192471 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-6wqsn
I1005 12:24:39.193547 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:24:39.199364 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:24:44.200401 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:24:44.216981 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:24:49.217946 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:24:51.300700 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0
I1005 12:25:06.235494 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-6wqsn
I1005 12:25:11.561357 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-2zqnx
I1005 12:25:11.561389 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:26:29.866643 1 node_controller.go:1096] No nodes available for updates
I1005 12:26:51.986474 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting OutOfDisk=Unknown
I1005 12:26:52.002552 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:26:55.643617 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False
I1005 12:26:55.668890 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:26:55.685703 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:26:56.986833 1 node_controller.go:1096] No nodes available for updates
I1005 12:26:57.484384 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:27:02.484642 1 node_controller.go:1096] No nodes available for updates
I1005 12:27:14.753263 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable
I1005 12:27:14.799123 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:27:17.431484 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:27:19.753589 1 node_controller.go:1096] No nodes available for updates
I1005 12:27:26.404623 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning
I1005 12:27:26.404723 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false)
I1005 12:27:26.434631 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 12:27:26.434707 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:27:26.455849 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:27:31.421163 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:27:31.456907 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:31:42.492204 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown
I1005 12:31:42.540604 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:31:47.493479 1 node_controller.go:1096] No nodes available for updates
I1005 12:31:48.411011 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:31:52.504307 1 node_controller.go:1096] No nodes available for updates
I1005 12:32:31.401085 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False
I1005 12:32:31.445108 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:32:31.479248 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:32:33.339868 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:32:33.368548 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:32:36.401261 1 node_controller.go:1096] No nodes available for updates
I1005 12:32:39.837130 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting ready
I1005 12:32:39.874949 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:32:43.381969 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:33:49.189689 1 render_controller.go:510] Generated machineconfig rendered-worker-d152d5bd3f8b6d7d9b5beb0268e4f741 from 8 configs: [{MachineConfig 00-worker machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-container-runtime machineconfiguration.openshift.io/v1 } {MachineConfig 01-worker-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 97-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 98-worker-generated-kubelet machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-generated-registries machineconfiguration.openshift.io/v1 } {MachineConfig 99-worker-ssh machineconfiguration.openshift.io/v1 } {MachineConfig change-worker-extension-usbguard-70rwvr2l machineconfiguration.openshift.io/v1 }]
I1005 12:33:49.190276 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"151065", FieldPath:""}): type: 'Normal' reason: 'RenderedConfigGenerated' rendered-worker-d152d5bd3f8b6d7d9b5beb0268e4f741 successfully generated (release version: 4.14.0-0.ci.test-2023-10-05-080602-ci-ln-6f80fqk-latest, controller version: 8e2ca527ec2990eee93be55b61eaa6825451b17f)
I1005 12:33:49.205412 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-d152d5bd3f8b6d7d9b5beb0268e4f741
I1005 12:33:54.224256 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:33:54.257681 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1
I1005 12:33:54.257761 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-d152d5bd3f8b6d7d9b5beb0268e4f741
I1005 12:33:54.259203 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:33:54.306409 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-d152d5bd3f8b6d7d9b5beb0268e4f741
I1005 12:33:54.311183 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"151726", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-d152d5bd3f8b6d7d9b5beb0268e4f741
E1005 12:33:54.411497 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:33:54.411566 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:33:55.660084 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working
I1005 12:33:57.387380 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning
I1005 12:33:57.387476 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true)
I1005 12:33:57.416608 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:33:57.416701 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain
I1005 12:33:57.457843 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
E1005 12:33:58.151822 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz
I1005 12:33:58.153842 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-lw9cm
I1005 12:33:58.153885 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-n5nm6
I1005 12:33:58.153850 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-t95xv
I1005 12:33:58.153887 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-6dnw2
I1005 12:33:58.153851 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1
I1005 12:33:58.153862 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-kzfkj
I1005 12:33:58.153871 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1
I1005 12:33:58.153873 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-jhcgb
I1005 12:33:58.153878 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-c79f4
I1005 12:33:58.153894 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-tznm8
I1005 12:33:58.153903 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-cqhd9
I1005 12:33:58.153834 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-n5qv5
I1005 12:33:59.241220 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:33:59.245124 1 node_controller.go:1096] No nodes available for updates
I1005 12:33:59.395314 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-t95xv
I1005 12:34:00.196880 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-tznm8
I1005 12:34:00.393506 1 request.go:696] Waited for 1.173325836s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-network-diagnostics/pods/network-check-source-7ddd77864b-lw9cm
I1005 12:34:00.397928 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-lw9cm
I1005 12:34:00.597390 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1
I1005 12:34:01.194695 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-699c557b9-c79f4
I1005 12:34:01.396231 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-jhcgb
I1005 12:34:01.592385 1 request.go:696] Waited for 1.387543558s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/telemeter-client-578654767d-6dnw2
I1005 12:34:01.596581 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-6dnw2
I1005 12:34:01.796927 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-kzfkj
I1005 12:34:01.995810 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1
I1005 12:34:02.394591 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-cqhd9
I1005 12:34:04.241798 1 node_controller.go:1096] No nodes available for updates
I1005 12:34:25.221967 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-n5nm6
I1005 12:34:45.629663 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-n5qv5
I1005 12:34:45.629694 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:34:45.668828 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning
I1005 12:34:45.668847 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: false)
I1005 12:34:45.713089 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:34:45.713108 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain
E1005 12:34:46.360546 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz
I1005 12:34:46.360569 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:35:58.435235 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown
I1005 12:35:58.507737 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:36:03.435584 1 node_controller.go:1096] No nodes available for updates
I1005 12:36:03.975584 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:36:08.978376 1 node_controller.go:1096] No nodes available for updates
I1005 12:36:15.156203 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False
I1005 12:36:15.187532 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:36:15.348778 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:36:18.948007 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:36:18.964030 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:36:20.156565 1 node_controller.go:1096] No nodes available for updates
I1005 12:36:23.761949 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable
I1005 12:36:23.783517 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:36:23.974189 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:36:28.762623 1 node_controller.go:1096] No nodes available for updates
I1005 12:36:36.126396 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning
I1005 12:36:36.126412 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false)
I1005 12:36:36.154891 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 12:36:36.154977 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:36:36.392390 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:36:41.154469 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-d152d5bd3f8b6d7d9b5beb0268e4f741
I1005 12:36:41.392876 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1
I1005 12:36:41.392895 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-d152d5bd3f8b6d7d9b5beb0268e4f741
I1005 12:36:41.418481 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-d152d5bd3f8b6d7d9b5beb0268e4f741
I1005 12:36:41.418985 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"151758", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-d152d5bd3f8b6d7d9b5beb0268e4f741
I1005 12:36:43.481576 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working
I1005 12:36:46.442237 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:36:46.442557 1 node_controller.go:1096] No nodes available for updates
I1005 12:36:48.482561 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning
I1005 12:36:48.482629 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true)
I1005 12:36:48.503436 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:36:48.503515 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain
I1005 12:36:48.520888 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
E1005 12:36:49.153893 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl
I1005 12:36:49.155639 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-drlk6
I1005 12:36:49.155639 1 drain_controller.go:144] evicting pod openshift-operator-lifecycle-manager/collect-profiles-28275150-p9k4n
I1005 12:36:49.155648 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
I1005 12:36:49.155657 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-gqb4z
I1005 12:36:49.155665 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-qq69w
I1005 12:36:49.155669 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-gqb4r
I1005 12:36:49.155676 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-hnmc6
I1005 12:36:49.155684 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-pd76r
I1005 12:36:49.155683 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-zbzcq
I1005 12:36:49.155688 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-5jv5k
I1005 12:36:49.155691 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:36:49.155693 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-tr2cf
I1005 12:36:49.155696 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-bmqc9
E1005 12:36:49.163175 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:36:49.167823 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-zbzcq" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:36:49.196410 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-operator-lifecycle-manager/collect-profiles-28275150-p9k4n
E1005 12:36:49.387194 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:36:50.202247 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-qq69w
I1005 12:36:50.393013 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-hnmc6
I1005 12:36:50.793310 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-tr2cf
I1005 12:36:51.208851 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-pd76r
I1005 12:36:51.393944 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-gqb4r
I1005 12:36:51.443115 1 node_controller.go:1096] No nodes available for updates
I1005 12:36:51.592412 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-699c557b9-5jv5k
I1005 12:36:51.994496 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-gqb4z
I1005 12:36:52.193039 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-bmqc9
I1005 12:36:54.163497 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
I1005 12:36:54.168615 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-zbzcq
E1005 12:36:54.178996 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-zbzcq" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:36:54.387878 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:36:54.393585 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:36:56.210185 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0
I1005 12:36:59.179481 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-zbzcq
I1005 12:36:59.394170 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:36:59.402292 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:37:04.403158 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:37:04.408333 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:37:09.408907 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:37:09.516040 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change
I1005 12:37:11.440359 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0
I1005 12:37:25.211477 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-zbzcq
I1005 12:37:36.223448 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-drlk6
I1005 12:37:36.223481 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:37:36.247620 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning
I1005 12:37:36.247968 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: false)
I1005 12:37:36.257602 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:37:36.257666 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain
E1005 12:37:36.897496 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl
I1005 12:37:36.897520 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:39:12.931901 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False
I1005 12:39:12.992984 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:39:14.162281 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:39:17.932450 1 node_controller.go:1096] No nodes available for updates
I1005 12:39:23.029847 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable
I1005 12:39:23.069230 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:39:24.028992 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:39:28.030777 1 node_controller.go:1096] No nodes available for updates
I1005 12:39:33.778582 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning
I1005 12:39:33.778597 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false)
I1005 12:39:33.811538 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 12:39:33.811555 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:39:33.830336 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:39:38.808978 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-d152d5bd3f8b6d7d9b5beb0268e4f741
I1005 12:39:38.832898 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-d152d5bd3f8b6d7d9b5beb0268e4f741
I1005 12:39:55.899074 1 render_controller.go:536] Pool worker: now targeting: rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:40:00.923492 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:40:00.950175 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:40:00.950622 1 node_controller.go:483] Pool worker: 2 candidate nodes in 2 zones for update, capacity: 1
I1005 12:40:00.950637 1 node_controller.go:483] Pool worker: Setting node ip-10-0-4-193.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:40:00.976923 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:40:00.977269 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"155724", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-4-193.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
E1005 12:40:01.017666 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:40:01.017686 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:40:03.002369 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working
I1005 12:40:04.705029 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning
I1005 12:40:04.705071 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: true)
I1005 12:40:04.722627 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:40:04.722704 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain
I1005 12:40:04.749002 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
E1005 12:40:05.367480 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz
I1005 12:40:05.369200 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-kl6tm
I1005 12:40:05.369219 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-1
I1005 12:40:05.369228 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-1
I1005 12:40:05.369201 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-4dcfm
I1005 12:40:05.369320 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-zgxtv
I1005 12:40:05.369210 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-5qpdm
I1005 12:40:05.369372 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-jdrfd
I1005 12:40:05.369448 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-fp7d9
I1005 12:40:05.369456 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-qj2th
I1005 12:40:05.369505 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-t9fkj
I1005 12:40:05.369510 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-rsz6t
I1005 12:40:05.369564 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-qt78w
I1005 12:40:05.942856 1 node_controller.go:1096] No nodes available for updates
I1005 12:40:05.953487 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:40:06.420265 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-4dcfm
I1005 12:40:06.426030 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-fp7d9
I1005 12:40:06.600396 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-qj2th
I1005 12:40:07.597142 1 request.go:696] Waited for 1.144509428s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-image-registry/pods/image-registry-5b6f4c84f4-kl6tm
I1005 12:40:07.800442 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-t9fkj
I1005 12:40:07.999253 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-rsz6t
I1005 12:40:08.199643 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-qt78w
I1005 12:40:08.400684 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-jdrfd
I1005 12:40:08.597488 1 request.go:696] Waited for 1.163811347s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/api/v1/namespaces/openshift-monitoring/pods/prometheus-adapter-699c557b9-5qpdm
I1005 12:40:08.601143 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-699c557b9-5qpdm
I1005 12:40:08.801216 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-1
I1005 12:40:09.002098 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-1
I1005 12:40:10.954108 1 node_controller.go:1096] No nodes available for updates
I1005 12:40:32.455978 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-kl6tm
I1005 12:40:55.450870 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-zgxtv
I1005 12:40:55.450900 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:40:55.464180 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordoning
I1005 12:40:55.464257 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating cordon (currently schedulable: false)
I1005 12:40:55.469782 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:40:55.469851 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating drain
E1005 12:40:56.103348 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-z8jvl, openshift-cluster-node-tuning-operator/tuned-qsxms, openshift-dns/dns-default-n848h, openshift-dns/node-resolver-6nwlq, openshift-image-registry/node-ca-kkn8w, openshift-ingress-canary/ingress-canary-8zfsq, openshift-machine-config-operator/machine-config-daemon-xxms2, openshift-monitoring/node-exporter-s722g, openshift-multus/multus-257cm, openshift-multus/multus-additional-cni-plugins-m54cg, openshift-multus/network-metrics-daemon-5z2cz, openshift-network-diagnostics/network-check-target-ckk9g, openshift-sdn/sdn-rkdwz
I1005 12:40:56.103374 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:41:54.073348 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting OutOfDisk=Unknown
I1005 12:41:54.110922 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:41:59.073977 1 node_controller.go:1096] No nodes available for updates
I1005 12:41:59.793999 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:42:00.711638 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting NotReady=False
I1005 12:42:00.726959 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:42:00.746171 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:42:04.746939 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:42:04.766257 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:42:04.794717 1 node_controller.go:1096] No nodes available for updates
I1005 12:42:19.915590 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Reporting unready: node ip-10-0-4-193.us-east-2.compute.internal is reporting Unschedulable
I1005 12:42:19.959835 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:42:24.787078 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:42:24.916606 1 node_controller.go:1096] No nodes available for updates
I1005 12:42:31.890278 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordoning
I1005 12:42:31.890294 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: initiating uncordon (currently schedulable: false)
I1005 12:42:31.908782 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 12:42:31.908848 1 drain_controller.go:173] node ip-10-0-4-193.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:42:32.120111 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: changed taints
I1005 12:42:36.908178 1 node_controller.go:493] Pool worker[zone=us-east-2a]: node ip-10-0-4-193.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:42:37.120943 1 node_controller.go:483] Pool worker: 1 candidate nodes in 1 zones for update, capacity: 1
I1005 12:42:37.120963 1 node_controller.go:483] Pool worker: Setting node ip-10-0-49-13.us-east-2.compute.internal target to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:42:37.135122 1 event.go:298] Event(v1.ObjectReference{Kind:"MachineConfigPool", Namespace:"openshift-machine-config-operator", Name:"worker", UID:"3ef5c098-8530-488a-a6b7-a1eee4a01bbc", APIVersion:"machineconfiguration.openshift.io/v1", ResourceVersion:"155756", FieldPath:""}): type: 'Normal' reason: 'SetDesiredConfig' Targeted node ip-10-0-49-13.us-east-2.compute.internal to MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:42:37.137124 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/desiredConfig = rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:42:38.538217 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed annotation machineconfiguration.openshift.io/state = Working
I1005 12:42:42.137594 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning
I1005 12:42:42.137628 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: true)
I1005 12:42:42.160883 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:42:42.160899 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain
I1005 12:42:42.168194 1 node_controller.go:1096] No nodes available for updates
I1005 12:42:42.168525 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
E1005 12:42:42.275455 1 render_controller.go:460] Error updating MachineConfigPool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:42:42.275544 1 render_controller.go:377] Error syncing machineconfigpool worker: Operation cannot be fulfilled on machineconfigpools.machineconfiguration.openshift.io "worker": the object has been modified; please apply your changes to the latest version and try again
I1005 12:42:42.378592 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
E1005 12:42:42.828820 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl
I1005 12:42:42.831088 1 drain_controller.go:144] evicting pod openshift-network-diagnostics/network-check-source-7ddd77864b-ts6xr
I1005 12:42:42.831110 1 drain_controller.go:144] evicting pod openshift-ingress/router-default-d49dc89bd-nv5vb
I1005 12:42:42.831157 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-mwsfq
I1005 12:42:42.831188 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:42:42.831247 1 drain_controller.go:144] evicting pod openshift-monitoring/openshift-state-metrics-547dffdc-z4w2b
I1005 12:42:42.831281 1 drain_controller.go:144] evicting pod openshift-monitoring/monitoring-plugin-6d8f5944dc-56gwx
I1005 12:42:42.831322 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-qflr9
I1005 12:42:42.831337 1 drain_controller.go:144] evicting pod openshift-monitoring/telemeter-client-578654767d-xxgfn
I1005 12:42:42.831399 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-p2r4l
I1005 12:42:42.831400 1 drain_controller.go:144] evicting pod openshift-monitoring/thanos-querier-6c99b68589-h5xbf
I1005 12:42:42.831088 1 drain_controller.go:144] evicting pod openshift-monitoring/kube-state-metrics-794c8bd776-6c45j
I1005 12:42:42.831524 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
E1005 12:42:42.840594 1 drain_controller.go:144] error when evicting pods/"prometheus-adapter-699c557b9-qflr9" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:42:42.842621 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:42:42.855190 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-mwsfq" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
E1005 12:42:43.263344 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:42:43.909236 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-network-diagnostics/network-check-source-7ddd77864b-ts6xr
I1005 12:42:44.912513 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/openshift-state-metrics-547dffdc-z4w2b
I1005 12:42:44.932584 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/monitoring-plugin-6d8f5944dc-56gwx
I1005 12:42:45.070643 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-operator-admission-webhook-7c8dc7fcdb-p2r4l
I1005 12:42:45.272230 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/thanos-querier-6c99b68589-h5xbf
I1005 12:42:45.471181 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/kube-state-metrics-794c8bd776-6c45j
I1005 12:42:45.913161 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/telemeter-client-578654767d-xxgfn
I1005 12:42:47.169538 1 node_controller.go:1096] No nodes available for updates
I1005 12:42:47.841233 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-adapter-699c557b9-qflr9
I1005 12:42:47.843189 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:42:47.852695 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:42:47.856156 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-mwsfq
E1005 12:42:47.867768 1 drain_controller.go:144] error when evicting pods/"image-registry-5b6f4c84f4-mwsfq" -n "openshift-image-registry" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:42:48.263870 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
E1005 12:42:48.272024 1 drain_controller.go:144] error when evicting pods/"prometheus-k8s-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:42:49.891363 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-adapter-699c557b9-qflr9
I1005 12:42:52.853684 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:42:52.861038 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:42:52.868092 1 drain_controller.go:144] evicting pod openshift-image-registry/image-registry-5b6f4c84f4-mwsfq
I1005 12:42:53.272542 1 drain_controller.go:144] evicting pod openshift-monitoring/prometheus-k8s-0
I1005 12:42:55.319064 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/prometheus-k8s-0
I1005 12:42:57.861541 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
E1005 12:42:57.868133 1 drain_controller.go:144] error when evicting pods/"alertmanager-main-0" -n "openshift-monitoring" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
I1005 12:43:02.868905 1 drain_controller.go:144] evicting pod openshift-monitoring/alertmanager-main-0
I1005 12:43:04.905529 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-monitoring/alertmanager-main-0
I1005 12:43:19.900087 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-image-registry/image-registry-5b6f4c84f4-mwsfq
I1005 12:43:29.929832 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: Evicted pod openshift-ingress/router-default-d49dc89bd-nv5vb
I1005 12:43:29.929866 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:43:29.944882 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordoning
I1005 12:43:29.944996 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating cordon (currently schedulable: false)
I1005 12:43:29.951706 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: cordon succeeded (currently schedulable: false)
I1005 12:43:29.951722 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating drain
E1005 12:43:30.591396 1 drain_controller.go:144] WARNING: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-7xzfl, openshift-cluster-node-tuning-operator/tuned-jr9bt, openshift-dns/dns-default-xpn59, openshift-dns/node-resolver-l74gm, openshift-image-registry/node-ca-2kbdz, openshift-ingress-canary/ingress-canary-fmcfg, openshift-machine-config-operator/machine-config-daemon-7bqc6, openshift-monitoring/node-exporter-zrrm9, openshift-multus/multus-9sgpg, openshift-multus/multus-additional-cni-plugins-g4gcr, openshift-multus/network-metrics-daemon-5jqsl, openshift-network-diagnostics/network-check-target-chlt5, openshift-sdn/sdn-9qtxl
I1005 12:43:30.591435 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:44:29.812972 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting OutOfDisk=Unknown
I1005 12:44:29.848541 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:44:34.813532 1 node_controller.go:1096] No nodes available for updates
I1005 12:44:35.283274 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:44:35.730562 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting NotReady=False
I1005 12:44:35.753667 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:44:35.774632 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:44:40.250884 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:44:40.283882 1 node_controller.go:1096] No nodes available for updates
I1005 12:44:40.285788 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:44:45.286376 1 node_controller.go:1096] No nodes available for updates
I1005 12:44:54.681639 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Reporting unready: node ip-10-0-49-13.us-east-2.compute.internal is reporting Unschedulable
I1005 12:44:54.716001 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:44:55.293949 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:44:59.697311 1 node_controller.go:1096] No nodes available for updates
I1005 12:45:06.483241 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordoning
I1005 12:45:06.483258 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: initiating uncordon (currently schedulable: false)
I1005 12:45:06.509608 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: uncordon succeeded (currently schedulable: true)
I1005 12:45:06.509639 1 drain_controller.go:173] node ip-10-0-49-13.us-east-2.compute.internal: operation successful; applying completion annotation
I1005 12:45:06.524730 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: changed taints
I1005 12:45:11.502634 1 node_controller.go:493] Pool worker[zone=us-east-2b]: node ip-10-0-49-13.us-east-2.compute.internal: Completed update to rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 12:45:11.525786 1 status.go:109] Pool worker: All nodes are updated with MachineConfig rendered-worker-6bf803109332579a2637f8dc27f9f58f
I1005 13:03:32.695626 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change
I1005 13:29:55.875651 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change
I1005 13:56:19.054970 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change
I1005 14:22:42.234111 1 template_controller.go:134] Re-syncing ControllerConfig due to secret pull-secret change