NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE
openshift-kube-apiserver Normal TerminationGracefulTerminationFinished pod/kube-apiserver-ip-10-0-197-197.ec2.internal All pending requests processed
openshift-monitoring 42m Normal Scheduled pod/configure-alertmanager-operator-registry-ztskr Successfully assigned openshift-monitoring/configure-alertmanager-operator-registry-ztskr to ip-10-0-160-152.ec2.internal
openshift-monitoring 37m Normal Scheduled pod/configure-alertmanager-operator-registry-xvjrx Successfully assigned openshift-monitoring/configure-alertmanager-operator-registry-xvjrx to ip-10-0-160-152.ec2.internal
openshift-kube-apiserver Warning KubeAPIReadyz pod/kube-apiserver-ip-10-0-197-197.ec2.internal readyz=true
openshift-monitoring 42m Normal Scheduled pod/configure-alertmanager-operator-registry-w7zdk Successfully assigned openshift-monitoring/configure-alertmanager-operator-registry-w7zdk to ip-10-0-232-8.ec2.internal
openshift-monitoring 42m Normal Scheduled pod/configure-alertmanager-operator-7b9b57dbdd-xgqtw Successfully assigned openshift-monitoring/configure-alertmanager-operator-7b9b57dbdd-xgqtw to ip-10-0-232-8.ec2.internal
openshift-monitoring 37m Normal Scheduled pod/configure-alertmanager-operator-7b9b57dbdd-fjt5w Successfully assigned openshift-monitoring/configure-alertmanager-operator-7b9b57dbdd-fjt5w to ip-10-0-160-152.ec2.internal
openshift-kube-apiserver Normal HTTPServerStoppedListening pod/kube-apiserver-ip-10-0-197-197.ec2.internal HTTP Server has stopped listening
openshift-monitoring 37m Normal Scheduled pod/kube-state-metrics-7d7b86bb68-l675w Successfully assigned openshift-monitoring/kube-state-metrics-7d7b86bb68-l675w to ip-10-0-195-121.ec2.internal
openshift-kube-apiserver Normal InFlightRequestsDrained pod/kube-apiserver-ip-10-0-197-197.ec2.internal All non long-running request(s) in-flight have drained
openshift-kube-apiserver Normal AfterShutdownDelayDuration pod/kube-apiserver-ip-10-0-197-197.ec2.internal The minimal shutdown duration of 2m9s finished
openshift-monitoring 59m Normal Scheduled pod/cluster-monitoring-operator-78777bc588-rhggh Successfully assigned openshift-monitoring/cluster-monitoring-operator-78777bc588-rhggh to ip-10-0-197-197.ec2.internal
openshift-monitoring 62m Warning FailedScheduling pod/cluster-monitoring-operator-78777bc588-rhggh 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-monitoring 32m Normal Scheduled pod/cluster-monitoring-operator-78777bc588-fps2r Successfully assigned openshift-monitoring/cluster-monitoring-operator-78777bc588-fps2r to ip-10-0-239-132.ec2.internal
openshift-monitoring 31m Normal Scheduled pod/alertmanager-main-1 Successfully assigned openshift-monitoring/alertmanager-main-1 to ip-10-0-187-75.ec2.internal
openshift-monitoring 32m Warning FailedScheduling pod/alertmanager-main-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 32m Warning FailedScheduling pod/alertmanager-main-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 34m Warning FailedScheduling pod/alertmanager-main-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 34m Warning FailedScheduling pod/alertmanager-main-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 37m Normal Scheduled pod/alertmanager-main-1 Successfully assigned openshift-monitoring/alertmanager-main-1 to ip-10-0-187-75.ec2.internal
openshift-monitoring 39m Warning FailedScheduling pod/alertmanager-main-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 39m Warning FailedScheduling pod/alertmanager-main-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 39m Warning FailedScheduling pod/alertmanager-main-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 42m Warning FailedScheduling pod/alertmanager-main-1 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-monitoring 40m Normal Scheduled pod/node-exporter-4g9rl Successfully assigned openshift-monitoring/node-exporter-4g9rl to ip-10-0-187-75.ec2.internal
openshift-monitoring 53m Normal Scheduled pod/node-exporter-58wsk Successfully assigned openshift-monitoring/node-exporter-58wsk to ip-10-0-232-8.ec2.internal
openshift-validation-webhook 42m Normal Scheduled pod/validation-webhook-p4gz5 Successfully assigned openshift-validation-webhook/validation-webhook-p4gz5 to ip-10-0-197-197.ec2.internal
openshift-monitoring 42m Warning FailedScheduling pod/alertmanager-main-1 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-validation-webhook 42m Normal Scheduled pod/validation-webhook-j7r6j Successfully assigned openshift-validation-webhook/validation-webhook-j7r6j to ip-10-0-140-6.ec2.internal
openshift-monitoring 50m Normal Scheduled pod/alertmanager-main-1 Successfully assigned openshift-monitoring/alertmanager-main-1 to ip-10-0-160-152.ec2.internal
openshift-validation-webhook 42m Normal Scheduled pod/validation-webhook-dt8g2 Successfully assigned openshift-validation-webhook/validation-webhook-dt8g2 to ip-10-0-239-132.ec2.internal
openshift-user-workload-monitoring 37m Normal Scheduled pod/thanos-ruler-user-workload-1 Successfully assigned openshift-user-workload-monitoring/thanos-ruler-user-workload-1 to ip-10-0-160-152.ec2.internal
openshift-user-workload-monitoring 37m Warning FailedScheduling pod/thanos-ruler-user-workload-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-user-workload-monitoring 34m Normal Scheduled pod/thanos-ruler-user-workload-0 Successfully assigned openshift-user-workload-monitoring/thanos-ruler-user-workload-0 to ip-10-0-232-8.ec2.internal
openshift-user-workload-monitoring 36m Warning FailedScheduling pod/thanos-ruler-user-workload-0 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-user-workload-monitoring 37m Warning FailedScheduling pod/thanos-ruler-user-workload-0 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-user-workload-monitoring 37m Warning FailedScheduling pod/thanos-ruler-user-workload-0 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-user-workload-monitoring 37m Warning FailedScheduling pod/thanos-ruler-user-workload-0 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-user-workload-monitoring 37m Normal Scheduled pod/thanos-ruler-user-workload-0 Successfully assigned openshift-user-workload-monitoring/thanos-ruler-user-workload-0 to ip-10-0-232-8.ec2.internal
openshift-user-workload-monitoring 37m Normal Scheduled pod/prometheus-user-workload-1 Successfully assigned openshift-user-workload-monitoring/prometheus-user-workload-1 to ip-10-0-160-152.ec2.internal
openshift-user-workload-monitoring 37m Warning FailedScheduling pod/prometheus-user-workload-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-user-workload-monitoring 37m Warning FailedScheduling pod/prometheus-user-workload-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-user-workload-monitoring 34m Normal Scheduled pod/prometheus-user-workload-0 Successfully assigned openshift-user-workload-monitoring/prometheus-user-workload-0 to ip-10-0-232-8.ec2.internal
openshift-user-workload-monitoring 37m Warning FailedScheduling pod/prometheus-user-workload-0 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-user-workload-monitoring 37m Warning FailedScheduling pod/prometheus-user-workload-0 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-user-workload-monitoring 37m Normal Scheduled pod/prometheus-user-workload-0 Successfully assigned openshift-user-workload-monitoring/prometheus-user-workload-0 to ip-10-0-232-8.ec2.internal
openshift-user-workload-monitoring 37m Normal Scheduled pod/prometheus-operator-6cbc5c4f45-t95ht Successfully assigned openshift-user-workload-monitoring/prometheus-operator-6cbc5c4f45-t95ht to ip-10-0-239-132.ec2.internal
openshift-user-workload-monitoring 37m Normal Scheduled pod/prometheus-operator-6cbc5c4f45-dt4j5 Successfully assigned openshift-user-workload-monitoring/prometheus-operator-6cbc5c4f45-dt4j5 to ip-10-0-140-6.ec2.internal
openshift-sre-pruning 14m Normal Scheduled pod/deployments-pruner-27990060-vz8dp Successfully assigned openshift-sre-pruning/deployments-pruner-27990060-vz8dp to ip-10-0-187-75.ec2.internal
openshift-sre-pruning 14m Normal Scheduled pod/builds-pruner-27990060-2l29r Successfully assigned openshift-sre-pruning/builds-pruner-27990060-2l29r to ip-10-0-187-75.ec2.internal
openshift-monitoring 52m Normal Scheduled pod/alertmanager-main-1 Successfully assigned openshift-monitoring/alertmanager-main-1 to ip-10-0-160-152.ec2.internal
openshift-monitoring 26m Normal Scheduled pod/alertmanager-main-0 Successfully assigned openshift-monitoring/alertmanager-main-0 to ip-10-0-195-121.ec2.internal
openshift-monitoring 28m Warning FailedScheduling pod/alertmanager-main-0 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 28m Warning FailedScheduling pod/alertmanager-main-0 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 37m Normal Scheduled pod/alertmanager-main-0 Successfully assigned openshift-monitoring/alertmanager-main-0 to ip-10-0-195-121.ec2.internal
openshift-monitoring 49m Normal Scheduled pod/alertmanager-main-0 Successfully assigned openshift-monitoring/alertmanager-main-0 to ip-10-0-232-8.ec2.internal
openshift-monitoring 52m Normal Scheduled pod/alertmanager-main-0 Successfully assigned openshift-monitoring/alertmanager-main-0 to ip-10-0-232-8.ec2.internal
openshift-marketplace 12m Normal Scheduled pod/redhat-operators-wpqdp Successfully assigned openshift-marketplace/redhat-operators-wpqdp to ip-10-0-140-6.ec2.internal
openshift-marketplace 37m Normal Scheduled pod/redhat-operators-rwpx4 Successfully assigned openshift-marketplace/redhat-operators-rwpx4 to ip-10-0-140-6.ec2.internal
openshift-marketplace 23m Normal Scheduled pod/redhat-operators-rf7k9 Successfully assigned openshift-marketplace/redhat-operators-rf7k9 to ip-10-0-140-6.ec2.internal
openshift-marketplace 36m Normal Scheduled pod/redhat-operators-pcjm7 Successfully assigned openshift-marketplace/redhat-operators-pcjm7 to ip-10-0-239-132.ec2.internal
openshift-marketplace 47m Normal Scheduled pod/redhat-operators-lf7xl Successfully assigned openshift-marketplace/redhat-operators-lf7xl to ip-10-0-140-6.ec2.internal
openshift-marketplace 58m Normal Scheduled pod/redhat-operators-jzt5b Successfully assigned openshift-marketplace/redhat-operators-jzt5b to ip-10-0-140-6.ec2.internal
openshift-marketplace 65s Normal Scheduled pod/redhat-operators-5qn2j Successfully assigned openshift-marketplace/redhat-operators-5qn2j to ip-10-0-140-6.ec2.internal
openshift-marketplace 47m Normal Scheduled pod/redhat-marketplace-xhp6s Successfully assigned openshift-marketplace/redhat-marketplace-xhp6s to ip-10-0-140-6.ec2.internal
openshift-marketplace 36m Normal Scheduled pod/redhat-marketplace-vj67h Successfully assigned openshift-marketplace/redhat-marketplace-vj67h to ip-10-0-239-132.ec2.internal
openshift-marketplace 22m Normal Scheduled pod/redhat-marketplace-qxchz Successfully assigned openshift-marketplace/redhat-marketplace-qxchz to ip-10-0-140-6.ec2.internal
openshift-marketplace 36m Normal Scheduled pod/redhat-marketplace-p4zxh Successfully assigned openshift-marketplace/redhat-marketplace-p4zxh to ip-10-0-239-132.ec2.internal
openshift-marketplace 11m Normal Scheduled pod/redhat-marketplace-jg5qp Successfully assigned openshift-marketplace/redhat-marketplace-jg5qp to ip-10-0-140-6.ec2.internal
openshift-marketplace 58m Normal Scheduled pod/redhat-marketplace-crqrm Successfully assigned openshift-marketplace/redhat-marketplace-crqrm to ip-10-0-140-6.ec2.internal
openshift-marketplace 56s Normal Scheduled pod/redhat-marketplace-7d4zn Successfully assigned openshift-marketplace/redhat-marketplace-7d4zn to ip-10-0-140-6.ec2.internal
openshift-marketplace 14m Normal Scheduled pod/osd-patch-subscription-source-27990060-mm5c7 Successfully assigned openshift-marketplace/osd-patch-subscription-source-27990060-mm5c7 to ip-10-0-187-75.ec2.internal
openshift-kube-apiserver Normal TerminationPreShutdownHooksFinished pod/kube-apiserver-ip-10-0-197-197.ec2.internal All pre-shutdown hooks have been finished
openshift-kube-apiserver Normal ShutdownInitiated pod/kube-apiserver-ip-10-0-197-197.ec2.internal Received signal to terminate, becoming unready, but keeping serving
openshift-kube-apiserver Warning KubeAPIReadyz pod/kube-apiserver-ip-10-0-197-197.ec2.internal readyz=true
openshift-authentication-operator 59m Normal Scheduled pod/authentication-operator-dbb89644b-tbxcm Successfully assigned openshift-authentication-operator/authentication-operator-dbb89644b-tbxcm to ip-10-0-197-197.ec2.internal
openshift-authentication-operator 61m Warning FailedScheduling pod/authentication-operator-dbb89644b-tbxcm 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-kube-apiserver Warning KubeAPIReadyz pod/kube-apiserver-ip-10-0-140-6.ec2.internal readyz=true
openshift-etcd-operator 59m Normal Scheduled pod/etcd-operator-775754ddff-xjxrm Successfully assigned openshift-etcd-operator/etcd-operator-775754ddff-xjxrm to ip-10-0-197-197.ec2.internal
openshift-etcd-operator 62m Warning FailedScheduling pod/etcd-operator-775754ddff-xjxrm 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-marketplace 32m Normal Scheduled pod/marketplace-operator-554c77d6df-pn29n Successfully assigned openshift-marketplace/marketplace-operator-554c77d6df-pn29n to ip-10-0-239-132.ec2.internal
openshift-etcd-operator 32m Normal Scheduled pod/etcd-operator-775754ddff-tnxcn Successfully assigned openshift-etcd-operator/etcd-operator-775754ddff-tnxcn to ip-10-0-140-6.ec2.internal
openshift-monitoring 53m Normal Scheduled pod/node-exporter-cghbq Successfully assigned openshift-monitoring/node-exporter-cghbq to ip-10-0-140-6.ec2.internal
openshift-dns 55m Warning FailedScheduling pod/node-resolver-vfr6q running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "node-resolver-vfr6q": pod node-resolver-vfr6q is already assigned to node "ip-10-0-232-8.ec2.internal"
openshift-dns 55m Normal Scheduled pod/node-resolver-vfr6q Successfully assigned openshift-dns/node-resolver-vfr6q to ip-10-0-232-8.ec2.internal
openshift-dns 59m Normal Scheduled pod/node-resolver-t57dw Successfully assigned openshift-dns/node-resolver-t57dw to ip-10-0-197-197.ec2.internal
openshift-service-ca 36m Normal Scheduled pod/service-ca-57bb877df5-7tzmh Successfully assigned openshift-service-ca/service-ca-57bb877df5-7tzmh to ip-10-0-239-132.ec2.internal
openshift-monitoring 53m Normal Scheduled pod/node-exporter-g4hdx Successfully assigned openshift-monitoring/node-exporter-g4hdx to ip-10-0-160-152.ec2.internal
openshift-dns 40m Normal Scheduled pod/node-resolver-qqhl6 Successfully assigned openshift-dns/node-resolver-qqhl6 to ip-10-0-187-75.ec2.internal
openshift-marketplace 59m Normal Scheduled pod/marketplace-operator-554c77d6df-2q9k5 Successfully assigned openshift-marketplace/marketplace-operator-554c77d6df-2q9k5 to ip-10-0-197-197.ec2.internal
openshift-marketplace 61m Warning FailedScheduling pod/marketplace-operator-554c77d6df-2q9k5 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-monitoring 53m Normal Scheduled pod/node-exporter-jhj5d Successfully assigned openshift-monitoring/node-exporter-jhj5d to ip-10-0-239-132.ec2.internal
openshift-service-ca 59m Normal Scheduled pod/service-ca-57bb877df5-24vfr Successfully assigned openshift-service-ca/service-ca-57bb877df5-24vfr to ip-10-0-140-6.ec2.internal
openshift-marketplace 36m Normal Scheduled pod/community-operators-wgn28 Successfully assigned openshift-marketplace/community-operators-wgn28 to ip-10-0-239-132.ec2.internal
openshift-marketplace 22m Normal Scheduled pod/community-operators-q5p9v Successfully assigned openshift-marketplace/community-operators-q5p9v to ip-10-0-140-6.ec2.internal
openshift-marketplace 47m Normal Scheduled pod/community-operators-p676f Successfully assigned openshift-marketplace/community-operators-p676f to ip-10-0-140-6.ec2.internal
openshift-marketplace 37m Normal Scheduled pod/community-operators-kp7pr Successfully assigned openshift-marketplace/community-operators-kp7pr to ip-10-0-239-132.ec2.internal
openshift-dns 39m Normal Scheduled pod/node-resolver-njmd5 Successfully assigned openshift-dns/node-resolver-njmd5 to ip-10-0-195-121.ec2.internal
openshift-marketplace 10m Normal Scheduled pod/community-operators-gqgqn Successfully assigned openshift-marketplace/community-operators-gqgqn to ip-10-0-140-6.ec2.internal
openshift-marketplace 22s Normal Scheduled pod/community-operators-8hc4x Successfully assigned openshift-marketplace/community-operators-8hc4x to ip-10-0-140-6.ec2.internal
openshift-dns 59m Normal Scheduled pod/node-resolver-ndpz5 Successfully assigned openshift-dns/node-resolver-ndpz5 to ip-10-0-140-6.ec2.internal
openshift-monitoring 39m Normal Scheduled pod/node-exporter-sn6ks Successfully assigned openshift-monitoring/node-exporter-sn6ks to ip-10-0-195-121.ec2.internal
openshift-marketplace 58m Normal Scheduled pod/community-operators-7jr7c Successfully assigned openshift-marketplace/community-operators-7jr7c to ip-10-0-140-6.ec2.internal
openshift-marketplace 65s Normal Scheduled pod/certified-operators-wzhjt Successfully assigned openshift-marketplace/certified-operators-wzhjt to ip-10-0-140-6.ec2.internal
openshift-dns 55m Warning FailedScheduling pod/node-resolver-f7qjl running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "node-resolver-f7qjl": pod node-resolver-f7qjl is already assigned to node "ip-10-0-160-152.ec2.internal"
openshift-dns 55m Normal Scheduled pod/node-resolver-f7qjl Successfully assigned openshift-dns/node-resolver-f7qjl to ip-10-0-160-152.ec2.internal
openshift-monitoring 53m Normal Scheduled pod/node-exporter-ztvgk Successfully assigned openshift-monitoring/node-exporter-ztvgk to ip-10-0-197-197.ec2.internal
openshift-dns 59m Normal Scheduled pod/node-resolver-dqg6k Successfully assigned openshift-dns/node-resolver-dqg6k to ip-10-0-239-132.ec2.internal
openshift-marketplace 11m Normal Scheduled pod/certified-operators-sdz5q Successfully assigned openshift-marketplace/certified-operators-sdz5q to ip-10-0-140-6.ec2.internal
openshift-marketplace 22m Normal Scheduled pod/certified-operators-f6dr2 Successfully assigned openshift-marketplace/certified-operators-f6dr2 to ip-10-0-140-6.ec2.internal
openshift-marketplace 38m Normal Scheduled pod/certified-operators-dwz78 Successfully assigned openshift-marketplace/certified-operators-dwz78 to ip-10-0-140-6.ec2.internal
openshift-marketplace 36m Normal Scheduled pod/certified-operators-dplkw Successfully assigned openshift-marketplace/certified-operators-dplkw to ip-10-0-239-132.ec2.internal
openshift-dns 59m Normal Scheduled pod/dns-default-wnmv8 Successfully assigned openshift-dns/dns-default-wnmv8 to ip-10-0-140-6.ec2.internal
openshift-monitoring 53m Normal Scheduled pod/openshift-state-metrics-66f87c88bd-jg7dn Successfully assigned openshift-monitoring/openshift-state-metrics-66f87c88bd-jg7dn to ip-10-0-232-8.ec2.internal
openshift-marketplace 58m Normal Scheduled pod/certified-operators-77trp Successfully assigned openshift-marketplace/certified-operators-77trp to ip-10-0-140-6.ec2.internal
openshift-kube-apiserver Normal TerminationGracefulTerminationFinished pod/kube-apiserver-ip-10-0-140-6.ec2.internal All pending requests processed
openshift-marketplace 48m Normal Scheduled pod/certified-operators-5mh29 Successfully assigned openshift-marketplace/certified-operators-5mh29 to ip-10-0-140-6.ec2.internal
openshift-dns 59m Normal Scheduled pod/dns-default-vlp6d Successfully assigned openshift-dns/dns-default-vlp6d to ip-10-0-197-197.ec2.internal
openshift-machine-config-operator 59m Normal Scheduled pod/machine-config-server-9k88t Successfully assigned openshift-machine-config-operator/machine-config-server-9k88t to ip-10-0-140-6.ec2.internal
openshift-monitoring 31m Normal Scheduled pod/openshift-state-metrics-8757cbbb4-gqxjm Successfully assigned openshift-monitoring/openshift-state-metrics-8757cbbb4-gqxjm to ip-10-0-187-75.ec2.internal
openshift-kube-apiserver Normal HTTPServerStoppedListening pod/kube-apiserver-ip-10-0-140-6.ec2.internal HTTP Server has stopped listening
openshift-kube-apiserver Normal InFlightRequestsDrained pod/kube-apiserver-ip-10-0-140-6.ec2.internal All non long-running request(s) in-flight have drained
openshift-service-ca-operator 32m Normal Scheduled pod/service-ca-operator-7988896c96-9vpq6 Successfully assigned openshift-service-ca-operator/service-ca-operator-7988896c96-9vpq6 to ip-10-0-140-6.ec2.internal
openshift-dns 59m Normal Scheduled pod/dns-default-tnhzk Successfully assigned openshift-dns/dns-default-tnhzk to ip-10-0-239-132.ec2.internal
openshift-monitoring 34m Normal Scheduled pod/openshift-state-metrics-8757cbbb4-lk7sd Successfully assigned openshift-monitoring/openshift-state-metrics-8757cbbb4-lk7sd to ip-10-0-195-121.ec2.internal
openshift-monitoring 37m Normal Scheduled pod/openshift-state-metrics-8757cbbb4-whgf4 Successfully assigned openshift-monitoring/openshift-state-metrics-8757cbbb4-whgf4 to ip-10-0-187-75.ec2.internal
openshift-machine-config-operator 59m Normal Scheduled pod/machine-config-server-8rhkb Successfully assigned openshift-machine-config-operator/machine-config-server-8rhkb to ip-10-0-239-132.ec2.internal
openshift-monitoring 37m Normal Scheduled pod/osd-cluster-ready-pzbtd Successfully assigned openshift-monitoring/osd-cluster-ready-pzbtd to ip-10-0-160-152.ec2.internal
openshift-monitoring 42m Normal Scheduled pod/osd-cluster-ready-thb5j Successfully assigned openshift-monitoring/osd-cluster-ready-thb5j to ip-10-0-232-8.ec2.internal
openshift-service-ca-operator 59m Normal Scheduled pod/service-ca-operator-7988896c96-5q667 Successfully assigned openshift-service-ca-operator/service-ca-operator-7988896c96-5q667 to ip-10-0-197-197.ec2.internal
openshift-service-ca-operator 61m Warning FailedScheduling pod/service-ca-operator-7988896c96-5q667 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-security 42m Normal Scheduled pod/audit-exporter-vscxm Successfully assigned openshift-security/audit-exporter-vscxm to ip-10-0-197-197.ec2.internal
openshift-security 42m Normal Scheduled pod/audit-exporter-th592 Successfully assigned openshift-security/audit-exporter-th592 to ip-10-0-140-6.ec2.internal
openshift-security 42m Normal Scheduled pod/audit-exporter-7bwkj Successfully assigned openshift-security/audit-exporter-7bwkj to ip-10-0-239-132.ec2.internal
openshift-monitoring 28m Normal Scheduled pod/osd-rebalance-infra-nodes-27990045-mxscc Successfully assigned openshift-monitoring/osd-rebalance-infra-nodes-27990045-mxscc to ip-10-0-187-75.ec2.internal
openshift-monitoring 14m Normal Scheduled pod/osd-rebalance-infra-nodes-27990060-8r9xq Successfully assigned openshift-monitoring/osd-rebalance-infra-nodes-27990060-8r9xq to ip-10-0-187-75.ec2.internal
openshift-monitoring 42m Warning FailedScheduling pod/prometheus-adapter-5b77f96bd4-7lwwj 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-machine-config-operator 59m Normal Scheduled pod/machine-config-server-4bmnx Successfully assigned openshift-machine-config-operator/machine-config-server-4bmnx to ip-10-0-197-197.ec2.internal
openshift-dns 55m Normal Scheduled pod/dns-default-jf2vx Successfully assigned openshift-dns/dns-default-jf2vx to ip-10-0-160-152.ec2.internal
openshift-monitoring 42m Warning FailedScheduling pod/prometheus-adapter-5b77f96bd4-7lwwj 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-monitoring 39m Warning FailedScheduling pod/prometheus-adapter-5b77f96bd4-7lwwj 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 39m Warning FailedScheduling pod/prometheus-adapter-5b77f96bd4-7lwwj 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 39m Warning FailedScheduling pod/prometheus-adapter-5b77f96bd4-7lwwj 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 52m Normal Scheduled pod/prometheus-adapter-5b77f96bd4-lkn8s Successfully assigned openshift-monitoring/prometheus-adapter-5b77f96bd4-lkn8s to ip-10-0-160-152.ec2.internal
openshift-image-registry 32m Normal Scheduled pod/cluster-image-registry-operator-868788f8c6-9j6mj Successfully assigned openshift-image-registry/cluster-image-registry-operator-868788f8c6-9j6mj to ip-10-0-140-6.ec2.internal
openshift-image-registry 62m Warning FailedScheduling pod/cluster-image-registry-operator-868788f8c6-frhj8 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-image-registry 59m Normal Scheduled pod/cluster-image-registry-operator-868788f8c6-frhj8 Successfully assigned openshift-image-registry/cluster-image-registry-operator-868788f8c6-frhj8 to ip-10-0-197-197.ec2.internal
openshift-dns 55m Warning FailedScheduling pod/dns-default-f7bt7 running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "dns-default-f7bt7": pod dns-default-f7bt7 is already assigned to node "ip-10-0-232-8.ec2.internal"
openshift-dns 55m Normal Scheduled pod/dns-default-f7bt7 Successfully assigned openshift-dns/dns-default-f7bt7 to ip-10-0-232-8.ec2.internal
openshift-kube-apiserver Normal AfterShutdownDelayDuration pod/kube-apiserver-ip-10-0-140-6.ec2.internal The minimal shutdown duration of 2m9s finished
openshift-authentication-operator 32m Normal Scheduled pod/authentication-operator-dbb89644b-4b786 Successfully assigned openshift-authentication-operator/authentication-operator-dbb89644b-4b786 to ip-10-0-140-6.ec2.internal
openshift-kube-apiserver Normal TerminationPreShutdownHooksFinished pod/kube-apiserver-ip-10-0-140-6.ec2.internal All pre-shutdown hooks have been finished
openshift-monitoring 52m Normal Scheduled pod/prometheus-adapter-5b77f96bd4-vm8xp Successfully assigned openshift-monitoring/prometheus-adapter-5b77f96bd4-vm8xp to ip-10-0-232-8.ec2.internal
openshift-kube-apiserver Normal ShutdownInitiated pod/kube-apiserver-ip-10-0-197-197.ec2.internal Received signal to terminate, becoming unready, but keeping serving
openshift-dns-operator 32m Normal Scheduled pod/dns-operator-656b9bd9f9-rf9q6 Successfully assigned openshift-dns-operator/dns-operator-656b9bd9f9-rf9q6 to ip-10-0-239-132.ec2.internal
openshift-kube-apiserver Normal TerminationPreShutdownHooksFinished pod/kube-apiserver-ip-10-0-197-197.ec2.internal All pre-shutdown hooks have been finished
openshift-monitoring 31m Warning FailedScheduling pod/prometheus-adapter-8467ff79fd-cth85 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-route-controller-manager 49m Normal Scheduled pod/route-controller-manager-9b45479c5-q5nh8 Successfully assigned openshift-route-controller-manager/route-controller-manager-9b45479c5-q5nh8 to ip-10-0-140-6.ec2.internal
openshift-route-controller-manager 49m Warning FailedScheduling pod/route-controller-manager-9b45479c5-q5nh8 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-route-controller-manager 42m Warning FailedScheduling pod/route-controller-manager-9b45479c5-nfwk9 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-route-controller-manager 42m Warning FailedScheduling pod/route-controller-manager-9b45479c5-nfwk9 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-dns-operator 59m Normal Scheduled pod/dns-operator-656b9bd9f9-lb9ps Successfully assigned openshift-dns-operator/dns-operator-656b9bd9f9-lb9ps to ip-10-0-197-197.ec2.internal
openshift-dns-operator 61m Warning FailedScheduling pod/dns-operator-656b9bd9f9-lb9ps 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-monitoring 31m Warning FailedScheduling pod/prometheus-adapter-8467ff79fd-cth85 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 28m Warning FailedScheduling pod/prometheus-adapter-8467ff79fd-cth85 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-route-controller-manager 49m Normal Scheduled pod/route-controller-manager-9b45479c5-kkjqb Successfully assigned openshift-route-controller-manager/route-controller-manager-9b45479c5-kkjqb to ip-10-0-239-132.ec2.internal
openshift-monitoring 28m Warning FailedScheduling pod/prometheus-adapter-8467ff79fd-cth85 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 26m Normal Scheduled pod/prometheus-adapter-8467ff79fd-cth85 Successfully assigned openshift-monitoring/prometheus-adapter-8467ff79fd-cth85 to ip-10-0-195-121.ec2.internal
openshift-kube-apiserver Normal AfterShutdownDelayDuration pod/kube-apiserver-ip-10-0-197-197.ec2.internal The minimal shutdown duration of 2m9s finished
openshift-monitoring 53m Normal Scheduled pod/kube-state-metrics-55f6dbfb8b-phfp9 Successfully assigned openshift-monitoring/kube-state-metrics-55f6dbfb8b-phfp9 to ip-10-0-232-8.ec2.internal
openshift-route-controller-manager 49m Normal Scheduled pod/route-controller-manager-9b45479c5-69h2c Successfully assigned openshift-route-controller-manager/route-controller-manager-9b45479c5-69h2c to ip-10-0-197-197.ec2.internal
openshift-route-controller-manager 49m Warning FailedScheduling pod/route-controller-manager-9b45479c5-69h2c 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-monitoring 37m Normal Scheduled pod/prometheus-adapter-8467ff79fd-rl8p7 Successfully assigned openshift-monitoring/prometheus-adapter-8467ff79fd-rl8p7 to ip-10-0-187-75.ec2.internal
openshift-monitoring 37m Normal Scheduled pod/prometheus-adapter-8467ff79fd-szs4l Successfully assigned openshift-monitoring/prometheus-adapter-8467ff79fd-szs4l to ip-10-0-195-121.ec2.internal
openshift-monitoring 34m Warning FailedScheduling pod/prometheus-adapter-8467ff79fd-xg97t 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 34m Warning FailedScheduling pod/prometheus-adapter-8467ff79fd-xg97t 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 32m Warning FailedScheduling pod/prometheus-adapter-8467ff79fd-xg97t 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 32m Warning FailedScheduling pod/prometheus-adapter-8467ff79fd-xg97t 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-image-registry 14m Normal Scheduled pod/image-pruner-27990060-dqmfp Successfully assigned openshift-image-registry/image-pruner-27990060-dqmfp to ip-10-0-195-121.ec2.internal
openshift-monitoring 31m Normal Scheduled pod/prometheus-adapter-8467ff79fd-xg97t Successfully assigned openshift-monitoring/prometheus-adapter-8467ff79fd-xg97t to ip-10-0-187-75.ec2.internal
openshift-monitoring 52m Normal Scheduled pod/prometheus-k8s-0 Successfully assigned openshift-monitoring/prometheus-k8s-0 to ip-10-0-232-8.ec2.internal
openshift-kube-apiserver Normal InFlightRequestsDrained pod/kube-apiserver-ip-10-0-197-197.ec2.internal All non long-running request(s) in-flight have drained
openshift-kube-apiserver Normal HTTPServerStoppedListening pod/kube-apiserver-ip-10-0-197-197.ec2.internal HTTP Server has stopped listening
openshift-route-controller-manager 58m Normal Scheduled pod/route-controller-manager-7ff89c67c-8b8g2 Successfully assigned openshift-route-controller-manager/route-controller-manager-7ff89c67c-8b8g2 to ip-10-0-197-197.ec2.internal
openshift-route-controller-manager 58m Warning FailedScheduling pod/route-controller-manager-7ff89c67c-8b8g2 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-kube-apiserver Normal TerminationGracefulTerminationFinished pod/kube-apiserver-ip-10-0-197-197.ec2.internal All pending requests processed
openshift-image-registry 50m Normal Scheduled pod/image-registry-5588bdd7b4-4mffb Successfully assigned openshift-image-registry/image-registry-5588bdd7b4-4mffb to ip-10-0-160-152.ec2.internal
openshift-machine-config-operator 32m Normal Scheduled pod/machine-config-operator-7fd9cd8968-sbt2v Successfully assigned openshift-machine-config-operator/machine-config-operator-7fd9cd8968-sbt2v to ip-10-0-239-132.ec2.internal
openshift-image-registry 50m Normal Scheduled pod/image-registry-5588bdd7b4-m28sx Successfully assigned openshift-image-registry/image-registry-5588bdd7b4-m28sx to ip-10-0-232-8.ec2.internal
openshift-controller-manager 49m Normal Scheduled pod/controller-manager-c5c84d6f9-x72pp Successfully assigned openshift-controller-manager/controller-manager-c5c84d6f9-x72pp to ip-10-0-239-132.ec2.internal
openshift-route-controller-manager 57m Normal Scheduled pod/route-controller-manager-7ff89c67c-4622z Successfully assigned openshift-route-controller-manager/route-controller-manager-7ff89c67c-4622z to ip-10-0-140-6.ec2.internal
openshift-route-controller-manager 58m Warning FailedScheduling pod/route-controller-manager-7ff89c67c-4622z 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-controller-manager 49m Warning FailedScheduling pod/controller-manager-c5c84d6f9-x72pp 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-machine-config-operator 59m Normal Scheduled pod/machine-config-operator-7fd9cd8968-9vg57 Successfully assigned openshift-machine-config-operator/machine-config-operator-7fd9cd8968-9vg57 to ip-10-0-197-197.ec2.internal
openshift-image-registry 42m Warning FailedScheduling pod/image-registry-55b7d998b9-479fl 0/5 nodes are available: 1 node(s) didn't match pod topology spread constraints, 1 node(s) didn't satisfy existing pods anti-affinity rules, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 1 node(s) didn't match pod topology spread constraints, 1 node(s) didn't satisfy existing pods anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-controller-manager 49m Normal Scheduled pod/controller-manager-c5c84d6f9-wrj8l Successfully assigned openshift-controller-manager/controller-manager-c5c84d6f9-wrj8l to ip-10-0-140-6.ec2.internal
openshift-controller-manager 49m Warning FailedScheduling pod/controller-manager-c5c84d6f9-wrj8l 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-route-controller-manager 58m Normal Scheduled pod/route-controller-manager-7ff89c67c-2bq47 Successfully assigned openshift-route-controller-manager/route-controller-manager-7ff89c67c-2bq47 to ip-10-0-239-132.ec2.internal
openshift-route-controller-manager 58m Warning FailedScheduling pod/route-controller-manager-7ff89c67c-2bq47 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-kube-apiserver Normal ShutdownInitiated pod/kube-apiserver-ip-10-0-140-6.ec2.internal Received signal to terminate, becoming unready, but keeping serving
openshift-controller-manager 42m Warning FailedScheduling pod/controller-manager-c5c84d6f9-vpk76 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-controller-manager 42m Warning FailedScheduling pod/controller-manager-c5c84d6f9-vpk76 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-image-registry 42m Warning FailedScheduling pod/image-registry-55b7d998b9-479fl 0/5 nodes are available: 1 node(s) didn't match pod topology spread constraints, 1 node(s) didn't satisfy existing pods anti-affinity rules, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 1 node(s) didn't match pod topology spread constraints, 1 node(s) didn't satisfy existing pods anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-image-registry 42m Warning FailedScheduling pod/image-registry-55b7d998b9-479fl 0/5 nodes are available: 1 node(s) didn't match pod topology spread constraints, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 1 node(s) didn't match pod topology spread constraints, 4 Preemption is not helpful for scheduling..
openshift-image-registry 39m Warning FailedScheduling pod/image-registry-55b7d998b9-479fl 0/7 nodes are available: 1 node(s) didn't match pod topology spread constraints, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod topology spread constraints, 6 Preemption is not helpful for scheduling..
openshift-image-registry 39m Warning FailedScheduling pod/image-registry-55b7d998b9-479fl 0/7 nodes are available: 1 node(s) didn't match pod topology spread constraints, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod topology spread constraints, 6 Preemption is not helpful for scheduling..
openshift-image-registry 39m Normal Scheduled pod/image-registry-55b7d998b9-479fl Successfully assigned openshift-image-registry/image-registry-55b7d998b9-479fl to ip-10-0-195-121.ec2.internal
openshift-machine-config-operator 62m Warning FailedScheduling pod/machine-config-operator-7fd9cd8968-9vg57 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-controller-manager 37m Normal Scheduled pod/controller-manager-c5c84d6f9-tll5c Successfully assigned openshift-controller-manager/controller-manager-c5c84d6f9-tll5c to ip-10-0-239-132.ec2.internal
openshift-controller-manager 49m Normal Scheduled pod/controller-manager-c5c84d6f9-qxhsq Successfully assigned openshift-controller-manager/controller-manager-c5c84d6f9-qxhsq to ip-10-0-197-197.ec2.internal
openshift-controller-manager 59m Warning FailedScheduling pod/controller-manager-78f477fd5c-r8mcx 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-controller-manager 59m Warning FailedScheduling pod/controller-manager-78f477fd5c-r8mcx 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-controller-manager 58m Normal Scheduled pod/controller-manager-77cd478b57-z4w9g Successfully assigned openshift-controller-manager/controller-manager-77cd478b57-z4w9g to ip-10-0-239-132.ec2.internal
openshift-controller-manager 58m Warning FailedScheduling pod/controller-manager-77cd478b57-z4w9g 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-controller-manager 58m Normal Scheduled pod/controller-manager-77cd478b57-m9m5f Successfully assigned openshift-controller-manager/controller-manager-77cd478b57-m9m5f to ip-10-0-140-6.ec2.internal
openshift-controller-manager 58m Warning FailedScheduling pod/controller-manager-77cd478b57-m9m5f 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-controller-manager 58m Normal Scheduled pod/controller-manager-77cd478b57-4s2qm Successfully assigned openshift-controller-manager/controller-manager-77cd478b57-4s2qm to ip-10-0-197-197.ec2.internal
openshift-controller-manager 58m Warning FailedScheduling pod/controller-manager-77cd478b57-4s2qm 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-controller-manager 58m Warning FailedScheduling pod/controller-manager-77cd478b57-4s2qm 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-image-registry 42m Warning FailedScheduling pod/image-registry-55b7d998b9-4mbwh 0/5 nodes are available: 2 node(s) didn't satisfy existing pods anti-affinity rules, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 2 node(s) didn't satisfy existing pods anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-image-registry 42m Warning FailedScheduling pod/image-registry-55b7d998b9-4mbwh 0/5 nodes are available: 1 node(s) didn't match pod topology spread constraints, 1 node(s) didn't satisfy existing pods anti-affinity rules, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 1 node(s) didn't match pod topology spread constraints, 1 node(s) didn't satisfy existing pods anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-image-registry 42m Warning FailedScheduling pod/image-registry-55b7d998b9-4mbwh 0/5 nodes are available: 1 node(s) didn't match pod topology spread constraints, 1 node(s) didn't satisfy existing pods anti-affinity rules, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 1 node(s) didn't match pod topology spread constraints, 1 node(s) didn't satisfy existing pods anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-image-registry 42m Warning FailedScheduling pod/image-registry-55b7d998b9-4mbwh 0/5 nodes are available: 1 node(s) didn't match pod topology spread constraints, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 1 node(s) didn't match pod topology spread constraints, 4 Preemption is not helpful for scheduling..
openshift-controller-manager 57m Normal Scheduled pod/controller-manager-6fcd58c8dc-wdb9f Successfully assigned openshift-controller-manager/controller-manager-6fcd58c8dc-wdb9f to ip-10-0-140-6.ec2.internal
openshift-controller-manager 58m Warning FailedScheduling pod/controller-manager-6fcd58c8dc-wdb9f 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-controller-manager 58m Warning FailedScheduling pod/controller-manager-6fcd58c8dc-wdb9f 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-image-registry 39m Warning FailedScheduling pod/image-registry-55b7d998b9-4mbwh 0/7 nodes are available: 1 node(s) didn't match pod topology spread constraints, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod topology spread constraints, 6 Preemption is not helpful for scheduling..
openshift-controller-manager 57m Normal Scheduled pod/controller-manager-6fcd58c8dc-dnsjp Successfully assigned openshift-controller-manager/controller-manager-6fcd58c8dc-dnsjp to ip-10-0-239-132.ec2.internal
openshift-controller-manager 57m Warning FailedScheduling pod/controller-manager-6fcd58c8dc-dnsjp 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-image-registry 39m Warning FailedScheduling pod/image-registry-55b7d998b9-4mbwh 0/7 nodes are available: 1 node(s) didn't match pod topology spread constraints, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod topology spread constraints, 6 Preemption is not helpful for scheduling..
openshift-controller-manager 57m Normal Scheduled pod/controller-manager-6fcd58c8dc-6vtpl Successfully assigned openshift-controller-manager/controller-manager-6fcd58c8dc-6vtpl to ip-10-0-197-197.ec2.internal
openshift-controller-manager 57m Warning FailedScheduling pod/controller-manager-6fcd58c8dc-6vtpl 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-route-controller-manager 59m Normal Scheduled pod/route-controller-manager-7d7696bfd4-zpkmp Successfully assigned openshift-route-controller-manager/route-controller-manager-7d7696bfd4-zpkmp to ip-10-0-197-197.ec2.internal
openshift-image-registry 39m Normal Scheduled pod/image-registry-55b7d998b9-4mbwh Successfully assigned openshift-image-registry/image-registry-55b7d998b9-4mbwh to ip-10-0-187-75.ec2.internal
openshift-image-registry 34m Normal Scheduled pod/image-registry-55b7d998b9-pf4xh Successfully assigned openshift-image-registry/image-registry-55b7d998b9-pf4xh to ip-10-0-232-8.ec2.internal
openshift-controller-manager 28m Normal Scheduled pod/controller-manager-66b447958d-w97xv Successfully assigned openshift-controller-manager/controller-manager-66b447958d-w97xv to ip-10-0-197-197.ec2.internal
openshift-kube-apiserver Warning KubeAPIReadyz pod/kube-apiserver-ip-10-0-140-6.ec2.internal readyz=true
openshift-controller-manager 31m Warning FailedScheduling pod/controller-manager-66b447958d-w97xv 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-controller-manager 32m Warning FailedScheduling pod/controller-manager-66b447958d-w97xv 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-controller-manager 32m Warning FailedScheduling pod/controller-manager-66b447958d-w97xv 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-machine-config-operator 59m Normal Scheduled pod/machine-config-daemon-zlzm2 Successfully assigned openshift-machine-config-operator/machine-config-daemon-zlzm2 to ip-10-0-239-132.ec2.internal
openshift-image-registry 31m Normal Scheduled pod/image-registry-55b7d998b9-pq262 Successfully assigned openshift-image-registry/image-registry-55b7d998b9-pq262 to ip-10-0-187-75.ec2.internal
openshift-route-controller-manager 59m Normal Scheduled pod/route-controller-manager-7d7696bfd4-z2bjq Successfully assigned openshift-route-controller-manager/route-controller-manager-7d7696bfd4-z2bjq to ip-10-0-239-132.ec2.internal
openshift-machine-config-operator 55m Warning FailedScheduling pod/machine-config-daemon-w98lz running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "machine-config-daemon-w98lz": pod machine-config-daemon-w98lz is already assigned to node "ip-10-0-160-152.ec2.internal"
openshift-machine-config-operator 55m Normal Scheduled pod/machine-config-daemon-w98lz Successfully assigned openshift-machine-config-operator/machine-config-daemon-w98lz to ip-10-0-160-152.ec2.internal
openshift-controller-manager 32m Normal Scheduled pod/controller-manager-66b447958d-6mqfl Successfully assigned openshift-controller-manager/controller-manager-66b447958d-6mqfl to ip-10-0-140-6.ec2.internal
openshift-controller-manager 34m Warning FailedScheduling pod/controller-manager-66b447958d-6mqfl 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-controller-manager 36m Warning FailedScheduling pod/controller-manager-66b447958d-6mqfl 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-controller-manager 37m Warning FailedScheduling pod/controller-manager-66b447958d-6mqfl 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-controller-manager 37m Warning FailedScheduling pod/controller-manager-66b447958d-6mqfl 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-controller-manager 37m Warning FailedScheduling pod/controller-manager-66b447958d-6mqfl 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-controller-manager 39m Warning FailedScheduling pod/controller-manager-66b447958d-6mqfl 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-controller-manager 39m Warning FailedScheduling pod/controller-manager-66b447958d-6mqfl 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-controller-manager 39m Warning FailedScheduling pod/controller-manager-66b447958d-6mqfl 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-controller-manager 41m Warning FailedScheduling pod/controller-manager-66b447958d-6mqfl 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-controller-manager 41m Warning FailedScheduling pod/controller-manager-66b447958d-6mqfl 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-machine-config-operator 40m Normal Scheduled pod/machine-config-daemon-vlfmm Successfully assigned openshift-machine-config-operator/machine-config-daemon-vlfmm to ip-10-0-187-75.ec2.internal
openshift-machine-config-operator 39m Normal Scheduled pod/machine-config-daemon-tpglq Successfully assigned openshift-machine-config-operator/machine-config-daemon-tpglq to ip-10-0-195-121.ec2.internal
openshift-monitoring 49m Normal Scheduled pod/prometheus-k8s-0 Successfully assigned openshift-monitoring/prometheus-k8s-0 to ip-10-0-232-8.ec2.internal
openshift-controller-manager 32m Normal Scheduled pod/controller-manager-66b447958d-6gldq Successfully assigned openshift-controller-manager/controller-manager-66b447958d-6gldq to ip-10-0-239-132.ec2.internal
openshift-controller-manager 32m Warning FailedScheduling pod/controller-manager-66b447958d-6gldq 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-controller-manager 59m Normal Scheduled pod/controller-manager-64556d4c99-kxhn7 Successfully assigned openshift-controller-manager/controller-manager-64556d4c99-kxhn7 to ip-10-0-140-6.ec2.internal
openshift-controller-manager 59m Normal Scheduled pod/controller-manager-64556d4c99-8fw47 Successfully assigned openshift-controller-manager/controller-manager-64556d4c99-8fw47 to ip-10-0-239-132.ec2.internal
openshift-controller-manager 59m Normal Scheduled pod/controller-manager-64556d4c99-46tn2 Successfully assigned openshift-controller-manager/controller-manager-64556d4c99-46tn2 to ip-10-0-197-197.ec2.internal
openshift-controller-manager 58m Normal Scheduled pod/controller-manager-5ff6588dbb-fwcgz Successfully assigned openshift-controller-manager/controller-manager-5ff6588dbb-fwcgz to ip-10-0-140-6.ec2.internal
openshift-controller-manager 58m Warning FailedScheduling pod/controller-manager-5ff6588dbb-fwcgz 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-controller-manager 59m Warning FailedScheduling pod/controller-manager-5ff6588dbb-fwcgz 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-route-controller-manager 59m Normal Scheduled pod/route-controller-manager-7d7696bfd4-2tvnf Successfully assigned openshift-route-controller-manager/route-controller-manager-7d7696bfd4-2tvnf to ip-10-0-140-6.ec2.internal
openshift-controller-manager 59m Warning FailedScheduling pod/controller-manager-579956b947-ql6fs 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-controller-manager 59m Warning FailedScheduling pod/controller-manager-579956b947-ql6fs 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-image-registry 28m Normal Scheduled pod/image-registry-5bd87dfd7-4rptq Successfully assigned openshift-image-registry/image-registry-5bd87dfd7-4rptq to ip-10-0-160-152.ec2.internal
openshift-machine-config-operator 59m Normal Scheduled pod/machine-config-daemon-s6f62 Successfully assigned openshift-machine-config-operator/machine-config-daemon-s6f62 to ip-10-0-140-6.ec2.internal
openshift-image-registry 28m Normal Scheduled pod/image-registry-5bd87dfd7-vhs2b Successfully assigned openshift-image-registry/image-registry-5bd87dfd7-vhs2b to ip-10-0-187-75.ec2.internal
openshift-kube-apiserver Warning KubeAPIReadyz pod/kube-apiserver-ip-10-0-197-197.ec2.internal readyz=true
openshift-kube-apiserver Warning KubeAPIReadyz pod/kube-apiserver-ip-10-0-140-6.ec2.internal readyz=true
openshift-oauth-apiserver 55m Warning FailedScheduling pod/apiserver-9b9694fdc-sl5wc running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "apiserver-9b9694fdc-sl5wc": pod apiserver-9b9694fdc-sl5wc is already assigned to node "ip-10-0-140-6.ec2.internal"
openshift-machine-config-operator 59m Normal Scheduled pod/machine-config-daemon-ll5kq Successfully assigned openshift-machine-config-operator/machine-config-daemon-ll5kq to ip-10-0-197-197.ec2.internal
openshift-image-registry 40m Normal Scheduled pod/node-ca-5ldj8 Successfully assigned openshift-image-registry/node-ca-5ldj8 to ip-10-0-187-75.ec2.internal
openshift-machine-config-operator 55m Warning FailedScheduling pod/machine-config-daemon-drlvb running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "machine-config-daemon-drlvb": pod machine-config-daemon-drlvb is already assigned to node "ip-10-0-232-8.ec2.internal"
openshift-machine-config-operator 55m Normal Scheduled pod/machine-config-daemon-drlvb Successfully assigned openshift-machine-config-operator/machine-config-daemon-drlvb to ip-10-0-232-8.ec2.internal
openshift-controller-manager-operator 32m Normal Scheduled pod/openshift-controller-manager-operator-6548869cc5-xfpsm Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-6548869cc5-xfpsm to ip-10-0-239-132.ec2.internal
openshift-controller-manager-operator 59m Normal Scheduled pod/openshift-controller-manager-operator-6548869cc5-9kqx5 Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-6548869cc5-9kqx5 to ip-10-0-197-197.ec2.internal
openshift-controller-manager-operator 61m Warning FailedScheduling pod/openshift-controller-manager-operator-6548869cc5-9kqx5 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-machine-config-operator 37m Normal Scheduled pod/machine-config-controller-7f488c778d-vjl7t Successfully assigned openshift-machine-config-operator/machine-config-controller-7f488c778d-vjl7t to ip-10-0-239-132.ec2.internal
openshift-image-registry 50m Normal Scheduled pod/node-ca-92xvd Successfully assigned openshift-image-registry/node-ca-92xvd to ip-10-0-140-6.ec2.internal
openshift-monitoring 37m Normal Scheduled pod/prometheus-k8s-0 Successfully assigned openshift-monitoring/prometheus-k8s-0 to ip-10-0-187-75.ec2.internal
openshift-image-registry 50m Normal Scheduled pod/node-ca-bcbwn Successfully assigned openshift-image-registry/node-ca-bcbwn to ip-10-0-239-132.ec2.internal
openshift-monitoring 34m Warning FailedScheduling pod/prometheus-k8s-0 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 34m Warning FailedScheduling pod/prometheus-k8s-0 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 32m Warning FailedScheduling pod/prometheus-k8s-0 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 32m Warning FailedScheduling pod/prometheus-k8s-0 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 31m Normal Scheduled pod/prometheus-k8s-0 Successfully assigned openshift-monitoring/prometheus-k8s-0 to ip-10-0-187-75.ec2.internal
openshift-route-controller-manager 58m Normal Scheduled pod/route-controller-manager-795466d555-hwftm Successfully assigned openshift-route-controller-manager/route-controller-manager-795466d555-hwftm to ip-10-0-140-6.ec2.internal
openshift-route-controller-manager 58m Warning FailedScheduling pod/route-controller-manager-795466d555-hwftm 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-route-controller-manager 58m Warning FailedScheduling pod/route-controller-manager-795466d555-hwftm 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-monitoring 52m Normal Scheduled pod/prometheus-k8s-1 Successfully assigned openshift-monitoring/prometheus-k8s-1 to ip-10-0-160-152.ec2.internal
openshift-console 37m Normal Scheduled pod/downloads-fcdb597fd-vfqwm Successfully assigned openshift-console/downloads-fcdb597fd-vfqwm to ip-10-0-239-132.ec2.internal
openshift-machine-config-operator 59m Normal Scheduled pod/machine-config-controller-7f488c778d-fvfx4 Successfully assigned openshift-machine-config-operator/machine-config-controller-7f488c778d-fvfx4 to ip-10-0-239-132.ec2.internal
openshift-machine-config-operator 42m Normal Scheduled pod/machine-config-controller-7f488c778d-c8svb Successfully assigned openshift-machine-config-operator/machine-config-controller-7f488c778d-c8svb to ip-10-0-140-6.ec2.internal
openshift-image-registry 39m Normal Scheduled pod/node-ca-fg6h6 Successfully assigned openshift-image-registry/node-ca-fg6h6 to ip-10-0-195-121.ec2.internal
openshift-route-controller-manager 58m Normal Scheduled pod/route-controller-manager-795466d555-dxq7d Successfully assigned openshift-route-controller-manager/route-controller-manager-795466d555-dxq7d to ip-10-0-197-197.ec2.internal
openshift-console 42m Normal Scheduled pod/downloads-fcdb597fd-tr9zh Successfully assigned openshift-console/downloads-fcdb597fd-tr9zh to ip-10-0-140-6.ec2.internal
openshift-image-registry 50m Normal Scheduled pod/node-ca-rz7r5 Successfully assigned openshift-image-registry/node-ca-rz7r5 to ip-10-0-197-197.ec2.internal
openshift-console 37m Normal Scheduled pod/downloads-fcdb597fd-sbcw8 Successfully assigned openshift-console/downloads-fcdb597fd-sbcw8 to ip-10-0-160-152.ec2.internal
openshift-oauth-apiserver 55m Normal Scheduled pod/apiserver-9b9694fdc-sl5wc Successfully assigned openshift-oauth-apiserver/apiserver-9b9694fdc-sl5wc to ip-10-0-140-6.ec2.internal
openshift-oauth-apiserver 55m Warning FailedScheduling pod/apiserver-9b9694fdc-sl5wc 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-route-controller-manager 58m Normal Scheduled pod/route-controller-manager-795466d555-57pst Successfully assigned openshift-route-controller-manager/route-controller-manager-795466d555-57pst to ip-10-0-239-132.ec2.internal
openshift-route-controller-manager 58m Warning FailedScheduling pod/route-controller-manager-795466d555-57pst 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-kube-apiserver Normal ShutdownInitiated pod/kube-apiserver-ip-10-0-197-197.ec2.internal Received signal to terminate, becoming unready, but keeping serving
openshift-kube-apiserver Normal TerminationPreShutdownHooksFinished pod/kube-apiserver-ip-10-0-197-197.ec2.internal All pre-shutdown hooks have been finished
openshift-oauth-apiserver 55m Warning FailedScheduling pod/apiserver-9b9694fdc-sl5wc 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-oauth-apiserver 55m Warning FailedScheduling pod/apiserver-9b9694fdc-kb6ks running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "apiserver-9b9694fdc-kb6ks": pod apiserver-9b9694fdc-kb6ks is already assigned to node "ip-10-0-197-197.ec2.internal"
openshift-machine-api 32m Normal Scheduled pod/machine-api-operator-564474f8c6-nlqm9 Successfully assigned openshift-machine-api/machine-api-operator-564474f8c6-nlqm9 to ip-10-0-140-6.ec2.internal
openshift-image-registry 50m Normal Scheduled pod/node-ca-sfbnk Successfully assigned openshift-image-registry/node-ca-sfbnk to ip-10-0-232-8.ec2.internal
openshift-console 50m Normal Scheduled pod/downloads-fcdb597fd-qhkwv Successfully assigned openshift-console/downloads-fcdb597fd-qhkwv to ip-10-0-239-132.ec2.internal
openshift-image-registry 50m Normal Scheduled pod/node-ca-tvq4f Successfully assigned openshift-image-registry/node-ca-tvq4f to ip-10-0-160-152.ec2.internal
openshift-machine-api 59m Normal Scheduled pod/machine-api-operator-564474f8c6-284hs Successfully assigned openshift-machine-api/machine-api-operator-564474f8c6-284hs to ip-10-0-197-197.ec2.internal
openshift-console 42m Normal Scheduled pod/downloads-fcdb597fd-grdr7 Successfully assigned openshift-console/downloads-fcdb597fd-grdr7 to ip-10-0-232-8.ec2.internal
openshift-machine-api 62m Warning FailedScheduling pod/machine-api-operator-564474f8c6-284hs 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-ingress-canary 55m Normal Scheduled pod/ingress-canary-2zk7z Successfully assigned openshift-ingress-canary/ingress-canary-2zk7z to ip-10-0-232-8.ec2.internal
openshift-oauth-apiserver 55m Normal Scheduled pod/apiserver-9b9694fdc-kb6ks Successfully assigned openshift-oauth-apiserver/apiserver-9b9694fdc-kb6ks to ip-10-0-197-197.ec2.internal
openshift-console 50m Normal Scheduled pod/downloads-fcdb597fd-24zcn Successfully assigned openshift-console/downloads-fcdb597fd-24zcn to ip-10-0-160-152.ec2.internal
openshift-monitoring 49m Normal Scheduled pod/prometheus-k8s-1 Successfully assigned openshift-monitoring/prometheus-k8s-1 to ip-10-0-160-152.ec2.internal
openshift-ingress-canary 55m Normal Scheduled pod/ingress-canary-bn5dn Successfully assigned openshift-ingress-canary/ingress-canary-bn5dn to ip-10-0-160-152.ec2.internal
openshift-ingress-canary 55m Warning FailedScheduling pod/ingress-canary-bn5dn running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "ingress-canary-bn5dn": pod ingress-canary-bn5dn is already assigned to node "ip-10-0-160-152.ec2.internal"
openshift-ingress-canary 39m Normal Scheduled pod/ingress-canary-xb5f7 Successfully assigned openshift-ingress-canary/ingress-canary-xb5f7 to ip-10-0-195-121.ec2.internal
openshift-console 49m Normal Scheduled pod/console-7dc48fc574-fvlls Successfully assigned openshift-console/console-7dc48fc574-fvlls to ip-10-0-140-6.ec2.internal
openshift-console 49m Warning FailedScheduling pod/console-7dc48fc574-fvlls 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-ingress-canary 39m Normal Scheduled pod/ingress-canary-zwpz2 Successfully assigned openshift-ingress-canary/ingress-canary-zwpz2 to ip-10-0-187-75.ec2.internal
openshift-machine-api 59m Normal Scheduled pod/machine-api-controllers-674d9f54f6-r6g9g Successfully assigned openshift-machine-api/machine-api-controllers-674d9f54f6-r6g9g to ip-10-0-140-6.ec2.internal
openshift-route-controller-manager 59m Warning FailedScheduling pod/route-controller-manager-6b76fb6ddf-hqd6b 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-route-controller-manager 59m Warning FailedScheduling pod/route-controller-manager-6b76fb6ddf-hqd6b 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-machine-api 37m Normal Scheduled pod/machine-api-controllers-674d9f54f6-h4xz6 Successfully assigned openshift-machine-api/machine-api-controllers-674d9f54f6-h4xz6 to ip-10-0-239-132.ec2.internal
openshift-console 49m Warning FailedScheduling pod/console-7dc48fc574-fvlls 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-route-controller-manager 58m Warning FailedScheduling pod/route-controller-manager-678c989865-fj78v 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-route-controller-manager 59m Warning FailedScheduling pod/route-controller-manager-678c989865-fj78v 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-route-controller-manager 36m Normal Scheduled pod/route-controller-manager-6594987c6f-qfkcc Successfully assigned openshift-route-controller-manager/route-controller-manager-6594987c6f-qfkcc to ip-10-0-197-197.ec2.internal
openshift-route-controller-manager 36m Warning FailedScheduling pod/route-controller-manager-6594987c6f-qfkcc 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-route-controller-manager 28m Normal Scheduled pod/route-controller-manager-6594987c6f-q7rdv Successfully assigned openshift-route-controller-manager/route-controller-manager-6594987c6f-q7rdv to ip-10-0-197-197.ec2.internal
openshift-route-controller-manager 31m Warning FailedScheduling pod/route-controller-manager-6594987c6f-q7rdv 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-route-controller-manager 31m Warning FailedScheduling pod/route-controller-manager-6594987c6f-q7rdv 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-route-controller-manager 32m Warning FailedScheduling pod/route-controller-manager-6594987c6f-q7rdv 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-route-controller-manager 32m Warning FailedScheduling pod/route-controller-manager-6594987c6f-q7rdv 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-route-controller-manager 37m Normal Scheduled pod/route-controller-manager-6594987c6f-dcrpz Successfully assigned openshift-route-controller-manager/route-controller-manager-6594987c6f-dcrpz to ip-10-0-239-132.ec2.internal
openshift-route-controller-manager 37m Warning FailedScheduling pod/route-controller-manager-6594987c6f-dcrpz 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-route-controller-manager 37m Warning FailedScheduling pod/route-controller-manager-6594987c6f-dcrpz 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-route-controller-manager 37m Warning FailedScheduling pod/route-controller-manager-6594987c6f-dcrpz 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-route-controller-manager 39m Warning FailedScheduling pod/route-controller-manager-6594987c6f-dcrpz 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-route-controller-manager 39m Warning FailedScheduling pod/route-controller-manager-6594987c6f-dcrpz 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-route-controller-manager 39m Warning FailedScheduling pod/route-controller-manager-6594987c6f-dcrpz 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-route-controller-manager 41m Warning FailedScheduling pod/route-controller-manager-6594987c6f-dcrpz 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-route-controller-manager 41m Warning FailedScheduling pod/route-controller-manager-6594987c6f-dcrpz 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-route-controller-manager 32m Normal Scheduled pod/route-controller-manager-6594987c6f-246st Successfully assigned openshift-route-controller-manager/route-controller-manager-6594987c6f-246st to ip-10-0-140-6.ec2.internal
openshift-route-controller-manager 34m Warning FailedScheduling pod/route-controller-manager-6594987c6f-246st 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-route-controller-manager 36m Warning FailedScheduling pod/route-controller-manager-6594987c6f-246st 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-route-controller-manager 36m Warning FailedScheduling pod/route-controller-manager-6594987c6f-246st 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-console 49m Normal Scheduled pod/console-7dc48fc574-4kqrk Successfully assigned openshift-console/console-7dc48fc574-4kqrk to ip-10-0-197-197.ec2.internal
openshift-oauth-apiserver 55m Warning FailedScheduling pod/apiserver-9b9694fdc-kb6ks 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-console 41m Normal Scheduled pod/console-7db75d8d45-dzkhb Successfully assigned openshift-console/console-7db75d8d45-dzkhb to ip-10-0-140-6.ec2.internal
openshift-console 42m Warning FailedScheduling pod/console-7db75d8d45-dzkhb 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-console 42m Warning FailedScheduling pod/console-7db75d8d45-dzkhb 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-console 42m Warning FailedScheduling pod/console-7db75d8d45-dzkhb 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
default Normal TerminationPreShutdownHooksFinished namespace/kube-system All pre-shutdown hooks have been finished
default Normal TerminationStart namespace/kube-system Received signal to terminate, becoming unready, but keeping serving
default Normal TerminationMinimalShutdownDurationFinished namespace/kube-system The minimal shutdown duration of 0s finished
default Normal TerminationStoppedServing namespace/kube-system Server has stopped listening
default Normal TerminationGracefulTerminationFinished namespace/kube-system All pending requests processed
default Normal TerminationPreShutdownHooksFinished namespace/kube-system All pre-shutdown hooks have been finished
default Normal TerminationStart namespace/kube-system Received signal to terminate, becoming unready, but keeping serving
default Normal TerminationMinimalShutdownDurationFinished namespace/kube-system The minimal shutdown duration of 0s finished
default Normal TerminationStoppedServing namespace/kube-system Server has stopped listening
default Normal TerminationGracefulTerminationFinished namespace/kube-system All pending requests processed
default Normal TerminationPreShutdownHooksFinished namespace/kube-system All pre-shutdown hooks have been finished
default Normal TerminationStart namespace/kube-system Received signal to terminate, becoming unready, but keeping serving
default Normal TerminationMinimalShutdownDurationFinished namespace/kube-system The minimal shutdown duration of 0s finished
default Normal TerminationStoppedServing namespace/kube-system Server has stopped listening
default Normal TerminationGracefulTerminationFinished namespace/kube-system All pending requests processed
default Normal TerminationStart namespace/kube-system Received signal to terminate, becoming unready, but keeping serving
default Normal TerminationPreShutdownHooksFinished namespace/kube-system All pre-shutdown hooks have been finished
default Normal TerminationMinimalShutdownDurationFinished namespace/kube-system The minimal shutdown duration of 0s finished
default Normal TerminationStoppedServing namespace/kube-system Server has stopped listening
default Normal TerminationPreShutdownHooksFinished namespace/kube-system All pre-shutdown hooks have been finished
default Normal TerminationStart namespace/kube-system Received signal to terminate, becoming unready, but keeping serving
default Normal TerminationMinimalShutdownDurationFinished namespace/kube-system The minimal shutdown duration of 0s finished
default Normal TerminationStoppedServing namespace/kube-system Server has stopped listening
default Normal TerminationGracefulTerminationFinished namespace/kube-system All pending requests processed
default Normal TerminationStart namespace/kube-system Received signal to terminate, becoming unready, but keeping serving
default Normal TerminationPreShutdownHooksFinished namespace/kube-system All pre-shutdown hooks have been finished
default Normal TerminationMinimalShutdownDurationFinished namespace/kube-system The minimal shutdown duration of 0s finished
default Normal TerminationStoppedServing namespace/kube-system Server has stopped listening
default Normal TerminationGracefulTerminationFinished namespace/kube-system All pending requests processed
default Normal TerminationPreShutdownHooksFinished namespace/kube-system All pre-shutdown hooks have been finished
default Normal TerminationStart namespace/kube-system Received signal to terminate, becoming unready, but keeping serving
default Normal TerminationMinimalShutdownDurationFinished namespace/kube-system The minimal shutdown duration of 0s finished
default Normal TerminationStoppedServing namespace/kube-system Server has stopped listening
default Normal TerminationGracefulTerminationFinished namespace/kube-system All pending requests processed
default Normal TerminationPreShutdownHooksFinished namespace/kube-system All pre-shutdown hooks have been finished
default Normal TerminationStart namespace/kube-system Received signal to terminate, becoming unready, but keeping serving
default Normal TerminationMinimalShutdownDurationFinished namespace/kube-system The minimal shutdown duration of 0s finished
default Normal TerminationStoppedServing namespace/kube-system Server has stopped listening
default Normal TerminationGracefulTerminationFinished namespace/kube-system All pending requests processed
default Normal TerminationStart namespace/kube-system Received signal to terminate, becoming unready, but keeping serving
default Normal TerminationPreShutdownHooksFinished namespace/kube-system All pre-shutdown hooks have been finished
default Normal TerminationMinimalShutdownDurationFinished namespace/kube-system The minimal shutdown duration of 0s finished
default Normal TerminationStoppedServing namespace/kube-system Server has stopped listening
default Normal TerminationGracefulTerminationFinished namespace/kube-system All pending requests processed
default Normal TerminationPreShutdownHooksFinished namespace/kube-system All pre-shutdown hooks have been finished
default Normal TerminationStart namespace/kube-system Received signal to terminate, becoming unready, but keeping serving
default Normal TerminationMinimalShutdownDurationFinished namespace/kube-system The minimal shutdown duration of 0s finished
default Normal TerminationStoppedServing namespace/kube-system Server has stopped listening
default Normal TerminationGracefulTerminationFinished namespace/kube-system All pending requests processed
default Normal TerminationStart namespace/kube-system Received signal to terminate, becoming unready, but keeping serving
default Normal TerminationPreShutdownHooksFinished namespace/kube-system All pre-shutdown hooks have been finished
default Normal TerminationMinimalShutdownDurationFinished namespace/kube-system The minimal shutdown duration of 0s finished
default Normal TerminationStoppedServing namespace/kube-system Server has stopped listening
default Normal TerminationGracefulTerminationFinished namespace/kube-system All pending requests processed
openshift-ingress-operator 62m Warning FailedScheduling pod/ingress-operator-6486794b49-42ddh 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-monitoring 42m Warning FailedScheduling pod/prometheus-k8s-1 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-monitoring 42m Warning FailedScheduling pod/prometheus-k8s-1 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-monitoring 39m Warning FailedScheduling pod/prometheus-k8s-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 39m Warning FailedScheduling pod/prometheus-k8s-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-console 42m Normal Scheduled pod/console-7db75d8d45-7vkqx Successfully assigned openshift-console/console-7db75d8d45-7vkqx to ip-10-0-197-197.ec2.internal
openshift-console 42m Warning FailedScheduling pod/console-7db75d8d45-7vkqx 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-console 42m Warning FailedScheduling pod/console-7db75d8d45-7vkqx 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-console 42m Warning FailedScheduling pod/console-7db75d8d45-7vkqx 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-monitoring 39m Warning FailedScheduling pod/prometheus-k8s-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 37m Normal Scheduled pod/prometheus-k8s-1 Successfully assigned openshift-monitoring/prometheus-k8s-1 to ip-10-0-195-121.ec2.internal
openshift-monitoring 28m Warning FailedScheduling pod/prometheus-k8s-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 28m Warning FailedScheduling pod/prometheus-k8s-1 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 26m Normal Scheduled pod/prometheus-k8s-1 Successfully assigned openshift-monitoring/prometheus-k8s-1 to ip-10-0-195-121.ec2.internal
openshift-ingress-operator 59m Normal Scheduled pod/ingress-operator-6486794b49-42ddh Successfully assigned openshift-ingress-operator/ingress-operator-6486794b49-42ddh to ip-10-0-197-197.ec2.internal
openshift-console 39m Normal Scheduled pod/console-65cc7f8b45-md5n8 Successfully assigned openshift-console/console-65cc7f8b45-md5n8 to ip-10-0-140-6.ec2.internal
openshift-console 39m Warning FailedScheduling pod/console-65cc7f8b45-md5n8 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-operator-lifecycle-manager 62m Warning FailedScheduling pod/catalog-operator-567d5cdcc9-gwwnx 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-console 39m Warning FailedScheduling pod/console-65cc7f8b45-md5n8 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-oauth-apiserver 55m Warning FailedScheduling pod/apiserver-9b9694fdc-kb6ks 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-console 39m Warning FailedScheduling pod/console-65cc7f8b45-md5n8 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-kube-apiserver Normal AfterShutdownDelayDuration pod/kube-apiserver-ip-10-0-197-197.ec2.internal The minimal shutdown duration of 2m9s finished
openshift-ovn-kubernetes 40m Normal Scheduled pod/ovnkube-node-zzdfn Successfully assigned openshift-ovn-kubernetes/ovnkube-node-zzdfn to ip-10-0-187-75.ec2.internal
openshift-monitoring 31m Normal Scheduled pod/prometheus-operator-7f64545d8-7h6fd Successfully assigned openshift-monitoring/prometheus-operator-7f64545d8-7h6fd to ip-10-0-187-75.ec2.internal
openshift-console 32m Normal Scheduled pod/console-65cc7f8b45-mbjm9 Successfully assigned openshift-console/console-65cc7f8b45-mbjm9 to ip-10-0-140-6.ec2.internal
openshift-monitoring 38m Normal Scheduled pod/prometheus-operator-7f64545d8-cxj25 Successfully assigned openshift-monitoring/prometheus-operator-7f64545d8-cxj25 to ip-10-0-187-75.ec2.internal
openshift-monitoring 34m Normal Scheduled pod/prometheus-operator-7f64545d8-j6vlm Successfully assigned openshift-monitoring/prometheus-operator-7f64545d8-j6vlm to ip-10-0-195-121.ec2.internal
openshift-monitoring 58m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c549f4449-d5c7w 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
openshift-monitoring 57m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c549f4449-d5c7w 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
openshift-monitoring 55m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c549f4449-d5c7w 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-monitoring 55m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c549f4449-d5c7w 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-console 38m Normal Scheduled pod/console-65cc7f8b45-drq2q Successfully assigned openshift-console/console-65cc7f8b45-drq2q to ip-10-0-197-197.ec2.internal
openshift-console 39m Warning FailedScheduling pod/console-65cc7f8b45-drq2q 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-console 39m Warning FailedScheduling pod/console-65cc7f8b45-drq2q 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-console 39m Warning FailedScheduling pod/console-65cc7f8b45-drq2q 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-console 39m Warning FailedScheduling pod/console-65cc7f8b45-drq2q 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-monitoring 55m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c549f4449-d5c7w 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-monitoring 54m Normal Scheduled pod/prometheus-operator-admission-webhook-5c549f4449-d5c7w Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-5c549f4449-d5c7w to ip-10-0-232-8.ec2.internal
openshift-machine-api 59m Normal Scheduled pod/control-plane-machine-set-operator-77b4c948f8-s7qsh Successfully assigned openshift-machine-api/control-plane-machine-set-operator-77b4c948f8-s7qsh to ip-10-0-197-197.ec2.internal
openshift-machine-api 62m Warning FailedScheduling pod/control-plane-machine-set-operator-77b4c948f8-s7qsh 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-console 37m Normal Scheduled pod/console-65cc7f8b45-4xp2z Successfully assigned openshift-console/console-65cc7f8b45-4xp2z to ip-10-0-239-132.ec2.internal
openshift-kube-apiserver Normal InFlightRequestsDrained pod/kube-apiserver-ip-10-0-197-197.ec2.internal All non long-running request(s) in-flight have drained
openshift-ovn-kubernetes 61m Normal Scheduled pod/ovnkube-node-x8pqn Successfully assigned openshift-ovn-kubernetes/ovnkube-node-x8pqn to ip-10-0-197-197.ec2.internal
openshift-ingress-operator 32m Normal Scheduled pod/ingress-operator-6486794b49-9zv9g Successfully assigned openshift-ingress-operator/ingress-operator-6486794b49-9zv9g to ip-10-0-140-6.ec2.internal
openshift-machine-api 32m Normal Scheduled pod/control-plane-machine-set-operator-77b4c948f8-7vvdb Successfully assigned openshift-machine-api/control-plane-machine-set-operator-77b4c948f8-7vvdb to ip-10-0-140-6.ec2.internal
openshift-kube-apiserver Normal HTTPServerStoppedListening pod/kube-apiserver-ip-10-0-197-197.ec2.internal HTTP Server has stopped listening
openshift-oauth-apiserver 55m Normal Scheduled pod/apiserver-9b9694fdc-g7gxw Successfully assigned openshift-oauth-apiserver/apiserver-9b9694fdc-g7gxw to ip-10-0-239-132.ec2.internal
openshift-machine-api 32m Normal Scheduled pod/cluster-baremetal-operator-cb6794dd9-h8ch4 Successfully assigned openshift-machine-api/cluster-baremetal-operator-cb6794dd9-h8ch4 to ip-10-0-140-6.ec2.internal
openshift-oauth-apiserver 55m Warning FailedScheduling pod/apiserver-9b9694fdc-g7gxw 0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 1 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-operator-lifecycle-manager 59m Normal Scheduled pod/catalog-operator-567d5cdcc9-gwwnx Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-567d5cdcc9-gwwnx to ip-10-0-197-197.ec2.internal
openshift-console 50m Normal Scheduled pod/console-64949fc89-v8nrv Successfully assigned openshift-console/console-64949fc89-v8nrv to ip-10-0-140-6.ec2.internal
openshift-oauth-apiserver 55m Warning FailedScheduling pod/apiserver-9b9694fdc-g7gxw 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-ingress 57m Warning FailedScheduling pod/router-default-699d8c97f-6nwwk 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
openshift-ingress 55m Normal Scheduled pod/router-default-699d8c97f-6nwwk Successfully assigned openshift-ingress/router-default-699d8c97f-6nwwk to ip-10-0-160-152.ec2.internal
openshift-ingress 58m Warning FailedScheduling pod/router-default-699d8c97f-9xbcx 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
default Warning KubeAPIReadyz namespace/openshift-kube-apiserver readyz=true
default Normal ShutdownInitiated namespace/openshift-kube-apiserver Received signal to terminate, becoming unready, but keeping serving
default Normal TerminationPreShutdownHooksFinished namespace/openshift-kube-apiserver All pre-shutdown hooks have been finished
default Normal AfterShutdownDelayDuration namespace/openshift-kube-apiserver The minimal shutdown duration of 1m10s finished
default Normal InFlightRequestsDrained namespace/openshift-kube-apiserver All non long-running request(s) in-flight have drained
default Normal HTTPServerStoppedListening namespace/openshift-kube-apiserver HTTP Server has stopped listening
default Normal TerminationGracefulTerminationFinished namespace/openshift-kube-apiserver All pending requests processed
openshift-ingress 57m Warning FailedScheduling pod/router-default-699d8c97f-9xbcx 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
openshift-console 50m Normal Scheduled pod/console-64949fc89-nhxbj Successfully assigned openshift-console/console-64949fc89-nhxbj to ip-10-0-239-132.ec2.internal
openshift-ingress 55m Warning FailedScheduling pod/router-default-699d8c97f-9xbcx 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-ingress 55m Normal Scheduled pod/router-default-699d8c97f-9xbcx Successfully assigned openshift-ingress/router-default-699d8c97f-9xbcx to ip-10-0-160-152.ec2.internal
openshift-monitoring 58m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c549f4449-v9x8h 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
openshift-console 42m Warning FailedScheduling pod/console-569c4c4669-p6rk8 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-ovn-kubernetes 55m Warning FailedScheduling pod/ovnkube-node-x4z8l running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "ovnkube-node-x4z8l": pod ovnkube-node-x4z8l is already assigned to node "ip-10-0-232-8.ec2.internal"
openshift-ovn-kubernetes 55m Normal Scheduled pod/ovnkube-node-x4z8l Successfully assigned openshift-ovn-kubernetes/ovnkube-node-x4z8l to ip-10-0-232-8.ec2.internal
openshift-monitoring 57m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c549f4449-v9x8h 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
openshift-monitoring 55m Normal Scheduled pod/prometheus-operator-admission-webhook-5c549f4449-v9x8h Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-5c549f4449-v9x8h to ip-10-0-160-152.ec2.internal openshift-ingress 49m Normal Scheduled pod/router-default-699d8c97f-mlkcv Successfully assigned openshift-ingress/router-default-699d8c97f-mlkcv to ip-10-0-232-8.ec2.internal openshift-console 42m Normal Scheduled pod/console-569c4c4669-gdr7m Successfully assigned openshift-console/console-569c4c4669-gdr7m to ip-10-0-239-132.ec2.internal openshift-monitoring 42m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.. openshift-monitoring 39m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-monitoring 39m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-monitoring 39m Normal Scheduled pod/prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m to ip-10-0-195-121.ec2.internal openshift-monitoring 31m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c9b9d98cc-9qkgr 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-monitoring 31m Normal Scheduled pod/kube-state-metrics-7d7b86bb68-kpmhh Successfully assigned openshift-monitoring/kube-state-metrics-7d7b86bb68-kpmhh to ip-10-0-187-75.ec2.internal openshift-monitoring 31m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c9b9d98cc-9qkgr 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-oauth-apiserver 55m Warning FailedScheduling pod/apiserver-9b9694fdc-g7gxw 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. 
preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules.. openshift-oauth-apiserver 51m Normal Scheduled pod/apiserver-8ddbf84fd-g8ssl Successfully assigned openshift-oauth-apiserver/apiserver-8ddbf84fd-g8ssl to ip-10-0-197-197.ec2.internal openshift-oauth-apiserver 52m Warning FailedScheduling pod/apiserver-8ddbf84fd-g8ssl 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-oauth-apiserver 52m Warning FailedScheduling pod/apiserver-8ddbf84fd-g8ssl 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-monitoring 28m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c9b9d98cc-9qkgr 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-monitoring 28m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c9b9d98cc-9qkgr 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-kube-apiserver Normal TerminationGracefulTerminationFinished pod/kube-apiserver-ip-10-0-197-197.ec2.internal All pending requests processed openshift-oauth-apiserver 49m Normal Scheduled pod/apiserver-8ddbf84fd-7qf7p Successfully assigned openshift-oauth-apiserver/apiserver-8ddbf84fd-7qf7p to ip-10-0-239-132.ec2.internal openshift-ovn-kubernetes 61m Normal Scheduled pod/ovnkube-node-wsrzb Successfully assigned openshift-ovn-kubernetes/ovnkube-node-wsrzb to ip-10-0-239-132.ec2.internal openshift-monitoring 26m Normal Scheduled pod/prometheus-operator-admission-webhook-5c9b9d98cc-9qkgr Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-5c9b9d98cc-9qkgr to ip-10-0-195-121.ec2.internal openshift-monitoring 34m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c9b9d98cc-dvsqk 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-monitoring 34m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c9b9d98cc-dvsqk 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. 
openshift-monitoring 32m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c9b9d98cc-dvsqk 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-monitoring 32m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c9b9d98cc-dvsqk 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-machine-api 59m Normal Scheduled pod/cluster-baremetal-operator-cb6794dd9-8bqk2 Successfully assigned openshift-machine-api/cluster-baremetal-operator-cb6794dd9-8bqk2 to ip-10-0-197-197.ec2.internal openshift-ingress 42m Warning FailedScheduling pod/router-default-75b548b966-bd28g 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.. openshift-ingress 41m Warning FailedScheduling pod/router-default-75b548b966-bd28g skip schedule deleting pod: openshift-ingress/router-default-75b548b966-bd28g openshift-ingress 42m Warning FailedScheduling pod/router-default-75b548b966-br22c 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.. openshift-ingress 41m Warning FailedScheduling pod/router-default-75b548b966-br22c skip schedule deleting pod: openshift-ingress/router-default-75b548b966-br22c openshift-monitoring 31m Normal Scheduled pod/prometheus-operator-admission-webhook-5c9b9d98cc-dvsqk Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-5c9b9d98cc-dvsqk to ip-10-0-187-75.ec2.internal openshift-machine-api 62m Warning FailedScheduling pod/cluster-baremetal-operator-cb6794dd9-8bqk2 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. openshift-oauth-apiserver 49m Warning FailedScheduling pod/apiserver-8ddbf84fd-7qf7p 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-oauth-apiserver 50m Warning FailedScheduling pod/apiserver-8ddbf84fd-7qf7p 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. 
openshift-ingress 42m Normal Scheduled pod/router-default-7898b977d4-l6kqr Successfully assigned openshift-ingress/router-default-7898b977d4-l6kqr to ip-10-0-232-8.ec2.internal openshift-machine-api 32m Normal Scheduled pod/cluster-autoscaler-operator-7fcffdb7c8-hswcn Successfully assigned openshift-machine-api/cluster-autoscaler-operator-7fcffdb7c8-hswcn to ip-10-0-239-132.ec2.internal openshift-ingress 42m Normal Scheduled pod/router-default-7898b977d4-vhrfb Successfully assigned openshift-ingress/router-default-7898b977d4-vhrfb to ip-10-0-160-152.ec2.internal openshift-ingress 34m Warning FailedScheduling pod/router-default-7cf4c94d4-klqtt 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-ingress 34m Warning FailedScheduling pod/router-default-7cf4c94d4-klqtt 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-ingress 32m Warning FailedScheduling pod/router-default-7cf4c94d4-klqtt 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-ingress 32m Warning FailedScheduling pod/router-default-7cf4c94d4-klqtt 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-ingress 31m Normal Scheduled pod/router-default-7cf4c94d4-klqtt Successfully assigned openshift-ingress/router-default-7cf4c94d4-klqtt to ip-10-0-187-75.ec2.internal openshift-ovn-kubernetes 55m Warning FailedScheduling pod/ovnkube-node-8sb9g running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "ovnkube-node-8sb9g": pod ovnkube-node-8sb9g is already assigned to node "ip-10-0-160-152.ec2.internal" openshift-ovn-kubernetes 55m Normal Scheduled pod/ovnkube-node-8sb9g Successfully assigned openshift-ovn-kubernetes/ovnkube-node-8sb9g to ip-10-0-160-152.ec2.internal openshift-machine-api 59m Normal Scheduled pod/cluster-autoscaler-operator-7fcffdb7c8-g4w4m Successfully assigned openshift-machine-api/cluster-autoscaler-operator-7fcffdb7c8-g4w4m to ip-10-0-197-197.ec2.internal openshift-monitoring 42m Warning FailedScheduling pod/prometheus-operator-admission-webhook-5c9b9d98cc-nznt8 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.. 
openshift-monitoring 39m Normal Scheduled pod/prometheus-operator-admission-webhook-5c9b9d98cc-nznt8 Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-5c9b9d98cc-nznt8 to ip-10-0-187-75.ec2.internal openshift-machine-api 62m Warning FailedScheduling pod/cluster-autoscaler-operator-7fcffdb7c8-g4w4m 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. openshift-ingress 41m Warning FailedScheduling pod/router-default-7cf4c94d4-s4mh5 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.. openshift-ingress 39m Normal Scheduled pod/router-default-7cf4c94d4-s4mh5 Successfully assigned openshift-ingress/router-default-7cf4c94d4-s4mh5 to ip-10-0-187-75.ec2.internal openshift-monitoring 53m Normal Scheduled pod/prometheus-operator-f4cf7fb47-bhql4 Successfully assigned openshift-monitoring/prometheus-operator-f4cf7fb47-bhql4 to ip-10-0-140-6.ec2.internal openshift-oauth-apiserver 50m Warning FailedScheduling pod/apiserver-8ddbf84fd-7qf7p 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-authentication 41m Warning FailedScheduling pod/oauth-openshift-58cb97bf44-dtw8g 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling.. openshift-authentication 41m Warning FailedScheduling pod/oauth-openshift-58cb97bf44-dtw8g 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling.. openshift-authentication 39m Warning FailedScheduling pod/oauth-openshift-58cb97bf44-dtw8g 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-authentication 39m Warning FailedScheduling pod/oauth-openshift-58cb97bf44-dtw8g 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-authentication 39m Warning FailedScheduling pod/oauth-openshift-58cb97bf44-dtw8g 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. 
preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-oauth-apiserver 48m Normal Scheduled pod/apiserver-8ddbf84fd-4jwnk Successfully assigned openshift-oauth-apiserver/apiserver-8ddbf84fd-4jwnk to ip-10-0-140-6.ec2.internal openshift-authentication 37m Warning FailedScheduling pod/oauth-openshift-5c9d8ccbcc-bkr8m 0/7 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-authentication 37m Warning FailedScheduling pod/oauth-openshift-5c9d8ccbcc-bkr8m 0/7 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-operator-lifecycle-manager 32m Normal Scheduled pod/catalog-operator-567d5cdcc9-zvdz6 Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-567d5cdcc9-zvdz6 to ip-10-0-239-132.ec2.internal openshift-authentication 37m Warning FailedScheduling pod/oauth-openshift-5c9d8ccbcc-bkr8m 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-authentication 37m Normal Scheduled pod/oauth-openshift-5c9d8ccbcc-bkr8m Successfully assigned openshift-authentication/oauth-openshift-5c9d8ccbcc-bkr8m to ip-10-0-239-132.ec2.internal openshift-oauth-apiserver 48m Warning FailedScheduling pod/apiserver-8ddbf84fd-4jwnk 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-apiserver Normal TerminationGracefulTerminationFinished pod/apiserver-7475f65d84-whqlh All pending requests processed openshift-operator-lifecycle-manager 59m Warning FailedScheduling pod/collect-profiles-27990015-4vlzz 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. openshift-apiserver Normal TerminationPreShutdownHooksFinished pod/apiserver-7475f65d84-whqlh All pre-shutdown hooks have been finished openshift-apiserver Normal TerminationStoppedServing pod/apiserver-7475f65d84-whqlh Server has stopped listening openshift-apiserver Normal TerminationMinimalShutdownDurationFinished pod/apiserver-7475f65d84-whqlh The minimal shutdown duration of 15s finished openshift-operator-lifecycle-manager 57m Warning FailedScheduling pod/collect-profiles-27990015-4vlzz 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.. 
openshift-operator-lifecycle-manager 55m Normal Scheduled pod/collect-profiles-27990015-4vlzz Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-27990015-4vlzz to ip-10-0-160-152.ec2.internal openshift-apiserver Normal TerminationStart pod/apiserver-7475f65d84-whqlh Received signal to terminate, becoming unready, but keeping serving openshift-oauth-apiserver 49m Warning FailedScheduling pod/apiserver-8ddbf84fd-4jwnk 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-authentication 37m Warning FailedScheduling pod/oauth-openshift-5c9d8ccbcc-vkchb 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-authentication 37m Warning FailedScheduling pod/oauth-openshift-5c9d8ccbcc-vkchb 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-authentication 34m Warning FailedScheduling pod/oauth-openshift-5c9d8ccbcc-vkchb 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-authentication 53m Normal Scheduled pod/oauth-openshift-5fdc498fc9-2ktk4 Successfully assigned openshift-authentication/oauth-openshift-5fdc498fc9-2ktk4 to ip-10-0-197-197.ec2.internal openshift-authentication 53m Normal Scheduled pod/oauth-openshift-5fdc498fc9-pbpqd Successfully assigned openshift-authentication/oauth-openshift-5fdc498fc9-pbpqd to ip-10-0-140-6.ec2.internal openshift-kube-apiserver Warning KubeAPIReadyz pod/kube-apiserver-ip-10-0-197-197.ec2.internal readyz=true openshift-authentication 53m Normal Scheduled pod/oauth-openshift-5fdc498fc9-vjtw8 Successfully assigned openshift-authentication/oauth-openshift-5fdc498fc9-vjtw8 to ip-10-0-239-132.ec2.internal openshift-authentication 37m Warning FailedScheduling pod/oauth-openshift-6cd75d67b9-27tx4 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-authentication 37m Warning FailedScheduling pod/oauth-openshift-6cd75d67b9-27tx4 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. 
preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-authentication 42m Warning FailedScheduling pod/oauth-openshift-6cd75d67b9-btb4m 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-authentication 42m Warning FailedScheduling pod/oauth-openshift-6cd75d67b9-btb4m 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling.. openshift-authentication 42m Warning FailedScheduling pod/oauth-openshift-6cd75d67b9-btb4m 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling.. openshift-authentication 42m Normal Scheduled pod/oauth-openshift-6cd75d67b9-btb4m Successfully assigned openshift-authentication/oauth-openshift-6cd75d67b9-btb4m to ip-10-0-140-6.ec2.internal openshift-authentication 42m Warning FailedScheduling pod/oauth-openshift-6cd75d67b9-hnvl6 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling.. openshift-authentication 41m Warning FailedScheduling pod/oauth-openshift-6cd75d67b9-hnvl6 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling.. openshift-oauth-apiserver 57m Normal Scheduled pod/apiserver-89645c77-szcw6 Successfully assigned openshift-oauth-apiserver/apiserver-89645c77-szcw6 to ip-10-0-239-132.ec2.internal openshift-apiserver 47m Normal Scheduled pod/apiserver-7475f65d84-whqlh Successfully assigned openshift-apiserver/apiserver-7475f65d84-whqlh to ip-10-0-197-197.ec2.internal openshift-apiserver 48m Warning FailedScheduling pod/apiserver-7475f65d84-whqlh 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-apiserver 49m Warning FailedScheduling pod/apiserver-7475f65d84-whqlh 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. 
openshift-apiserver Normal TerminationGracefulTerminationFinished pod/apiserver-7475f65d84-lm7x6 All pending requests processed openshift-authentication 28m Warning FailedScheduling pod/oauth-openshift-85644d984b-2d8rq 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling.. openshift-authentication 28m Warning FailedScheduling pod/oauth-openshift-85644d984b-2d8rq 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling.. openshift-apiserver Normal TerminationPreShutdownHooksFinished pod/apiserver-7475f65d84-lm7x6 All pre-shutdown hooks have been finished openshift-apiserver Normal TerminationStoppedServing pod/apiserver-7475f65d84-lm7x6 Server has stopped listening openshift-apiserver Normal TerminationMinimalShutdownDurationFinished pod/apiserver-7475f65d84-lm7x6 The minimal shutdown duration of 15s finished openshift-operator-lifecycle-manager 44m Normal Scheduled pod/collect-profiles-27990030-m4gbh Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-27990030-m4gbh to ip-10-0-160-152.ec2.internal openshift-authentication 27m Normal Scheduled pod/oauth-openshift-85644d984b-2d8rq Successfully assigned openshift-authentication/oauth-openshift-85644d984b-2d8rq to ip-10-0-239-132.ec2.internal openshift-apiserver Normal TerminationStart pod/apiserver-7475f65d84-lm7x6 Received signal to terminate, becoming unready, but keeping serving openshift-authentication 32m Warning FailedScheduling pod/oauth-openshift-85644d984b-5jmpn 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/7 nodes are available: 3 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling.. openshift-authentication 32m Warning FailedScheduling pod/oauth-openshift-85644d984b-5jmpn 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-authentication 32m Warning FailedScheduling pod/oauth-openshift-85644d984b-5jmpn 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. 
openshift-authentication 31m Warning FailedScheduling pod/oauth-openshift-85644d984b-5jmpn 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-authentication 31m Warning FailedScheduling pod/oauth-openshift-85644d984b-5jmpn 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-console-operator 42m Normal Scheduled pod/console-operator-57cbc6b88f-tbq55 Successfully assigned openshift-console-operator/console-operator-57cbc6b88f-tbq55 to ip-10-0-140-6.ec2.internal openshift-kube-storage-version-migrator 59m Normal Scheduled pod/migrator-579f5cd9c5-sk4xj Successfully assigned openshift-kube-storage-version-migrator/migrator-579f5cd9c5-sk4xj to ip-10-0-239-132.ec2.internal openshift-kube-storage-version-migrator 37m Normal Scheduled pod/migrator-579f5cd9c5-qkfvb Successfully assigned openshift-kube-storage-version-migrator/migrator-579f5cd9c5-qkfvb to ip-10-0-160-152.ec2.internal openshift-ingress 31m Warning FailedScheduling pod/router-default-7cf4c94d4-tqmcb 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-ingress 31m Warning FailedScheduling pod/router-default-7cf4c94d4-tqmcb 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-ingress 28m Warning FailedScheduling pod/router-default-7cf4c94d4-tqmcb 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-ingress 28m Warning FailedScheduling pod/router-default-7cf4c94d4-tqmcb 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. 
openshift-ingress 26m Normal Scheduled pod/router-default-7cf4c94d4-tqmcb Successfully assigned openshift-ingress/router-default-7cf4c94d4-tqmcb to ip-10-0-195-121.ec2.internal openshift-kube-storage-version-migrator 42m Normal Scheduled pod/migrator-579f5cd9c5-flz72 Successfully assigned openshift-kube-storage-version-migrator/migrator-579f5cd9c5-flz72 to ip-10-0-232-8.ec2.internal openshift-ingress 41m Warning FailedScheduling pod/router-default-7cf4c94d4-zs7xj 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 5 Preemption is not helpful for scheduling.. openshift-console-operator 50m Normal Scheduled pod/console-operator-57cbc6b88f-snwcj Successfully assigned openshift-console-operator/console-operator-57cbc6b88f-snwcj to ip-10-0-239-132.ec2.internal openshift-ingress 39m Warning FailedScheduling pod/router-default-7cf4c94d4-zs7xj 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-ingress 39m Warning FailedScheduling pod/router-default-7cf4c94d4-zs7xj 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) didn't match pod anti-affinity rules, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-ingress 39m Normal Scheduled pod/router-default-7cf4c94d4-zs7xj Successfully assigned openshift-ingress/router-default-7cf4c94d4-zs7xj to ip-10-0-195-121.ec2.internal openshift-authentication 28m Normal Scheduled pod/oauth-openshift-85644d984b-5jmpn Successfully assigned openshift-authentication/oauth-openshift-85644d984b-5jmpn to ip-10-0-197-197.ec2.internal openshift-authentication 33m Warning FailedScheduling pod/oauth-openshift-85644d984b-qhpfp 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-authentication 32m Warning FailedScheduling pod/oauth-openshift-85644d984b-qhpfp 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. 
openshift-authentication 32m Normal Scheduled pod/oauth-openshift-85644d984b-qhpfp Successfully assigned openshift-authentication/oauth-openshift-85644d984b-qhpfp to ip-10-0-140-6.ec2.internal openshift-kube-apiserver Warning KubeAPIReadyz pod/kube-apiserver-ip-10-0-197-197.ec2.internal readyz=true openshift-oauth-apiserver 57m Normal Scheduled pod/apiserver-89645c77-fdwmw Successfully assigned openshift-oauth-apiserver/apiserver-89645c77-fdwmw to ip-10-0-140-6.ec2.internal openshift-monitoring 42m Normal Scheduled pod/sre-dns-latency-exporter-4j7vx Successfully assigned openshift-monitoring/sre-dns-latency-exporter-4j7vx to ip-10-0-160-152.ec2.internal openshift-monitoring 42m Normal Scheduled pod/sre-dns-latency-exporter-62rmk Successfully assigned openshift-monitoring/sre-dns-latency-exporter-62rmk to ip-10-0-197-197.ec2.internal openshift-monitoring 42m Normal Scheduled pod/sre-dns-latency-exporter-fvnpq Successfully assigned openshift-monitoring/sre-dns-latency-exporter-fvnpq to ip-10-0-239-132.ec2.internal openshift-monitoring 40m Normal Scheduled pod/sre-dns-latency-exporter-hm6bk Successfully assigned openshift-monitoring/sre-dns-latency-exporter-hm6bk to ip-10-0-187-75.ec2.internal openshift-console-operator 37m Normal Scheduled pod/console-operator-57cbc6b88f-b2ttj Successfully assigned openshift-console-operator/console-operator-57cbc6b88f-b2ttj to ip-10-0-239-132.ec2.internal openshift-monitoring 42m Normal Scheduled pod/sre-dns-latency-exporter-snmkd Successfully assigned openshift-monitoring/sre-dns-latency-exporter-snmkd to ip-10-0-232-8.ec2.internal openshift-monitoring 42m Normal Scheduled pod/sre-dns-latency-exporter-t9jjt Successfully assigned openshift-monitoring/sre-dns-latency-exporter-t9jjt to ip-10-0-140-6.ec2.internal openshift-authentication 49m Warning FailedScheduling pod/oauth-openshift-86966797f8-b47q9 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-monitoring 39m Normal Scheduled pod/sre-dns-latency-exporter-v8kzl Successfully assigned openshift-monitoring/sre-dns-latency-exporter-v8kzl to ip-10-0-195-121.ec2.internal openshift-apiserver 45m Normal Scheduled pod/apiserver-7475f65d84-lm7x6 Successfully assigned openshift-apiserver/apiserver-7475f65d84-lm7x6 to ip-10-0-140-6.ec2.internal openshift-apiserver 45m Warning FailedScheduling pod/apiserver-7475f65d84-lm7x6 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-config-operator 59m Normal Scheduled pod/openshift-config-operator-67bdbffb68-sdgx7 Successfully assigned openshift-config-operator/openshift-config-operator-67bdbffb68-sdgx7 to ip-10-0-197-197.ec2.internal openshift-config-operator 62m Warning FailedScheduling pod/openshift-config-operator-67bdbffb68-sdgx7 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. 
openshift-monitoring 42m Normal Scheduled pod/sre-ebs-iops-reporter-1-5p7mx Successfully assigned openshift-monitoring/sre-ebs-iops-reporter-1-5p7mx to ip-10-0-232-8.ec2.internal openshift-config-operator 32m Normal Scheduled pod/openshift-config-operator-67bdbffb68-9f2m6 Successfully assigned openshift-config-operator/openshift-config-operator-67bdbffb68-9f2m6 to ip-10-0-239-132.ec2.internal openshift-monitoring 42m Normal Scheduled pod/sre-ebs-iops-reporter-1-deploy Successfully assigned openshift-monitoring/sre-ebs-iops-reporter-1-deploy to ip-10-0-160-152.ec2.internal openshift-monitoring 42m Normal Scheduled pod/sre-ebs-iops-reporter-1-x89c4 Successfully assigned openshift-monitoring/sre-ebs-iops-reporter-1-x89c4 to ip-10-0-160-152.ec2.internal openshift-monitoring 42m Normal Scheduled pod/sre-stuck-ebs-vols-1-7pl6b Successfully assigned openshift-monitoring/sre-stuck-ebs-vols-1-7pl6b to ip-10-0-160-152.ec2.internal openshift-monitoring 42m Normal Scheduled pod/sre-stuck-ebs-vols-1-deploy Successfully assigned openshift-monitoring/sre-stuck-ebs-vols-1-deploy to ip-10-0-232-8.ec2.internal openshift-monitoring 37m Normal Scheduled pod/sre-stuck-ebs-vols-1-fzwz8 Successfully assigned openshift-monitoring/sre-stuck-ebs-vols-1-fzwz8 to ip-10-0-160-152.ec2.internal openshift-monitoring 42m Normal Scheduled pod/sre-stuck-ebs-vols-1-ws5wv Successfully assigned openshift-monitoring/sre-stuck-ebs-vols-1-ws5wv to ip-10-0-232-8.ec2.internal openshift-monitoring 52m Normal Scheduled pod/telemeter-client-5bd4dfdf7c-2982f Successfully assigned openshift-monitoring/telemeter-client-5bd4dfdf7c-2982f to ip-10-0-232-8.ec2.internal openshift-apiserver 46m Warning FailedScheduling pod/apiserver-7475f65d84-lm7x6 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-apiserver Normal TerminationGracefulTerminationFinished pod/apiserver-7475f65d84-4ncn2 All pending requests processed openshift-authentication 49m Warning FailedScheduling pod/oauth-openshift-86966797f8-b47q9 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-authentication 48m Normal Scheduled pod/oauth-openshift-86966797f8-b47q9 Successfully assigned openshift-authentication/oauth-openshift-86966797f8-b47q9 to ip-10-0-197-197.ec2.internal openshift-apiserver Normal TerminationPreShutdownHooksFinished pod/apiserver-7475f65d84-4ncn2 All pre-shutdown hooks have been finished openshift-apiserver Normal TerminationStoppedServing pod/apiserver-7475f65d84-4ncn2 Server has stopped listening openshift-apiserver Normal TerminationPreShutdownHooksFinished pod/apiserver-5f568869f-b9bw5 All pre-shutdown hooks have been finished openshift-apiserver Normal TerminationMinimalShutdownDurationFinished pod/apiserver-7475f65d84-4ncn2 The minimal shutdown duration of 15s finished openshift-authentication 50m Warning FailedScheduling pod/oauth-openshift-86966797f8-g5rm7 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. 
openshift-apiserver-operator 32m Normal Scheduled pod/openshift-apiserver-operator-67fd94b9d7-m22hm Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-67fd94b9d7-m22hm to ip-10-0-140-6.ec2.internal openshift-operator-lifecycle-manager 28m Normal Scheduled pod/collect-profiles-27990045-xf7fw Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-27990045-xf7fw to ip-10-0-232-8.ec2.internal openshift-apiserver Normal TerminationStart pod/apiserver-7475f65d84-4ncn2 Received signal to terminate, becoming unready, but keeping serving openshift-operator-lifecycle-manager 14m Normal Scheduled pod/collect-profiles-27990060-kvm2x Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-27990060-kvm2x to ip-10-0-232-8.ec2.internal openshift-ovn-kubernetes 61m Normal Scheduled pod/ovnkube-node-8qw6d Successfully assigned openshift-ovn-kubernetes/ovnkube-node-8qw6d to ip-10-0-140-6.ec2.internal openshift-ovn-kubernetes 39m Normal Scheduled pod/ovnkube-node-6jsx2 Successfully assigned openshift-ovn-kubernetes/ovnkube-node-6jsx2 to ip-10-0-195-121.ec2.internal openshift-apiserver-operator 61m Warning FailedScheduling pod/openshift-apiserver-operator-67fd94b9d7-nvg29 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. openshift-apiserver-operator 59m Normal Scheduled pod/openshift-apiserver-operator-67fd94b9d7-nvg29 Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-67fd94b9d7-nvg29 to ip-10-0-197-197.ec2.internal openshift-authentication 50m Warning FailedScheduling pod/oauth-openshift-86966797f8-g5rm7 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-kube-apiserver Normal ShutdownInitiated pod/kube-apiserver-ip-10-0-197-197.ec2.internal Received signal to terminate, becoming unready, but keeping serving openshift-kube-apiserver Normal TerminationPreShutdownHooksFinished pod/kube-apiserver-ip-10-0-197-197.ec2.internal All pre-shutdown hooks have been finished openshift-kube-apiserver Normal AfterShutdownDelayDuration pod/kube-apiserver-ip-10-0-197-197.ec2.internal The minimal shutdown duration of 2m9s finished openshift-kube-apiserver Normal InFlightRequestsDrained pod/kube-apiserver-ip-10-0-197-197.ec2.internal All non long-running request(s) in-flight have drained openshift-kube-apiserver Normal HTTPServerStoppedListening pod/kube-apiserver-ip-10-0-197-197.ec2.internal HTTP Server has stopped listening openshift-kube-apiserver Normal TerminationGracefulTerminationFinished pod/kube-apiserver-ip-10-0-197-197.ec2.internal All pending requests processed openshift-authentication 49m Warning FailedScheduling pod/oauth-openshift-86966797f8-g5rm7 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. 
openshift-authentication 49m Normal Scheduled pod/oauth-openshift-86966797f8-g5rm7 Successfully assigned openshift-authentication/oauth-openshift-86966797f8-g5rm7 to ip-10-0-140-6.ec2.internal openshift-authentication 48m Warning FailedScheduling pod/oauth-openshift-86966797f8-sbdp5 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-authentication 48m Warning FailedScheduling pod/oauth-openshift-86966797f8-sbdp5 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-apiserver 49m Normal Scheduled pod/apiserver-7475f65d84-4ncn2 Successfully assigned openshift-apiserver/apiserver-7475f65d84-4ncn2 to ip-10-0-239-132.ec2.internal openshift-cluster-version 62m Normal Scheduled pod/cluster-version-operator-5d74b9d6f5-qzcfb Successfully assigned openshift-cluster-version/cluster-version-operator-5d74b9d6f5-qzcfb to ip-10-0-239-132.ec2.internal openshift-monitoring 37m Normal Scheduled pod/telemeter-client-5c9599c744-827bg Successfully assigned openshift-monitoring/telemeter-client-5c9599c744-827bg to ip-10-0-195-121.ec2.internal openshift-apiserver 49m Warning FailedScheduling pod/apiserver-7475f65d84-4ncn2 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-monitoring 31m Normal Scheduled pod/telemeter-client-5c9599c744-rlt2c Successfully assigned openshift-monitoring/telemeter-client-5c9599c744-rlt2c to ip-10-0-187-75.ec2.internal openshift-apiserver 50m Warning FailedScheduling pod/apiserver-7475f65d84-4ncn2 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. openshift-cluster-version 37m Normal Scheduled pod/cluster-version-operator-5d74b9d6f5-nclrf Successfully assigned openshift-cluster-version/cluster-version-operator-5d74b9d6f5-nclrf to ip-10-0-239-132.ec2.internal openshift-monitoring 26m Normal Scheduled pod/telemeter-client-6756b7679c-qgzlk Successfully assigned openshift-monitoring/telemeter-client-6756b7679c-qgzlk to ip-10-0-187-75.ec2.internal openshift-monitoring 37m Normal Scheduled pod/thanos-querier-6566ccfdd9-7cwhk Successfully assigned openshift-monitoring/thanos-querier-6566ccfdd9-7cwhk to ip-10-0-195-121.ec2.internal openshift-monitoring 37m Normal Scheduled pod/thanos-querier-6566ccfdd9-jmz7s Successfully assigned openshift-monitoring/thanos-querier-6566ccfdd9-jmz7s to ip-10-0-187-75.ec2.internal openshift-monitoring 31m Warning FailedScheduling pod/thanos-querier-6566ccfdd9-lkbh6 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. 
openshift-cluster-version 42m Normal Scheduled pod/cluster-version-operator-5d74b9d6f5-689xc Successfully assigned openshift-cluster-version/cluster-version-operator-5d74b9d6f5-689xc to ip-10-0-140-6.ec2.internal openshift-monitoring 31m Warning FailedScheduling pod/thanos-querier-6566ccfdd9-lkbh6 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-monitoring 28m Warning FailedScheduling pod/thanos-querier-6566ccfdd9-lkbh6 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-monitoring 28m Warning FailedScheduling pod/thanos-querier-6566ccfdd9-lkbh6 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-monitoring 26m Normal Scheduled pod/thanos-querier-6566ccfdd9-lkbh6 Successfully assigned openshift-monitoring/thanos-querier-6566ccfdd9-lkbh6 to ip-10-0-195-121.ec2.internal openshift-monitoring 34m Warning FailedScheduling pod/thanos-querier-6566ccfdd9-vppqt 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-monitoring 34m Warning FailedScheduling pod/thanos-querier-6566ccfdd9-vppqt 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-monitoring 32m Warning FailedScheduling pod/thanos-querier-6566ccfdd9-vppqt 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. openshift-monitoring 32m Warning FailedScheduling pod/thanos-querier-6566ccfdd9-vppqt 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling.. 
openshift-monitoring 31m Normal Scheduled pod/thanos-querier-6566ccfdd9-vppqt Successfully assigned openshift-monitoring/thanos-querier-6566ccfdd9-vppqt to ip-10-0-187-75.ec2.internal openshift-monitoring 52m Normal Scheduled pod/thanos-querier-7bbf5b5dcd-7fpvv Successfully assigned openshift-monitoring/thanos-querier-7bbf5b5dcd-7fpvv to ip-10-0-232-8.ec2.internal openshift-cluster-storage-operator 37m Normal Scheduled pod/csi-snapshot-webhook-75476bf784-zlxp4 Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-75476bf784-zlxp4 to ip-10-0-239-132.ec2.internal openshift-kube-storage-version-migrator-operator 59m Normal Scheduled pod/kube-storage-version-migrator-operator-7f8b95cf5f-x5hzl Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f8b95cf5f-x5hzl to ip-10-0-197-197.ec2.internal openshift-kube-storage-version-migrator-operator 61m Warning FailedScheduling pod/kube-storage-version-migrator-operator-7f8b95cf5f-x5hzl 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. openshift-kube-storage-version-migrator-operator 32m Normal Scheduled pod/kube-storage-version-migrator-operator-7f8b95cf5f-dvvp5 Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f8b95cf5f-dvvp5 to ip-10-0-140-6.ec2.internal openshift-insights 32m Normal Scheduled pod/insights-operator-6fd65c6b65-lh6xj Successfully assigned openshift-insights/insights-operator-6fd65c6b65-lh6xj to ip-10-0-140-6.ec2.internal openshift-insights 62m Warning FailedScheduling pod/insights-operator-6fd65c6b65-vrxhp 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. openshift-insights 59m Normal Scheduled pod/insights-operator-6fd65c6b65-vrxhp Successfully assigned openshift-insights/insights-operator-6fd65c6b65-vrxhp to ip-10-0-197-197.ec2.internal openshift-cluster-storage-operator 42m Normal Scheduled pod/csi-snapshot-webhook-75476bf784-sfhhx Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-75476bf784-sfhhx to ip-10-0-197-197.ec2.internal openshift-cluster-storage-operator 32m Normal Scheduled pod/csi-snapshot-webhook-75476bf784-bhnwx Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-75476bf784-bhnwx to ip-10-0-140-6.ec2.internal openshift-cluster-storage-operator 59m Normal Scheduled pod/csi-snapshot-webhook-75476bf784-7z4rl Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-75476bf784-7z4rl to ip-10-0-140-6.ec2.internal openshift-apiserver 50m Warning FailedScheduling pod/apiserver-7475f65d84-4ncn2 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules.. 
openshift-cluster-storage-operator 59m Normal Scheduled pod/csi-snapshot-webhook-75476bf784-7vh6f Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-75476bf784-7vh6f to ip-10-0-239-132.ec2.internal
openshift-authentication 48m Normal Scheduled pod/oauth-openshift-86966797f8-sbdp5 Successfully assigned openshift-authentication/oauth-openshift-86966797f8-sbdp5 to ip-10-0-239-132.ec2.internal
openshift-authentication 42m Warning FailedScheduling pod/oauth-openshift-86966797f8-vtzkz 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-kube-apiserver-operator 62m Warning FailedScheduling pod/kube-apiserver-operator-79b598d5b4-dqp95 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-kube-apiserver-operator 59m Normal Scheduled pod/kube-apiserver-operator-79b598d5b4-dqp95 Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-79b598d5b4-dqp95 to ip-10-0-197-197.ec2.internal
openshift-authentication 42m Warning FailedScheduling pod/oauth-openshift-86966797f8-vtzkz 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-kube-apiserver-operator 32m Normal Scheduled pod/kube-apiserver-operator-79b598d5b4-rm6pd Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-79b598d5b4-rm6pd to ip-10-0-140-6.ec2.internal
openshift-monitoring 42m Warning FailedScheduling pod/thanos-querier-7bbf5b5dcd-fvmbq 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-monitoring 42m Warning FailedScheduling pod/thanos-querier-7bbf5b5dcd-fvmbq 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-oauth-apiserver 57m Normal Scheduled pod/apiserver-89645c77-26sj6 Successfully assigned openshift-oauth-apiserver/apiserver-89645c77-26sj6 to ip-10-0-197-197.ec2.internal
openshift-oauth-apiserver 37m Normal Scheduled pod/apiserver-74455c7c5-tqs7k Successfully assigned openshift-oauth-apiserver/apiserver-74455c7c5-tqs7k to ip-10-0-239-132.ec2.internal
openshift-monitoring 39m Warning FailedScheduling pod/thanos-querier-7bbf5b5dcd-fvmbq 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-cluster-storage-operator 59m Normal Scheduled pod/csi-snapshot-controller-operator-c9586b974-wk85s Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-c9586b974-wk85s to ip-10-0-197-197.ec2.internal
openshift-cluster-storage-operator 62m Warning FailedScheduling pod/csi-snapshot-controller-operator-c9586b974-wk85s 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-monitoring 39m Warning FailedScheduling pod/thanos-querier-7bbf5b5dcd-fvmbq 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 39m Warning FailedScheduling pod/thanos-querier-7bbf5b5dcd-fvmbq 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 1 node(s) didn't match pod anti-affinity rules, 6 Preemption is not helpful for scheduling..
openshift-monitoring 52m Normal Scheduled pod/thanos-querier-7bbf5b5dcd-nrjft Successfully assigned openshift-monitoring/thanos-querier-7bbf5b5dcd-nrjft to ip-10-0-160-152.ec2.internal
openshift-cluster-storage-operator 32m Normal Scheduled pod/csi-snapshot-controller-operator-c9586b974-k2tdv Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-c9586b974-k2tdv to ip-10-0-239-132.ec2.internal
openshift-oauth-apiserver 37m Warning FailedScheduling pod/apiserver-74455c7c5-tqs7k 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver Normal TerminationGracefulTerminationFinished pod/apiserver-6977bc9f6b-wgtnw All pending requests processed
openshift-oauth-apiserver 37m Warning FailedScheduling pod/apiserver-74455c7c5-tqs7k 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-cluster-storage-operator 37m Normal Scheduled pod/csi-snapshot-controller-f58c44499-xkth2 Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-f58c44499-xkth2 to ip-10-0-239-132.ec2.internal
openshift-cluster-storage-operator 32m Normal Scheduled pod/csi-snapshot-controller-f58c44499-svdlt Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-f58c44499-svdlt to ip-10-0-140-6.ec2.internal
openshift-cluster-storage-operator 42m Normal Scheduled pod/csi-snapshot-controller-f58c44499-rnqw9 Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-f58c44499-rnqw9 to ip-10-0-197-197.ec2.internal
openshift-cluster-storage-operator 59m Normal Scheduled pod/csi-snapshot-controller-f58c44499-qvgsh Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-f58c44499-qvgsh to ip-10-0-239-132.ec2.internal
openshift-cluster-storage-operator 59m Normal Scheduled pod/csi-snapshot-controller-f58c44499-k4v7v Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-f58c44499-k4v7v to ip-10-0-140-6.ec2.internal
openshift-monitoring 42m Normal Scheduled pod/token-refresher-5dbcf88876-cbn8j Successfully assigned openshift-monitoring/token-refresher-5dbcf88876-cbn8j to ip-10-0-232-8.ec2.internal
openshift-monitoring 37m Normal Scheduled pod/token-refresher-5dbcf88876-hfhjz Successfully assigned openshift-monitoring/token-refresher-5dbcf88876-hfhjz to ip-10-0-160-152.ec2.internal
openshift-multus 61m Normal Scheduled pod/multus-486wq Successfully assigned openshift-multus/multus-486wq to ip-10-0-197-197.ec2.internal
openshift-authentication 52m Warning FailedScheduling pod/oauth-openshift-cf968c599-9vrxf 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-multus 61m Normal Scheduled pod/multus-7x2mr Successfully assigned openshift-multus/multus-7x2mr to ip-10-0-140-6.ec2.internal
openshift-apiserver Normal TerminationPreShutdownHooksFinished pod/apiserver-6977bc9f6b-wgtnw All pre-shutdown hooks have been finished
openshift-apiserver Normal TerminationStoppedServing pod/apiserver-6977bc9f6b-wgtnw Server has stopped listening
openshift-apiserver Normal TerminationMinimalShutdownDurationFinished pod/apiserver-6977bc9f6b-wgtnw The minimal shutdown duration of 15s finished
openshift-authentication 52m Warning FailedScheduling pod/oauth-openshift-cf968c599-9vrxf 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-authentication 52m Normal Scheduled pod/oauth-openshift-cf968c599-9vrxf Successfully assigned openshift-authentication/oauth-openshift-cf968c599-9vrxf to ip-10-0-239-132.ec2.internal
openshift-apiserver Normal TerminationStart pod/apiserver-6977bc9f6b-wgtnw Received signal to terminate, becoming unready, but keeping serving
openshift-authentication 52m Warning FailedScheduling pod/oauth-openshift-cf968c599-ffkkn 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-multus 40m Normal Scheduled pod/multus-additional-cni-plugins-4qmk6 Successfully assigned openshift-multus/multus-additional-cni-plugins-4qmk6 to ip-10-0-187-75.ec2.internal
openshift-multus 61m Normal Scheduled pod/multus-additional-cni-plugins-b2lhx Successfully assigned openshift-multus/multus-additional-cni-plugins-b2lhx to ip-10-0-140-6.ec2.internal
openshift-authentication 51m Normal Scheduled pod/oauth-openshift-cf968c599-ffkkn Successfully assigned openshift-authentication/oauth-openshift-cf968c599-ffkkn to ip-10-0-197-197.ec2.internal
openshift-authentication 50m Warning FailedScheduling pod/oauth-openshift-cf968c599-kskc6 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-authentication 50m Warning FailedScheduling pod/oauth-openshift-cf968c599-kskc6 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-cluster-storage-operator 32m Normal Scheduled pod/cluster-storage-operator-fb5868667-wn4n8 Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-fb5868667-wn4n8 to ip-10-0-239-132.ec2.internal
openshift-authentication 50m Normal Scheduled pod/oauth-openshift-cf968c599-kskc6 Successfully assigned openshift-authentication/oauth-openshift-cf968c599-kskc6 to ip-10-0-140-6.ec2.internal
openshift-cluster-storage-operator 59m Normal Scheduled pod/cluster-storage-operator-fb5868667-cclnx Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-fb5868667-cclnx to ip-10-0-197-197.ec2.internal
openshift-cluster-storage-operator 62m Warning FailedScheduling pod/cluster-storage-operator-fb5868667-cclnx 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-oauth-apiserver 37m Warning FailedScheduling pod/apiserver-74455c7c5-tqs7k 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-kube-apiserver Warning KubeAPIReadyz pod/kube-apiserver-ip-10-0-197-197.ec2.internal readyz=true
openshift-kube-apiserver Warning KubeAPIReadyz pod/kube-apiserver-ip-10-0-239-132.ec2.internal readyz=true
openshift-oauth-apiserver 39m Warning FailedScheduling pod/apiserver-74455c7c5-tqs7k 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
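Most of the Warning rows above are FailedScheduling events. If you only want that slice of the stream, a field selector on the event reason narrows it down; this is an illustrative command, not part of the captured output, and kubectl can be substituted for oc:

    oc get events -A --field-selector reason=FailedScheduling --sort-by=.lastTimestamp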
openshift-cluster-samples-operator 57m Normal Scheduled pod/cluster-samples-operator-bf9b9498c-mkgcp Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-bf9b9498c-mkgcp to ip-10-0-140-6.ec2.internal
openshift-oauth-apiserver 39m Warning FailedScheduling pod/apiserver-74455c7c5-tqs7k 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-oauth-apiserver 39m Warning FailedScheduling pod/apiserver-74455c7c5-tqs7k 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-oauth-apiserver 39m Warning FailedScheduling pod/apiserver-74455c7c5-tqs7k 0/6 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/6 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-apiserver 55m Normal Scheduled pod/apiserver-6977bc9f6b-wgtnw Successfully assigned openshift-apiserver/apiserver-6977bc9f6b-wgtnw to ip-10-0-140-6.ec2.internal
openshift-apiserver 55m Warning FailedScheduling pod/apiserver-6977bc9f6b-wgtnw 0/5 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-apiserver 55m Warning FailedScheduling pod/apiserver-6977bc9f6b-wgtnw 0/5 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-apiserver 55m Warning FailedScheduling pod/apiserver-6977bc9f6b-wgtnw 0/5 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-apiserver 55m Warning FailedScheduling pod/apiserver-6977bc9f6b-wgtnw 0/5 nodes are available: 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-apiserver Normal TerminationGracefulTerminationFinished pod/apiserver-6977bc9f6b-b9qrr All pending requests processed
openshift-cluster-samples-operator 37m Normal Scheduled pod/cluster-samples-operator-bf9b9498c-gn68l Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-bf9b9498c-gn68l to ip-10-0-239-132.ec2.internal
openshift-oauth-apiserver 40m Warning FailedScheduling pod/apiserver-74455c7c5-tqs7k 0/6 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/6 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 4 Preemption is not helpful for scheduling..
openshift-oauth-apiserver 40m Warning FailedScheduling pod/apiserver-74455c7c5-tqs7k 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-apiserver Normal TerminationPreShutdownHooksFinished pod/apiserver-6977bc9f6b-b9qrr All pre-shutdown hooks have been finished
openshift-apiserver Normal TerminationStoppedServing pod/apiserver-6977bc9f6b-b9qrr Server has stopped listening
openshift-apiserver Normal TerminationMinimalShutdownDurationFinished pod/apiserver-6977bc9f6b-b9qrr The minimal shutdown duration of 15s finished
openshift-cluster-node-tuning-operator 58m Normal Scheduled pod/tuned-zxj2p Successfully assigned openshift-cluster-node-tuning-operator/tuned-zxj2p to ip-10-0-140-6.ec2.internal
openshift-oauth-apiserver 41m Normal Scheduled pod/apiserver-74455c7c5-rpzl9 Successfully assigned openshift-oauth-apiserver/apiserver-74455c7c5-rpzl9 to ip-10-0-197-197.ec2.internal
openshift-multus 61m Normal Scheduled pod/multus-additional-cni-plugins-g7hvw Successfully assigned openshift-multus/multus-additional-cni-plugins-g7hvw to ip-10-0-239-132.ec2.internal
openshift-cluster-node-tuning-operator 58m Normal Scheduled pod/tuned-x9jkg Successfully assigned openshift-cluster-node-tuning-operator/tuned-x9jkg to ip-10-0-197-197.ec2.internal
openshift-oauth-apiserver 41m Warning FailedScheduling pod/apiserver-74455c7c5-rpzl9 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-apiserver Normal TerminationStart pod/apiserver-6977bc9f6b-b9qrr Received signal to terminate, becoming unready, but keeping serving
openshift-oauth-apiserver 42m Warning FailedScheduling pod/apiserver-74455c7c5-rpzl9 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-oauth-apiserver 32m Normal Scheduled pod/apiserver-74455c7c5-m45v9 Successfully assigned openshift-oauth-apiserver/apiserver-74455c7c5-m45v9 to ip-10-0-140-6.ec2.internal
openshift-cluster-node-tuning-operator 55m Warning FailedScheduling pod/tuned-t8kzn running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "tuned-t8kzn": pod tuned-t8kzn is already assigned to node "ip-10-0-160-152.ec2.internal"
openshift-cluster-node-tuning-operator 55m Normal Scheduled pod/tuned-t8kzn Successfully assigned openshift-cluster-node-tuning-operator/tuned-t8kzn to ip-10-0-160-152.ec2.internal
openshift-kube-apiserver Warning KubeAPIReadyz pod/kube-apiserver-ip-10-0-239-132.ec2.internal readyz=true
openshift-oauth-apiserver 34m Warning FailedScheduling pod/apiserver-74455c7c5-m45v9 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-oauth-apiserver 36m Warning FailedScheduling pod/apiserver-74455c7c5-m45v9 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-controller-manager 31m Warning FailedScheduling pod/controller-manager-66b447958d-w97xv 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-oauth-apiserver 36m Warning FailedScheduling pod/apiserver-74455c7c5-m45v9 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-oauth-apiserver 40m Normal Scheduled pod/apiserver-74455c7c5-h9ck5 Successfully assigned openshift-oauth-apiserver/apiserver-74455c7c5-h9ck5 to ip-10-0-140-6.ec2.internal
openshift-oauth-apiserver 41m Warning FailedScheduling pod/apiserver-74455c7c5-h9ck5 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-backplane-srep 37m Normal Scheduled pod/osd-delete-ownerrefs-serviceaccounts-27990037-cdprm Successfully assigned openshift-backplane-srep/osd-delete-ownerrefs-serviceaccounts-27990037-cdprm to ip-10-0-160-152.ec2.internal
openshift-cluster-node-tuning-operator 58m Normal Scheduled pod/tuned-pbkvf Successfully assigned openshift-cluster-node-tuning-operator/tuned-pbkvf to ip-10-0-239-132.ec2.internal
openshift-oauth-apiserver 41m Warning FailedScheduling pod/apiserver-74455c7c5-h9ck5 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-oauth-apiserver 28m Normal Scheduled pod/apiserver-74455c7c5-6zb4s Successfully assigned openshift-oauth-apiserver/apiserver-74455c7c5-6zb4s to ip-10-0-197-197.ec2.internal
openshift-oauth-apiserver 31m Warning FailedScheduling pod/apiserver-74455c7c5-6zb4s 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-oauth-apiserver 31m Warning FailedScheduling pod/apiserver-74455c7c5-6zb4s 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver 54m Warning FailedScheduling pod/apiserver-6977bc9f6b-b9qrr running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "apiserver-6977bc9f6b-b9qrr": pod apiserver-6977bc9f6b-b9qrr is already assigned to node "ip-10-0-239-132.ec2.internal"
openshift-apiserver 54m Normal Scheduled pod/apiserver-6977bc9f6b-b9qrr Successfully assigned openshift-apiserver/apiserver-6977bc9f6b-b9qrr to ip-10-0-239-132.ec2.internal
openshift-apiserver 55m Warning FailedScheduling pod/apiserver-6977bc9f6b-b9qrr 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-apiserver 55m Warning FailedScheduling pod/apiserver-6977bc9f6b-b9qrr 0/5 nodes are available: 2 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-apiserver 55m Warning FailedScheduling pod/apiserver-6977bc9f6b-b9qrr 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-cluster-node-tuning-operator 39m Normal Scheduled pod/tuned-nhvkp Successfully assigned openshift-cluster-node-tuning-operator/tuned-nhvkp to ip-10-0-195-121.ec2.internal
openshift-apiserver 55m Warning FailedScheduling pod/apiserver-6977bc9f6b-b9qrr 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/5 nodes are available: 2 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-apiserver Normal TerminationGracefulTerminationFinished pod/apiserver-6977bc9f6b-6c47k All pending requests processed
openshift-oauth-apiserver 32m Warning FailedScheduling pod/apiserver-74455c7c5-6zb4s 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-multus 61m Normal Scheduled pod/multus-additional-cni-plugins-hg7bc Successfully assigned openshift-multus/multus-additional-cni-plugins-hg7bc to ip-10-0-197-197.ec2.internal
openshift-oauth-apiserver 32m Warning FailedScheduling pod/apiserver-74455c7c5-6zb4s 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver Normal TerminationPreShutdownHooksFinished pod/apiserver-6977bc9f6b-6c47k All pre-shutdown hooks have been finished
openshift-cluster-node-tuning-operator 40m Normal Scheduled pod/tuned-9gtgt Successfully assigned openshift-cluster-node-tuning-operator/tuned-9gtgt to ip-10-0-187-75.ec2.internal
openshift-apiserver Normal TerminationStoppedServing pod/apiserver-6977bc9f6b-6c47k Server has stopped listening
openshift-apiserver Normal TerminationMinimalShutdownDurationFinished pod/apiserver-6977bc9f6b-6c47k The minimal shutdown duration of 15s finished
openshift-backplane-srep 7m5s Normal Scheduled pod/osd-delete-ownerrefs-serviceaccounts-27990067-rbrjk Successfully assigned openshift-backplane-srep/osd-delete-ownerrefs-serviceaccounts-27990067-rbrjk to ip-10-0-187-75.ec2.internal
openshift-apiserver Normal TerminationStart pod/apiserver-6977bc9f6b-6c47k Received signal to terminate, becoming unready, but keeping serving
openshift-kube-apiserver Normal ShutdownInitiated pod/kube-apiserver-ip-10-0-239-132.ec2.internal Received signal to terminate, becoming unready, but keeping serving
openshift-cluster-node-tuning-operator 55m Warning FailedScheduling pod/tuned-5mn5s running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "tuned-5mn5s": pod tuned-5mn5s is already assigned to node "ip-10-0-232-8.ec2.internal"
openshift-cluster-node-tuning-operator 55m Normal Scheduled pod/tuned-5mn5s Successfully assigned openshift-cluster-node-tuning-operator/tuned-5mn5s to ip-10-0-232-8.ec2.internal
openshift-kube-apiserver Normal TerminationPreShutdownHooksFinished pod/kube-apiserver-ip-10-0-239-132.ec2.internal All pre-shutdown hooks have been finished
openshift-kube-apiserver Normal AfterShutdownDelayDuration pod/kube-apiserver-ip-10-0-239-132.ec2.internal The minimal shutdown duration of 2m9s finished
openshift-kube-apiserver Normal InFlightRequestsDrained pod/kube-apiserver-ip-10-0-239-132.ec2.internal All non long-running request(s) in-flight have drained
openshift-cluster-node-tuning-operator 32m Normal Scheduled pod/cluster-node-tuning-operator-5886c76fd4-cntr6 Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-5886c76fd4-cntr6 to ip-10-0-140-6.ec2.internal
openshift-kube-controller-manager-operator 59m Normal Scheduled pod/kube-controller-manager-operator-655bd6977c-z9mb9 Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-655bd6977c-z9mb9 to ip-10-0-197-197.ec2.internal
openshift-kube-controller-manager-operator 61m Warning FailedScheduling pod/kube-controller-manager-operator-655bd6977c-z9mb9 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-cluster-node-tuning-operator 59m Normal Scheduled pod/cluster-node-tuning-operator-5886c76fd4-7qpt5 Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-5886c76fd4-7qpt5 to ip-10-0-197-197.ec2.internal
openshift-cluster-node-tuning-operator 62m Warning FailedScheduling pod/cluster-node-tuning-operator-5886c76fd4-7qpt5 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-kube-apiserver Normal HTTPServerStoppedListening pod/kube-apiserver-ip-10-0-239-132.ec2.internal HTTP Server has stopped listening
openshift-kube-apiserver Normal TerminationGracefulTerminationFinished pod/kube-apiserver-ip-10-0-239-132.ec2.internal All pending requests processed
openshift-kube-controller-manager-operator 32m Normal Scheduled pod/kube-controller-manager-operator-655bd6977c-27c5p Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-655bd6977c-27c5p to ip-10-0-140-6.ec2.internal
openshift-cluster-machine-approver 32m Normal Scheduled pod/machine-approver-5cd47987c9-xkqd2 Successfully assigned openshift-cluster-machine-approver/machine-approver-5cd47987c9-xkqd2 to ip-10-0-239-132.ec2.internal
openshift-cluster-machine-approver 59m Normal Scheduled pod/machine-approver-5cd47987c9-96cvq Successfully assigned openshift-cluster-machine-approver/machine-approver-5cd47987c9-96cvq to ip-10-0-197-197.ec2.internal
openshift-cluster-machine-approver 62m Warning FailedScheduling pod/machine-approver-5cd47987c9-96cvq 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
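The ShutdownInitiated, AfterShutdownDelayDuration, InFlightRequestsDrained, HTTPServerStoppedListening, and TerminationGracefulTerminationFinished rows for kube-apiserver-ip-10-0-239-132.ec2.internal trace an orderly shutdown of that kube-apiserver static pod rather than a failure. To pull just one pod's events out of the dump, filtering on the involved object name works (illustrative command, assuming the events are still within retention):

    oc get events -n openshift-kube-apiserver --field-selector involvedObject.name=kube-apiserver-ip-10-0-239-132.ec2.internal --sort-by=.metadata.creationTimestamp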
openshift-multus 55m Normal Scheduled pod/multus-additional-cni-plugins-j5mgq Successfully assigned openshift-multus/multus-additional-cni-plugins-j5mgq to ip-10-0-160-152.ec2.internal
openshift-multus 55m Warning FailedScheduling pod/multus-additional-cni-plugins-j5mgq running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "multus-additional-cni-plugins-j5mgq": pod multus-additional-cni-plugins-j5mgq is already assigned to node "ip-10-0-160-152.ec2.internal"
openshift-cluster-csi-drivers 59m Normal Scheduled pod/aws-ebs-csi-driver-operator-667bfc499d-pjs9d Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-operator-667bfc499d-pjs9d to ip-10-0-140-6.ec2.internal
openshift-cluster-csi-drivers 37m Normal Scheduled pod/aws-ebs-csi-driver-operator-667bfc499d-7fmff Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-operator-667bfc499d-7fmff to ip-10-0-239-132.ec2.internal
openshift-cluster-csi-drivers 59m Normal Scheduled pod/aws-ebs-csi-driver-node-zcbkq Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-zcbkq to ip-10-0-140-6.ec2.internal
openshift-multus 55m Normal Scheduled pod/multus-additional-cni-plugins-l7zm7 Successfully assigned openshift-multus/multus-additional-cni-plugins-l7zm7 to ip-10-0-232-8.ec2.internal
openshift-multus 55m Warning FailedScheduling pod/multus-additional-cni-plugins-l7zm7 running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "multus-additional-cni-plugins-l7zm7": pod multus-additional-cni-plugins-l7zm7 is already assigned to node "ip-10-0-232-8.ec2.internal"
openshift-cluster-csi-drivers 59m Normal Scheduled pod/aws-ebs-csi-driver-node-ts9mc Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-ts9mc to ip-10-0-239-132.ec2.internal
openshift-multus 39m Normal Scheduled pod/multus-additional-cni-plugins-x8r6f Successfully assigned openshift-multus/multus-additional-cni-plugins-x8r6f to ip-10-0-195-121.ec2.internal
openshift-apiserver 55m Normal Scheduled pod/apiserver-6977bc9f6b-6c47k Successfully assigned openshift-apiserver/apiserver-6977bc9f6b-6c47k to ip-10-0-197-197.ec2.internal
openshift-apiserver 55m Warning FailedScheduling pod/apiserver-6977bc9f6b-6c47k 0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 3 node(s) didn't match pod anti-affinity rules. preemption: 0/4 nodes are available: 1 Preemption is not helpful for scheduling, 3 node(s) didn't match pod anti-affinity rules..
openshift-apiserver 55m Warning FailedScheduling pod/apiserver-6977bc9f6b-6c47k 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-apiserver 55m Warning FailedScheduling pod/apiserver-6977bc9f6b-6c47k 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-apiserver 55m Warning FailedScheduling pod/apiserver-6977bc9f6b-6c47k 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-apiserver 55m Warning FailedScheduling pod/apiserver-6977bc9f6b-6c47k 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 3 node(s) didn't match pod anti-affinity rules..
openshift-operator-lifecycle-manager 32m Normal Scheduled pod/olm-operator-647f89bf4f-bl8lz Successfully assigned openshift-operator-lifecycle-manager/olm-operator-647f89bf4f-bl8lz to ip-10-0-239-132.ec2.internal
openshift-operator-lifecycle-manager 62m Warning FailedScheduling pod/olm-operator-647f89bf4f-rgnx9 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-operator-lifecycle-manager 59m Normal Scheduled pod/olm-operator-647f89bf4f-rgnx9 Successfully assigned openshift-operator-lifecycle-manager/olm-operator-647f89bf4f-rgnx9 to ip-10-0-197-197.ec2.internal
openshift-network-operator 61m Normal Scheduled pod/network-operator-6c9d58d76b-pl9td Successfully assigned openshift-network-operator/network-operator-6c9d58d76b-pl9td to ip-10-0-239-132.ec2.internal
openshift-ovn-kubernetes 61m Normal Scheduled pod/ovnkube-master-w7545 Successfully assigned openshift-ovn-kubernetes/ovnkube-master-w7545 to ip-10-0-140-6.ec2.internal
openshift-multus 57m Normal Scheduled pod/multus-admission-controller-6896747cbb-ljc49 Successfully assigned openshift-multus/multus-admission-controller-6896747cbb-ljc49 to ip-10-0-140-6.ec2.internal
openshift-cluster-csi-drivers 40m Normal Scheduled pod/aws-ebs-csi-driver-node-s4chb Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-s4chb to ip-10-0-187-75.ec2.internal
openshift-multus 57m Normal Scheduled pod/multus-admission-controller-6896747cbb-rlm9s Successfully assigned openshift-multus/multus-admission-controller-6896747cbb-rlm9s to ip-10-0-197-197.ec2.internal
openshift-multus 61m Warning FailedScheduling pod/multus-admission-controller-6f95d97cb6-7wv72 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
openshift-multus 59m Normal Scheduled pod/multus-admission-controller-6f95d97cb6-7wv72 Successfully assigned openshift-multus/multus-admission-controller-6f95d97cb6-7wv72 to ip-10-0-197-197.ec2.internal
openshift-kube-scheduler-operator 32m Normal Scheduled pod/openshift-kube-scheduler-operator-c98d57874-t6vzp Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-c98d57874-t6vzp to ip-10-0-140-6.ec2.internal
openshift-cluster-csi-drivers 39m Normal Scheduled pod/aws-ebs-csi-driver-node-r2n4w Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-r2n4w to ip-10-0-195-121.ec2.internal
openshift-multus 61m Warning FailedScheduling pod/multus-admission-controller-6f95d97cb6-x5s87 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
openshift-multus 59m Normal Scheduled pod/multus-admission-controller-6f95d97cb6-x5s87 Successfully assigned openshift-multus/multus-admission-controller-6f95d97cb6-x5s87 to ip-10-0-197-197.ec2.internal
openshift-cloud-controller-manager-operator 62m Normal Scheduled pod/cluster-cloud-controller-manager-operator-5dcbbcf757-fqvtw Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5dcbbcf757-fqvtw to ip-10-0-239-132.ec2.internal
openshift-operator-lifecycle-manager 32m Normal Scheduled pod/package-server-manager-fc98f8f64-h9b5w Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-fc98f8f64-h9b5w to ip-10-0-140-6.ec2.internal
openshift-cluster-csi-drivers 59m Normal Scheduled pod/aws-ebs-csi-driver-node-q9lmf Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-q9lmf to ip-10-0-197-197.ec2.internal
openshift-apiserver 32m Normal Scheduled pod/apiserver-5f568869f-wdslz Successfully assigned openshift-apiserver/apiserver-5f568869f-wdslz to ip-10-0-140-6.ec2.internal
openshift-apiserver 34m Warning FailedScheduling pod/apiserver-5f568869f-wdslz 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver 34m Warning FailedScheduling pod/apiserver-5f568869f-wdslz 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver 34m Warning FailedScheduling pod/apiserver-5f568869f-wdslz 0/7 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver Normal TerminationGracefulTerminationFinished pod/apiserver-5f568869f-mpswm All pending requests processed
openshift-operator-lifecycle-manager 62m Warning FailedScheduling pod/package-server-manager-fc98f8f64-l2df9 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-operator-lifecycle-manager 59m Normal Scheduled pod/package-server-manager-fc98f8f64-l2df9 Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-fc98f8f64-l2df9 to ip-10-0-197-197.ec2.internal
openshift-ovn-kubernetes 61m Normal Scheduled pod/ovnkube-master-l7mb9 Successfully assigned openshift-ovn-kubernetes/ovnkube-master-l7mb9 to ip-10-0-239-132.ec2.internal
openshift-cluster-csi-drivers 59m Normal Scheduled pod/aws-ebs-csi-driver-node-nznvd Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-nznvd to ip-10-0-197-197.ec2.internal
openshift-cluster-csi-drivers 59m Normal Scheduled pod/aws-ebs-csi-driver-node-lwrls Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-lwrls to ip-10-0-239-132.ec2.internal
openshift-apiserver 57m Normal Scheduled pod/apiserver-565b67b9f7-lvnhl Successfully assigned openshift-apiserver/apiserver-565b67b9f7-lvnhl to ip-10-0-239-132.ec2.internal
openshift-multus 42m Normal Scheduled pod/multus-admission-controller-757b6fbf74-5hdn7 Successfully assigned openshift-multus/multus-admission-controller-757b6fbf74-5hdn7 to ip-10-0-197-197.ec2.internal
openshift-multus 32m Normal Scheduled pod/multus-admission-controller-757b6fbf74-g2kdg Successfully assigned openshift-multus/multus-admission-controller-757b6fbf74-g2kdg to ip-10-0-140-6.ec2.internal
openshift-multus 36m Normal Scheduled pod/multus-admission-controller-757b6fbf74-hl64m Successfully assigned openshift-multus/multus-admission-controller-757b6fbf74-hl64m to ip-10-0-239-132.ec2.internal
openshift-multus 42m Normal Scheduled pod/multus-admission-controller-757b6fbf74-mz54v Successfully assigned openshift-multus/multus-admission-controller-757b6fbf74-mz54v to ip-10-0-140-6.ec2.internal
openshift-apiserver Normal TerminationPreShutdownHooksFinished pod/apiserver-5f568869f-mpswm All pre-shutdown hooks have been finished
openshift-multus 55m Normal Scheduled pod/multus-d7w6w Successfully assigned openshift-multus/multus-d7w6w to ip-10-0-160-152.ec2.internal
openshift-cluster-csi-drivers 55m Warning FailedScheduling pod/aws-ebs-csi-driver-node-8w5jv running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "aws-ebs-csi-driver-node-8w5jv": pod aws-ebs-csi-driver-node-8w5jv is already assigned to node "ip-10-0-232-8.ec2.internal"
openshift-cluster-csi-drivers 55m Normal Scheduled pod/aws-ebs-csi-driver-node-8w5jv Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8w5jv to ip-10-0-232-8.ec2.internal
openshift-cluster-csi-drivers 59m Normal Scheduled pod/aws-ebs-csi-driver-node-8l9r7 Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-8l9r7 to ip-10-0-140-6.ec2.internal
openshift-multus 55m Warning FailedScheduling pod/multus-d7w6w running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "multus-d7w6w": pod multus-d7w6w is already assigned to node "ip-10-0-160-152.ec2.internal"
openshift-multus 39m Normal Scheduled pod/multus-db5qv Successfully assigned openshift-multus/multus-db5qv to ip-10-0-195-121.ec2.internal
openshift-multus 61m Normal Scheduled pod/multus-kkqdt Successfully assigned openshift-multus/multus-kkqdt to ip-10-0-239-132.ec2.internal
openshift-kube-scheduler-operator 61m Warning FailedScheduling pod/openshift-kube-scheduler-operator-c98d57874-wj7tl 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
openshift-kube-scheduler-operator 59m Normal Scheduled pod/openshift-kube-scheduler-operator-c98d57874-wj7tl Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-c98d57874-wj7tl to ip-10-0-197-197.ec2.internal
openshift-apiserver Normal TerminationStart pod/apiserver-565b67b9f7-lvnhl Received signal to terminate, becoming unready, but keeping serving
openshift-apiserver Normal TerminationStoppedServing pod/apiserver-5f568869f-mpswm Server has stopped listening
openshift-apiserver Normal TerminationMinimalShutdownDurationFinished pod/apiserver-5f568869f-mpswm The minimal shutdown duration of 15s finished
openshift-apiserver Normal TerminationMinimalShutdownDurationFinished pod/apiserver-565b67b9f7-lvnhl The minimal shutdown duration of 15s finished
openshift-apiserver Normal TerminationStoppedServing pod/apiserver-565b67b9f7-lvnhl Server has stopped listening
openshift-apiserver Normal TerminationPreShutdownHooksFinished pod/apiserver-565b67b9f7-lvnhl All pre-shutdown hooks have been finished
openshift-apiserver Normal TerminationGracefulTerminationFinished pod/apiserver-565b67b9f7-lvnhl All pending requests processed
openshift-apiserver 57m Normal Scheduled pod/apiserver-565b67b9f7-w2dv2 Successfully assigned openshift-apiserver/apiserver-565b67b9f7-w2dv2 to ip-10-0-197-197.ec2.internal
openshift-apiserver Normal TerminationStart pod/apiserver-5f568869f-mpswm Received signal to terminate, becoming unready, but keeping serving
openshift-operator-lifecycle-manager 36m Normal Scheduled pod/packageserver-7c998868c6-ctgf5 Successfully assigned openshift-operator-lifecycle-manager/packageserver-7c998868c6-ctgf5 to ip-10-0-239-132.ec2.internal
openshift-multus 40m Normal Scheduled pod/multus-xqcfd Successfully assigned openshift-multus/multus-xqcfd to ip-10-0-187-75.ec2.internal
openshift-cluster-csi-drivers 55m Warning FailedScheduling pod/aws-ebs-csi-driver-node-2p86w running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "aws-ebs-csi-driver-node-2p86w": pod aws-ebs-csi-driver-node-2p86w is already assigned to node "ip-10-0-160-152.ec2.internal"
openshift-cluster-csi-drivers 55m Normal Scheduled pod/aws-ebs-csi-driver-node-2p86w Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-2p86w to ip-10-0-160-152.ec2.internal
openshift-cluster-csi-drivers 59m Normal Scheduled pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr to ip-10-0-140-6.ec2.internal
openshift-cluster-csi-drivers 59m Normal Scheduled pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd to ip-10-0-239-132.ec2.internal
openshift-multus 55m Normal Scheduled pod/multus-ztsxl Successfully assigned openshift-multus/multus-ztsxl to ip-10-0-232-8.ec2.internal
openshift-multus 55m Warning FailedScheduling pod/multus-ztsxl running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "multus-ztsxl": pod multus-ztsxl is already assigned to node "ip-10-0-232-8.ec2.internal"
openshift-operator-lifecycle-manager 32m Normal Scheduled pod/packageserver-7c998868c6-fzz2h Successfully assigned openshift-operator-lifecycle-manager/packageserver-7c998868c6-fzz2h to ip-10-0-140-6.ec2.internal
openshift-cluster-csi-drivers 59m Normal Scheduled pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp to ip-10-0-197-197.ec2.internal
openshift-operator-lifecycle-manager 59m Normal Scheduled pod/packageserver-7c998868c6-mxs6q Successfully assigned openshift-operator-lifecycle-manager/packageserver-7c998868c6-mxs6q to ip-10-0-239-132.ec2.internal
openshift-network-operator 42m Normal Scheduled pod/network-operator-6c9d58d76b-m2fjb Successfully assigned openshift-network-operator/network-operator-6c9d58d76b-m2fjb to ip-10-0-140-6.ec2.internal
openshift-apiserver Normal TerminationStart pod/apiserver-565b67b9f7-w2dv2 Received signal to terminate, becoming unready, but keeping serving
openshift-multus 55m Normal Scheduled pod/network-metrics-daemon-74bvc Successfully assigned openshift-multus/network-metrics-daemon-74bvc to ip-10-0-160-152.ec2.internal
openshift-kube-apiserver Warning KubeAPIReadyz pod/kube-apiserver-ip-10-0-239-132.ec2.internal readyz=true
openshift-apiserver Normal TerminationMinimalShutdownDurationFinished pod/apiserver-565b67b9f7-w2dv2 The minimal shutdown duration of 15s finished
openshift-apiserver Normal TerminationStoppedServing pod/apiserver-565b67b9f7-w2dv2 Server has stopped listening
openshift-apiserver Normal TerminationPreShutdownHooksFinished pod/apiserver-565b67b9f7-w2dv2 All pre-shutdown hooks have been finished
openshift-multus 55m Warning FailedScheduling pod/network-metrics-daemon-74bvc running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "network-metrics-daemon-74bvc": pod network-metrics-daemon-74bvc is already assigned to node "ip-10-0-160-152.ec2.internal"
openshift-multus 61m Normal Scheduled pod/network-metrics-daemon-7vpmf Successfully assigned openshift-multus/network-metrics-daemon-7vpmf to ip-10-0-239-132.ec2.internal
openshift-apiserver Normal TerminationGracefulTerminationFinished pod/apiserver-565b67b9f7-w2dv2 All pending requests processed
openshift-apiserver 57m Normal Scheduled pod/apiserver-565b67b9f7-wvhp4 Successfully assigned openshift-apiserver/apiserver-565b67b9f7-wvhp4 to ip-10-0-140-6.ec2.internal
openshift-multus 61m Normal Scheduled pod/network-metrics-daemon-9gx7g Successfully assigned openshift-multus/network-metrics-daemon-9gx7g to ip-10-0-197-197.ec2.internal
openshift-cluster-csi-drivers 32m Normal Scheduled pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk to ip-10-0-140-6.ec2.internal
openshift-multus 55m Normal Scheduled pod/network-metrics-daemon-f6tv8 Successfully assigned openshift-multus/network-metrics-daemon-f6tv8 to ip-10-0-232-8.ec2.internal
openshift-multus 55m Warning FailedScheduling pod/network-metrics-daemon-f6tv8 running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "network-metrics-daemon-f6tv8": pod network-metrics-daemon-f6tv8 is already assigned to node "ip-10-0-232-8.ec2.internal"
openshift-multus 40m Normal Scheduled pod/network-metrics-daemon-lbxjr Successfully assigned openshift-multus/network-metrics-daemon-lbxjr to ip-10-0-187-75.ec2.internal
openshift-multus 39m Normal Scheduled pod/network-metrics-daemon-qfgm8 Successfully assigned openshift-multus/network-metrics-daemon-qfgm8 to ip-10-0-195-121.ec2.internal
openshift-multus 61m Normal Scheduled pod/network-metrics-daemon-v6lsv Successfully assigned openshift-multus/network-metrics-daemon-v6lsv to ip-10-0-140-6.ec2.internal
openshift-ingress 58m Warning FailedScheduling pod/router-default-699d8c97f-6nwwk 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
openshift-network-operator 36m Normal Scheduled pod/network-operator-6c9d58d76b-b79jx Successfully assigned openshift-network-operator/network-operator-6c9d58d76b-b79jx to ip-10-0-239-132.ec2.internal
openshift-cloud-controller-manager-operator 42m Normal Scheduled pod/cluster-cloud-controller-manager-operator-5dcbbcf757-wggmm Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5dcbbcf757-wggmm to ip-10-0-140-6.ec2.internal
openshift-network-diagnostics 37m Normal Scheduled pod/network-check-source-677bdb7d9-2tx2m Successfully assigned openshift-network-diagnostics/network-check-source-677bdb7d9-2tx2m to ip-10-0-160-152.ec2.internal
openshift-cluster-csi-drivers 37m Normal Scheduled pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z to ip-10-0-239-132.ec2.internal
openshift-network-diagnostics 61m Warning FailedScheduling pod/network-check-source-677bdb7d9-4sw4t 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
openshift-apiserver 41m Normal Scheduled pod/apiserver-5f568869f-mpswm Successfully assigned openshift-apiserver/apiserver-5f568869f-mpswm to ip-10-0-140-6.ec2.internal
openshift-apiserver Normal TerminationStart pod/apiserver-565b67b9f7-wvhp4 Received signal to terminate, becoming unready, but keeping serving
openshift-apiserver 41m Warning FailedScheduling pod/apiserver-5f568869f-mpswm 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-apiserver 42m Warning FailedScheduling pod/apiserver-5f568869f-mpswm 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-apiserver Normal TerminationMinimalShutdownDurationFinished pod/apiserver-565b67b9f7-wvhp4 The minimal shutdown duration of 15s finished
openshift-apiserver Normal TerminationStoppedServing pod/apiserver-565b67b9f7-wvhp4 Server has stopped listening
openshift-apiserver Normal TerminationPreShutdownHooksFinished pod/apiserver-565b67b9f7-wvhp4 All pre-shutdown hooks have been finished
openshift-network-diagnostics 57m Warning FailedScheduling pod/network-check-source-677bdb7d9-4sw4t 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
openshift-network-diagnostics 56m Warning FailedScheduling pod/network-check-source-677bdb7d9-4sw4t 0/3 nodes are available: 3 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
openshift-apiserver Normal TerminationGracefulTerminationFinished pod/apiserver-565b67b9f7-wvhp4 All pending requests processed
openshift-network-diagnostics 55m Normal Scheduled pod/network-check-source-677bdb7d9-4sw4t Successfully assigned openshift-network-diagnostics/network-check-source-677bdb7d9-4sw4t to ip-10-0-160-152.ec2.internal
openshift-cluster-csi-drivers 58m Normal Scheduled pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Successfully assigned openshift-cluster-csi-drivers/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 to ip-10-0-140-6.ec2.internal
openshift-operator-lifecycle-manager 59m Normal Scheduled pod/packageserver-7c998868c6-vtkkk Successfully assigned openshift-operator-lifecycle-manager/packageserver-7c998868c6-vtkkk to ip-10-0-140-6.ec2.internal
openshift-network-operator 61m Normal Scheduled pod/mtu-prober-m487h Successfully assigned openshift-network-operator/mtu-prober-m487h to ip-10-0-239-132.ec2.internal
openshift-cloud-network-config-controller 59m Normal Scheduled pod/cloud-network-config-controller-7cc55b87d4-drl56 Successfully assigned openshift-cloud-network-config-controller/cloud-network-config-controller-7cc55b87d4-drl56 to ip-10-0-197-197.ec2.internal
openshift-cloud-network-config-controller 61m Warning FailedScheduling pod/cloud-network-config-controller-7cc55b87d4-drl56 0/3 nodes are available: 3 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
openshift-apiserver 41m Warning FailedScheduling pod/apiserver-5f568869f-8zhkc 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-apiserver 41m Warning FailedScheduling pod/apiserver-5f568869f-8zhkc 0/5 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/5 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 3 Preemption is not helpful for scheduling..
openshift-apiserver 39m Warning FailedScheduling pod/apiserver-5f568869f-8zhkc 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver 39m Warning FailedScheduling pod/apiserver-5f568869f-8zhkc 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver 39m Warning FailedScheduling pod/apiserver-5f568869f-8zhkc 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver 37m Warning FailedScheduling pod/apiserver-5f568869f-8zhkc 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver 37m Warning FailedScheduling pod/apiserver-5f568869f-8zhkc 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver 37m Warning FailedScheduling pod/apiserver-5f568869f-8zhkc 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 1 node(s) had untolerated taint {node.kubernetes.io/unschedulable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver 37m Normal Scheduled pod/apiserver-5f568869f-8zhkc Successfully assigned openshift-apiserver/apiserver-5f568869f-8zhkc to ip-10-0-239-132.ec2.internal
openshift-network-diagnostics 42m Normal Scheduled pod/network-check-source-677bdb7d9-m9sqk Successfully assigned openshift-network-diagnostics/network-check-source-677bdb7d9-m9sqk to ip-10-0-232-8.ec2.internal
openshift-cloud-network-config-controller 32m Normal Scheduled pod/cloud-network-config-controller-7cc55b87d4-7wlrt Successfully assigned openshift-cloud-network-config-controller/cloud-network-config-controller-7cc55b87d4-7wlrt to ip-10-0-140-6.ec2.internal
openshift-operator-lifecycle-manager 42m Normal Scheduled pod/packageserver-7c998868c6-wnqfz Successfully assigned openshift-operator-lifecycle-manager/packageserver-7c998868c6-wnqfz to ip-10-0-197-197.ec2.internal
openshift-cloud-controller-manager-operator 37m Normal Scheduled pod/cluster-cloud-controller-manager-operator-5dcbbcf757-zfxcs Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-5dcbbcf757-zfxcs to ip-10-0-239-132.ec2.internal
openshift-network-diagnostics 55m Normal Scheduled pod/network-check-target-2799t Successfully assigned openshift-network-diagnostics/network-check-target-2799t to ip-10-0-232-8.ec2.internal
openshift-network-diagnostics 55m Warning FailedScheduling pod/network-check-target-2799t running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "network-check-target-2799t": pod network-check-target-2799t is already assigned to node "ip-10-0-232-8.ec2.internal"
openshift-cloud-credential-operator 53m Normal Scheduled pod/pod-identity-webhook-b645775d7-js8hv Successfully assigned openshift-cloud-credential-operator/pod-identity-webhook-b645775d7-js8hv to ip-10-0-239-132.ec2.internal
openshift-cloud-credential-operator 37m Normal Scheduled pod/pod-identity-webhook-b645775d7-jb5tx Successfully assigned openshift-cloud-credential-operator/pod-identity-webhook-b645775d7-jb5tx to ip-10-0-239-132.ec2.internal
openshift-network-diagnostics 61m Normal Scheduled pod/network-check-target-dvjbf Successfully assigned openshift-network-diagnostics/network-check-target-dvjbf to ip-10-0-197-197.ec2.internal
openshift-cloud-credential-operator 32m Normal Scheduled pod/pod-identity-webhook-b645775d7-cmgdm Successfully assigned openshift-cloud-credential-operator/pod-identity-webhook-b645775d7-cmgdm to ip-10-0-140-6.ec2.internal
openshift-network-diagnostics 61m Normal Scheduled pod/network-check-target-tmbg6 Successfully assigned openshift-network-diagnostics/network-check-target-tmbg6 to ip-10-0-140-6.ec2.internal
openshift-cloud-credential-operator 42m Normal Scheduled pod/pod-identity-webhook-b645775d7-bhp9j Successfully assigned openshift-cloud-credential-operator/pod-identity-webhook-b645775d7-bhp9j to ip-10-0-197-197.ec2.internal
openshift-cloud-credential-operator 53m Normal Scheduled pod/pod-identity-webhook-b645775d7-24tr2 Successfully assigned openshift-cloud-credential-operator/pod-identity-webhook-b645775d7-24tr2 to ip-10-0-140-6.ec2.internal
openshift-apiserver 36m Warning FailedScheduling pod/apiserver-5f568869f-b9bw5 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver 36m Warning FailedScheduling pod/apiserver-5f568869f-b9bw5 0/7 nodes are available: 1 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver 35m Normal Scheduled pod/apiserver-5f568869f-b9bw5 Successfully assigned openshift-apiserver/apiserver-5f568869f-b9bw5 to ip-10-0-197-197.ec2.internal
openshift-apiserver 28m Normal Scheduled pod/apiserver-5f568869f-kw7fx Successfully assigned openshift-apiserver/apiserver-5f568869f-kw7fx to ip-10-0-197-197.ec2.internal
openshift-apiserver 31m Warning FailedScheduling pod/apiserver-5f568869f-kw7fx 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver 31m Warning FailedScheduling pod/apiserver-5f568869f-kw7fx 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-apiserver 32m Warning FailedScheduling pod/apiserver-5f568869f-kw7fx 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling..
openshift-ovn-kubernetes 61m Normal Scheduled pod/ovnkube-master-kzdhz Successfully assigned openshift-ovn-kubernetes/ovnkube-master-kzdhz to ip-10-0-197-197.ec2.internal openshift-apiserver 32m Warning FailedScheduling pod/apiserver-5f568869f-kw7fx 0/7 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/infra: }, 2 node(s) didn't match Pod's node affinity/selector, 2 node(s) didn't match pod anti-affinity rules, 2 node(s) were unschedulable. preemption: 0/7 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 5 Preemption is not helpful for scheduling.. openshift-network-diagnostics 39m Normal Scheduled pod/network-check-target-trrh7 Successfully assigned openshift-network-diagnostics/network-check-target-trrh7 to ip-10-0-195-121.ec2.internal openshift-cloud-credential-operator 59m Normal Scheduled pod/cloud-credential-operator-7fffc6cb67-gkvnc Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-7fffc6cb67-gkvnc to ip-10-0-197-197.ec2.internal openshift-cloud-credential-operator 62m Warning FailedScheduling pod/cloud-credential-operator-7fffc6cb67-gkvnc 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.. openshift-network-diagnostics 40m Normal Scheduled pod/network-check-target-v468t Successfully assigned openshift-network-diagnostics/network-check-target-v468t to ip-10-0-187-75.ec2.internal openshift-network-diagnostics 61m Normal Scheduled pod/network-check-target-v92f6 Successfully assigned openshift-network-diagnostics/network-check-target-v92f6 to ip-10-0-239-132.ec2.internal openshift-cloud-credential-operator 32m Normal Scheduled pod/cloud-credential-operator-7fffc6cb67-29lts Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-7fffc6cb67-29lts to ip-10-0-140-6.ec2.internal openshift-apiserver Normal TerminationStart pod/apiserver-5f568869f-b9bw5 Received signal to terminate, becoming unready, but keeping serving openshift-apiserver Normal TerminationGracefulTerminationFinished pod/apiserver-5f568869f-b9bw5 All pending requests processed openshift-network-diagnostics 55m Normal Scheduled pod/network-check-target-w7m4g Successfully assigned openshift-network-diagnostics/network-check-target-w7m4g to ip-10-0-160-152.ec2.internal openshift-apiserver Normal TerminationMinimalShutdownDurationFinished pod/apiserver-5f568869f-b9bw5 The minimal shutdown duration of 15s finished openshift-apiserver Normal TerminationStoppedServing pod/apiserver-5f568869f-b9bw5 Server has stopped listening openshift-ingress-canary 55m Warning FailedScheduling pod/ingress-canary-2zk7z running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "ingress-canary-2zk7z": pod ingress-canary-2zk7z is already assigned to node "ip-10-0-232-8.ec2.internal" openshift-network-diagnostics 55m Warning FailedScheduling pod/network-check-target-w7m4g running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "network-check-target-w7m4g": pod network-check-target-w7m4g is already assigned to node "ip-10-0-160-152.ec2.internal" kube-system 64m Normal LeaderElection lease/kube-scheduler ip-10-0-8-110_5edff8db-edd3-4107-89ad-7f8aaea95dca became leader kube-system 64m Normal LeaderElection configmap/kube-controller-manager ip-10-0-8-110_f6c15ee9-cc8d-4f6c-8823-d3d585a74a15 became leader kube-system 64m Normal LeaderElection lease/kube-controller-manager 
ip-10-0-8-110_f6c15ee9-cc8d-4f6c-8823-d3d585a74a15 became leader kube-system 64m Normal LeaderElection lease/cluster-policy-controller-lock ip-10-0-8-110_ae8a9f7d-ffcd-4be7-a6c8-4ef81c1ea9f2 became leader kube-system 64m Normal LeaderElection configmap/cluster-policy-controller-lock ip-10-0-8-110_ae8a9f7d-ffcd-4be7-a6c8-4ef81c1ea9f2 became leader kube-system 64m Warning ClusterInfrastructureStatus pod/bootstrap-kube-controller-manager-ip-10-0-8-110 unable to get cluster infrastructure status, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) kube-system 64m Warning FastControllerResync pod/bootstrap-kube-controller-manager-ip-10-0-8-110 Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling kube-system 64m Warning FastControllerResync pod/bootstrap-kube-controller-manager-ip-10-0-8-110 Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling kube-system 64m Normal LeaderElection configmap/kube-controller-manager ip-10-0-8-110_e97fb60a-b082-425e-8cf9-ad92bf39af65 became leader kube-system 64m Normal LeaderElection lease/kube-controller-manager ip-10-0-8-110_e97fb60a-b082-425e-8cf9-ad92bf39af65 became leader kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-etcd namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-kube-apiserver namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-infra namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for kube-node-lease namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-kube-controller-manager-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for default namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for kube-public namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for kube-system namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-kube-controller-manager namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-kube-apiserver-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-cluster-version namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-cloud-credential-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-kube-scheduler namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-ingress-operator namespace openshift-cluster-version 63m Normal LeaderElection configmap/version 
ip-10-0-8-110_36e6f622-04c4-411e-a81a-561b85c8f44a became leader openshift-cluster-version 63m Normal LeaderElection lease/version ip-10-0-8-110_36e6f622-04c4-411e-a81a-561b85c8f44a became leader openshift-cluster-version 63m Normal RetrievePayload clusterversion/version Retrieving and verifying payload version="4.13.0-rc.0" image="quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" openshift-cluster-version 63m Normal LoadPayload clusterversion/version Loading payload version="4.13.0-rc.0" image="quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" openshift-cluster-version 63m Normal PayloadLoaded clusterversion/version Payload loaded version="4.13.0-rc.0" image="quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" architecture="amd64" kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-cluster-storage-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-cloud-network-config-controller namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-config-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-network-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-apiserver-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-cluster-samples-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-cluster-csi-drivers namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-cluster-machine-approver namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-cluster-node-tuning-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-marketplace namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-authentication-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-etcd-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-kube-scheduler-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-controller-manager-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-cloud-controller-manager-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-insights namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC 
ranges for openshift-image-registry namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-dns-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-machine-config-operator namespace openshift-cluster-version 63m Normal ScalingReplicaSet deployment/cluster-version-operator Scaled up replica set cluster-version-operator-b95587d8c to 1 kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-service-ca-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-cloud-controller-manager namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-kube-storage-version-migrator-operator namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-openstack-infra namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-kni-infra namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-operator-lifecycle-manager namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-ovirt-infra namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-operators namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-vsphere-infra namespace kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-nutanix-infra namespace openshift-apiserver-operator 63m Normal ScalingReplicaSet deployment/openshift-apiserver-operator Scaled up replica set openshift-apiserver-operator-67fd94b9d7 to 1 kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-monitoring namespace openshift-service-ca-operator 63m Normal ScalingReplicaSet deployment/service-ca-operator Scaled up replica set service-ca-operator-7988896c96 to 1 openshift-dns-operator 63m Normal ScalingReplicaSet deployment/dns-operator Scaled up replica set dns-operator-656b9bd9f9 to 1 openshift-network-operator 63m Normal ScalingReplicaSet deployment/network-operator Scaled up replica set network-operator-6c9d58d76b to 1 kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-user-workload-monitoring namespace openshift-marketplace 63m Normal ScalingReplicaSet deployment/marketplace-operator Scaled up replica set marketplace-operator-554c77d6df to 1 openshift-kube-scheduler-operator 63m Normal ScalingReplicaSet deployment/openshift-kube-scheduler-operator Scaled up replica set openshift-kube-scheduler-operator-c98d57874 to 1 openshift-kube-controller-manager-operator 63m Normal ScalingReplicaSet deployment/kube-controller-manager-operator Scaled up replica set kube-controller-manager-operator-655bd6977c to 1 openshift-controller-manager-operator 63m Normal ScalingReplicaSet deployment/openshift-controller-manager-operator 
Scaled up replica set openshift-controller-manager-operator-6548869cc5 to 1 openshift-operator-lifecycle-manager 63m Normal NoPods poddisruptionbudget/packageserver-pdb No matching pods found openshift-kube-storage-version-migrator-operator 63m Normal ScalingReplicaSet deployment/kube-storage-version-migrator-operator Scaled up replica set kube-storage-version-migrator-operator-7f8b95cf5f to 1 openshift-authentication-operator 63m Normal ScalingReplicaSet deployment/authentication-operator Scaled up replica set authentication-operator-dbb89644b to 1 kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-config-managed namespace openshift-etcd-operator 63m Normal ScalingReplicaSet deployment/etcd-operator Scaled up replica set etcd-operator-775754ddff to 1 openshift-config-operator 63m Normal ScalingReplicaSet deployment/openshift-config-operator Scaled up replica set openshift-config-operator-67bdbffb68 to 1 kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-config namespace openshift-cloud-credential-operator 63m Normal ScalingReplicaSet deployment/cloud-credential-operator Scaled up replica set cloud-credential-operator-7fffc6cb67 to 1 openshift-machine-config-operator 63m Normal ScalingReplicaSet deployment/machine-config-operator Scaled up replica set machine-config-operator-7fd9cd8968 to 1 openshift-monitoring 63m Normal ScalingReplicaSet deployment/cluster-monitoring-operator Scaled up replica set cluster-monitoring-operator-78777bc588 to 1 kube-system 63m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-machine-api namespace openshift-cluster-storage-operator 63m Normal ScalingReplicaSet deployment/csi-snapshot-controller-operator Scaled up replica set csi-snapshot-controller-operator-c9586b974 to 1 openshift-ingress-operator 62m Normal ScalingReplicaSet deployment/ingress-operator Scaled up replica set ingress-operator-6486794b49 to 1 openshift-cluster-node-tuning-operator 62m Normal ScalingReplicaSet deployment/cluster-node-tuning-operator Scaled up replica set cluster-node-tuning-operator-5886c76fd4 to 1 openshift-operator-lifecycle-manager 62m Normal ScalingReplicaSet deployment/package-server-manager Scaled up replica set package-server-manager-fc98f8f64 to 1 openshift-kube-apiserver-operator 62m Normal ScalingReplicaSet deployment/kube-apiserver-operator Scaled up replica set kube-apiserver-operator-79b598d5b4 to 1 openshift-operator-lifecycle-manager 62m Normal ScalingReplicaSet deployment/olm-operator Scaled up replica set olm-operator-647f89bf4f to 1 openshift-insights 62m Normal ScalingReplicaSet deployment/insights-operator Scaled up replica set insights-operator-6fd65c6b65 to 1 openshift-image-registry 62m Normal ScalingReplicaSet deployment/cluster-image-registry-operator Scaled up replica set cluster-image-registry-operator-868788f8c6 to 1 openshift-operator-lifecycle-manager 62m Normal ScalingReplicaSet deployment/catalog-operator Scaled up replica set catalog-operator-567d5cdcc9 to 1 openshift-cluster-storage-operator 62m Normal ScalingReplicaSet deployment/cluster-storage-operator Scaled up replica set cluster-storage-operator-fb5868667 to 1 openshift-cluster-version 62m Warning FailedCreate replicaset/cluster-version-operator-b95587d8c Error creating: pods "cluster-version-operator-b95587d8c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the 
cluster does not have any nodes openshift-machine-api 62m Normal ScalingReplicaSet deployment/machine-api-operator Scaled up replica set machine-api-operator-564474f8c6 to 1 openshift-etcd-operator 62m Warning FailedCreate replicaset/etcd-operator-775754ddff Error creating: pods "etcd-operator-775754ddff-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-config-operator 62m Warning FailedCreate replicaset/openshift-config-operator-67bdbffb68 Error creating: pods "openshift-config-operator-67bdbffb68-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-cloud-credential-operator 62m Warning FailedCreate replicaset/cloud-credential-operator-7fffc6cb67 Error creating: pods "cloud-credential-operator-7fffc6cb67-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-machine-config-operator 62m Warning FailedCreate replicaset/machine-config-operator-7fd9cd8968 Error creating: pods "machine-config-operator-7fd9cd8968-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-cluster-storage-operator 62m Warning FailedCreate replicaset/csi-snapshot-controller-operator-c9586b974 Error creating: pods "csi-snapshot-controller-operator-c9586b974-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-monitoring 62m Warning FailedCreate replicaset/cluster-monitoring-operator-78777bc588 Error creating: pods "cluster-monitoring-operator-78777bc588-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-apiserver-operator 62m Warning FailedCreate replicaset/openshift-apiserver-operator-67fd94b9d7 Error creating: pods "openshift-apiserver-operator-67fd94b9d7-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-dns-operator 62m Warning FailedCreate replicaset/dns-operator-656b9bd9f9 Error creating: pods "dns-operator-656b9bd9f9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-network-operator 62m Warning FailedCreate replicaset/network-operator-6c9d58d76b Error creating: pods "network-operator-6c9d58d76b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-cluster-node-tuning-operator 62m Warning FailedCreate replicaset/cluster-node-tuning-operator-5886c76fd4 Error creating: pods "cluster-node-tuning-operator-5886c76fd4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-service-ca-operator 62m Warning FailedCreate replicaset/service-ca-operator-7988896c96 Error creating: pods "service-ca-operator-7988896c96-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-ingress-operator 62m Warning FailedCreate replicaset/ingress-operator-6486794b49 Error creating: pods "ingress-operator-6486794b49-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-kube-controller-manager-operator 62m Warning FailedCreate replicaset/kube-controller-manager-operator-655bd6977c Error creating: pods "kube-controller-manager-operator-655bd6977c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes 
openshift-kube-scheduler-operator 62m Warning FailedCreate replicaset/openshift-kube-scheduler-operator-c98d57874 Error creating: pods "openshift-kube-scheduler-operator-c98d57874-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-marketplace 62m Warning FailedCreate replicaset/marketplace-operator-554c77d6df Error creating: pods "marketplace-operator-554c77d6df-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-operator-lifecycle-manager 62m Warning FailedCreate replicaset/olm-operator-647f89bf4f Error creating: pods "olm-operator-647f89bf4f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-operator-lifecycle-manager 62m Warning FailedCreate replicaset/package-server-manager-fc98f8f64 Error creating: pods "package-server-manager-fc98f8f64-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-controller-manager-operator 62m Warning FailedCreate replicaset/openshift-controller-manager-operator-6548869cc5 Error creating: pods "openshift-controller-manager-operator-6548869cc5-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-kube-apiserver-operator 62m Warning FailedCreate replicaset/kube-apiserver-operator-79b598d5b4 Error creating: pods "kube-apiserver-operator-79b598d5b4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-image-registry 62m Warning FailedCreate replicaset/cluster-image-registry-operator-868788f8c6 Error creating: pods "cluster-image-registry-operator-868788f8c6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-insights 62m Warning FailedCreate replicaset/insights-operator-6fd65c6b65 Error creating: pods "insights-operator-6fd65c6b65-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-operator-lifecycle-manager 62m Warning FailedCreate replicaset/catalog-operator-567d5cdcc9 Error creating: pods "catalog-operator-567d5cdcc9-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-cluster-storage-operator 62m Warning FailedCreate replicaset/cluster-storage-operator-fb5868667 Error creating: pods "cluster-storage-operator-fb5868667-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-machine-api 62m Warning FailedCreate replicaset/machine-api-operator-564474f8c6 Error creating: pods "machine-api-operator-564474f8c6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-kube-storage-version-migrator-operator 62m Warning FailedCreate replicaset/kube-storage-version-migrator-operator-7f8b95cf5f Error creating: pods "kube-storage-version-migrator-operator-7f8b95cf5f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes openshift-authentication-operator 62m Warning FailedCreate replicaset/authentication-operator-dbb89644b Error creating: pods "authentication-operator-dbb89644b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes default 62m Normal RegisteredNode node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal event: Registered Node 
ip-10-0-239-132.ec2.internal in Controller default 62m Normal NodeHasNoDiskPressure node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal status is now: NodeHasNoDiskPressure default 62m Normal NodeHasSufficientMemory node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal status is now: NodeHasSufficientMemory openshift-etcd-operator 62m Normal SuccessfulCreate replicaset/etcd-operator-775754ddff Created pod: etcd-operator-775754ddff-xjxrm openshift-config-operator 62m Normal SuccessfulCreate replicaset/openshift-config-operator-67bdbffb68 Created pod: openshift-config-operator-67bdbffb68-sdgx7 openshift-cloud-credential-operator 62m Normal SuccessfulCreate replicaset/cloud-credential-operator-7fffc6cb67 Created pod: cloud-credential-operator-7fffc6cb67-gkvnc openshift-machine-config-operator 62m Normal SuccessfulCreate replicaset/machine-config-operator-7fd9cd8968 Created pod: machine-config-operator-7fd9cd8968-9vg57 openshift-monitoring 62m Normal SuccessfulCreate replicaset/cluster-monitoring-operator-78777bc588 Created pod: cluster-monitoring-operator-78777bc588-rhggh openshift-cluster-storage-operator 62m Normal SuccessfulCreate replicaset/csi-snapshot-controller-operator-c9586b974 Created pod: csi-snapshot-controller-operator-c9586b974-wk85s openshift-cluster-node-tuning-operator 62m Normal SuccessfulCreate replicaset/cluster-node-tuning-operator-5886c76fd4 Created pod: cluster-node-tuning-operator-5886c76fd4-7qpt5 openshift-ingress-operator 62m Normal SuccessfulCreate replicaset/ingress-operator-6486794b49 Created pod: ingress-operator-6486794b49-42ddh openshift-operator-lifecycle-manager 62m Normal SuccessfulCreate replicaset/package-server-manager-fc98f8f64 Created pod: package-server-manager-fc98f8f64-l2df9 openshift-kube-apiserver-operator 62m Normal SuccessfulCreate replicaset/kube-apiserver-operator-79b598d5b4 Created pod: kube-apiserver-operator-79b598d5b4-dqp95 openshift-cluster-storage-operator 62m Normal SuccessfulCreate replicaset/cluster-storage-operator-fb5868667 Created pod: cluster-storage-operator-fb5868667-cclnx openshift-insights 62m Normal SuccessfulCreate replicaset/insights-operator-6fd65c6b65 Created pod: insights-operator-6fd65c6b65-vrxhp openshift-image-registry 62m Normal SuccessfulCreate replicaset/cluster-image-registry-operator-868788f8c6 Created pod: cluster-image-registry-operator-868788f8c6-frhj8 openshift-operator-lifecycle-manager 62m Normal SuccessfulCreate replicaset/olm-operator-647f89bf4f Created pod: olm-operator-647f89bf4f-rgnx9 openshift-operator-lifecycle-manager 62m Normal SuccessfulCreate replicaset/catalog-operator-567d5cdcc9 Created pod: catalog-operator-567d5cdcc9-gwwnx openshift-cluster-version 62m Normal ScalingReplicaSet deployment/cluster-version-operator Scaled up replica set cluster-version-operator-5d74b9d6f5 to 1 openshift-cluster-version 62m Normal SuccessfulCreate replicaset/cluster-version-operator-5d74b9d6f5 Created pod: cluster-version-operator-5d74b9d6f5-qzcfb openshift-cluster-version 62m Normal ScalingReplicaSet deployment/cluster-version-operator Scaled down replica set cluster-version-operator-b95587d8c to 0 from 1 openshift-machine-api 62m Normal SuccessfulCreate replicaset/machine-api-operator-564474f8c6 Created pod: machine-api-operator-564474f8c6-284hs openshift-machine-api 62m Normal ScalingReplicaSet deployment/control-plane-machine-set-operator Scaled up replica set control-plane-machine-set-operator-77b4c948f8 to 1 openshift-machine-api 62m Normal SuccessfulCreate 
replicaset/control-plane-machine-set-operator-77b4c948f8 Created pod: control-plane-machine-set-operator-77b4c948f8-s7qsh openshift-cluster-machine-approver 62m Normal SuccessfulCreate replicaset/machine-approver-5cd47987c9 Created pod: machine-approver-5cd47987c9-96cvq openshift-cluster-machine-approver 62m Normal ScalingReplicaSet deployment/machine-approver Scaled up replica set machine-approver-5cd47987c9 to 1 openshift-machine-api 62m Normal ScalingReplicaSet deployment/cluster-baremetal-operator Scaled up replica set cluster-baremetal-operator-cb6794dd9 to 1 openshift-machine-api 62m Normal SuccessfulCreate replicaset/cluster-autoscaler-operator-7fcffdb7c8 Created pod: cluster-autoscaler-operator-7fcffdb7c8-g4w4m openshift-machine-api 62m Normal SuccessfulCreate replicaset/cluster-baremetal-operator-cb6794dd9 Created pod: cluster-baremetal-operator-cb6794dd9-8bqk2 openshift-machine-api 62m Normal ScalingReplicaSet deployment/cluster-autoscaler-operator Scaled up replica set cluster-autoscaler-operator-7fcffdb7c8 to 1 openshift-cloud-controller-manager-operator 62m Normal ScalingReplicaSet deployment/cluster-cloud-controller-manager-operator Scaled up replica set cluster-cloud-controller-manager-operator-5dcbbcf757 to 1 openshift-cloud-controller-manager-operator 62m Normal SuccessfulCreate replicaset/cluster-cloud-controller-manager-operator-5dcbbcf757 Created pod: cluster-cloud-controller-manager-operator-5dcbbcf757-fqvtw openshift-cloud-controller-manager-operator 62m Normal Pulling pod/cluster-cloud-controller-manager-operator-5dcbbcf757-fqvtw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77345c48a82b167f67364ffd41788160b5d06e746946d9ea67191fa18cf34806" openshift-apiserver-operator 61m Normal SuccessfulCreate replicaset/openshift-apiserver-operator-67fd94b9d7 Created pod: openshift-apiserver-operator-67fd94b9d7-nvg29 openshift-cloud-controller-manager-operator 61m Normal Pulled pod/cluster-cloud-controller-manager-operator-5dcbbcf757-fqvtw Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77345c48a82b167f67364ffd41788160b5d06e746946d9ea67191fa18cf34806" already present on machine openshift-cloud-controller-manager-operator 61m Normal Started pod/cluster-cloud-controller-manager-operator-5dcbbcf757-fqvtw Started container cluster-cloud-controller-manager openshift-cloud-controller-manager-operator 61m Normal Pulled pod/cluster-cloud-controller-manager-operator-5dcbbcf757-fqvtw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77345c48a82b167f67364ffd41788160b5d06e746946d9ea67191fa18cf34806" in 2.208190411s (2.208203534s including waiting) openshift-cloud-controller-manager-operator 61m Normal LeaderElection lease/cluster-cloud-controller-manager-leader ip-10-0-239-132_2aad4ed9-ea22-45cc-8f59-61a539eb9fae became leader openshift-cloud-controller-manager-operator 61m Normal Created pod/cluster-cloud-controller-manager-operator-5dcbbcf757-fqvtw Created container cluster-cloud-controller-manager openshift-cloud-controller-manager-operator 61m Normal Created pod/cluster-cloud-controller-manager-operator-5dcbbcf757-fqvtw Created container config-sync-controllers openshift-service-ca-operator 61m Normal SuccessfulCreate replicaset/service-ca-operator-7988896c96 Created pod: service-ca-operator-7988896c96-5q667 openshift-dns-operator 61m Normal SuccessfulCreate replicaset/dns-operator-656b9bd9f9 Created pod: dns-operator-656b9bd9f9-lb9ps openshift-network-operator 61m Normal SuccessfulCreate 
replicaset/network-operator-6c9d58d76b Created pod: network-operator-6c9d58d76b-pl9td openshift-cloud-controller-manager-operator 61m Normal LeaderElection lease/cluster-cloud-config-sync-leader ip-10-0-239-132_d6bee811-e81e-453a-bc45-9a1b282f765b became leader openshift-cloud-controller-manager-operator 61m Normal Started pod/cluster-cloud-controller-manager-operator-5dcbbcf757-fqvtw Started container config-sync-controllers openshift-network-operator 61m Normal Pulling pod/network-operator-6c9d58d76b-pl9td Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" openshift-marketplace 61m Normal SuccessfulCreate replicaset/marketplace-operator-554c77d6df Created pod: marketplace-operator-554c77d6df-2q9k5 openshift-kube-controller-manager-operator 61m Normal SuccessfulCreate replicaset/kube-controller-manager-operator-655bd6977c Created pod: kube-controller-manager-operator-655bd6977c-z9mb9 openshift-kube-scheduler-operator 61m Normal SuccessfulCreate replicaset/openshift-kube-scheduler-operator-c98d57874 Created pod: openshift-kube-scheduler-operator-c98d57874-wj7tl openshift-network-operator 61m Normal Pulled pod/network-operator-6c9d58d76b-pl9td Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" in 2.720762931s (2.720775394s including waiting) openshift-controller-manager-operator 61m Normal SuccessfulCreate replicaset/openshift-controller-manager-operator-6548869cc5 Created pod: openshift-controller-manager-operator-6548869cc5-9kqx5 openshift-network-operator 61m Normal LeaderElection configmap/network-operator-lock ip-10-0-239-132_47d869d8-14fc-426b-90e5-e5d7e2de98fe became leader openshift-network-operator 61m Normal LeaderElection lease/network-operator-lock ip-10-0-239-132_47d869d8-14fc-426b-90e5-e5d7e2de98fe became leader openshift-network-operator 61m Warning FastControllerResync deployment/network-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-network-operator 61m Normal SuccessfulCreate job/mtu-prober Created pod: mtu-prober-m487h openshift-network-operator 61m Warning StatusNotFound deployment/network-operator Unable to determine current operator status for cluster-network-operator openshift-network-operator 61m Normal Started pod/mtu-prober-m487h Started container prober openshift-kube-storage-version-migrator-operator 61m Normal SuccessfulCreate replicaset/kube-storage-version-migrator-operator-7f8b95cf5f Created pod: kube-storage-version-migrator-operator-7f8b95cf5f-x5hzl openshift-network-operator 61m Normal Created pod/mtu-prober-m487h Created container prober openshift-network-operator 61m Normal Pulled pod/mtu-prober-m487h Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" already present on machine openshift-authentication-operator 61m Normal SuccessfulCreate replicaset/authentication-operator-dbb89644b Created pod: authentication-operator-dbb89644b-tbxcm openshift-network-operator 61m Normal Completed job/mtu-prober Job completed default 61m Normal RegisteredNode node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal event: Registered Node ip-10-0-140-6.ec2.internal in Controller default 61m Normal RegisteredNode node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal event: Registered Node ip-10-0-197-197.ec2.internal 
in Controller openshift-cloud-network-config-controller 61m Normal SuccessfulCreate replicaset/cloud-network-config-controller-7cc55b87d4 Created pod: cloud-network-config-controller-7cc55b87d4-drl56 openshift-cloud-network-config-controller 61m Normal ScalingReplicaSet deployment/cloud-network-config-controller Scaled up replica set cloud-network-config-controller-7cc55b87d4 to 1 kube-system 61m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-multus namespace openshift-multus 61m Normal SuccessfulCreate daemonset/multus Created pod: multus-kkqdt openshift-multus 61m Normal SuccessfulCreate daemonset/multus Created pod: multus-486wq openshift-multus 61m Normal SuccessfulCreate daemonset/multus Created pod: multus-7x2mr openshift-multus 61m Normal SuccessfulCreate daemonset/multus-additional-cni-plugins Created pod: multus-additional-cni-plugins-hg7bc openshift-multus 61m Normal SuccessfulCreate daemonset/network-metrics-daemon Created pod: network-metrics-daemon-9gx7g openshift-multus 61m Normal Pulling pod/multus-kkqdt Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" openshift-multus 61m Normal Pulling pod/multus-additional-cni-plugins-g7hvw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" openshift-multus 61m Normal SuccessfulCreate daemonset/network-metrics-daemon Created pod: network-metrics-daemon-v6lsv openshift-multus 61m Normal SuccessfulCreate daemonset/network-metrics-daemon Created pod: network-metrics-daemon-7vpmf openshift-multus 61m Normal SuccessfulCreate daemonset/multus-additional-cni-plugins Created pod: multus-additional-cni-plugins-b2lhx openshift-multus 61m Normal SuccessfulCreate daemonset/multus-additional-cni-plugins Created pod: multus-additional-cni-plugins-g7hvw openshift-multus 61m Normal Pulling pod/multus-additional-cni-plugins-b2lhx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" openshift-multus 61m Normal Pulling pod/multus-7x2mr Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" openshift-multus 61m Normal Pulled pod/multus-additional-cni-plugins-g7hvw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" in 2.123440168s (2.123461103s including waiting) openshift-multus 61m Normal Pulling pod/multus-additional-cni-plugins-g7hvw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" openshift-multus 61m Normal Started pod/multus-additional-cni-plugins-g7hvw Started container egress-router-binary-copy openshift-multus 61m Normal SuccessfulCreate replicaset/multus-admission-controller-6f95d97cb6 Created pod: multus-admission-controller-6f95d97cb6-x5s87 openshift-multus 61m Normal SuccessfulCreate replicaset/multus-admission-controller-6f95d97cb6 Created pod: multus-admission-controller-6f95d97cb6-7wv72 openshift-multus 61m Normal ScalingReplicaSet deployment/multus-admission-controller Scaled up replica set multus-admission-controller-6f95d97cb6 to 2 openshift-multus 61m Normal Created pod/multus-additional-cni-plugins-g7hvw Created container egress-router-binary-copy openshift-multus 61m 
Normal Started pod/multus-additional-cni-plugins-b2lhx Started container egress-router-binary-copy openshift-multus 61m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container egress-router-binary-copy openshift-multus 61m Normal Pulled pod/multus-additional-cni-plugins-b2lhx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" in 2.097937113s (2.097950218s including waiting) kube-system 61m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-ovn-kubernetes namespace openshift-multus 61m Normal Pulling pod/multus-additional-cni-plugins-b2lhx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" openshift-multus 61m Normal Started pod/multus-7x2mr Started container kube-multus openshift-multus 61m Normal Created pod/multus-7x2mr Created container kube-multus openshift-multus 61m Normal Pulled pod/multus-7x2mr Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" in 6.136140587s (6.136150142s including waiting) openshift-multus 61m Normal Pulled pod/multus-kkqdt Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" in 7.800644532s (7.800652952s including waiting) openshift-multus 61m Normal Pulled pod/multus-additional-cni-plugins-g7hvw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" in 5.056110388s (5.056121324s including waiting) openshift-multus 61m Normal Started pod/multus-kkqdt Started container kube-multus openshift-multus 61m Normal Pulled pod/multus-additional-cni-plugins-b2lhx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" in 4.829124191s (4.829133087s including waiting) openshift-multus 61m Normal Created pod/multus-kkqdt Created container kube-multus openshift-multus 61m Normal Pulling pod/multus-additional-cni-plugins-g7hvw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" openshift-multus 61m Normal Started pod/multus-additional-cni-plugins-g7hvw Started container cni-plugins openshift-multus 61m Normal Created pod/multus-additional-cni-plugins-g7hvw Created container cni-plugins openshift-multus 61m Normal Created pod/multus-additional-cni-plugins-g7hvw Created container bond-cni-plugin openshift-ovn-kubernetes 61m Normal NoPods poddisruptionbudget/ovn-raft-quorum-guard No matching pods found kube-system 61m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-host-network namespace openshift-multus 61m Normal Started pod/multus-additional-cni-plugins-b2lhx Started container cni-plugins openshift-multus 61m Normal Pulled pod/multus-additional-cni-plugins-g7hvw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" in 759.123516ms (759.135142ms including waiting) openshift-multus 61m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container cni-plugins openshift-multus 61m Normal Pulling 
pod/multus-additional-cni-plugins-b2lhx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" openshift-multus 61m Normal Pulling pod/multus-additional-cni-plugins-g7hvw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" kube-system 61m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-network-diagnostics namespace openshift-ovn-kubernetes 61m Normal SuccessfulCreate daemonset/ovnkube-master Created pod: ovnkube-master-l7mb9 openshift-ovn-kubernetes 61m Normal SuccessfulCreate daemonset/ovnkube-master Created pod: ovnkube-master-w7545 openshift-ovn-kubernetes 61m Normal SuccessfulCreate daemonset/ovnkube-master Created pod: ovnkube-master-kzdhz openshift-ovn-kubernetes 61m Normal SuccessfulCreate daemonset/ovnkube-node Created pod: ovnkube-node-wsrzb openshift-ovn-kubernetes 61m Normal SuccessfulCreate daemonset/ovnkube-node Created pod: ovnkube-node-8qw6d openshift-ovn-kubernetes 61m Normal SuccessfulCreate daemonset/ovnkube-node Created pod: ovnkube-node-x8pqn openshift-ovn-kubernetes 61m Normal Pulling pod/ovnkube-master-l7mb9 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-ovn-kubernetes 61m Normal Pulling pod/ovnkube-node-8qw6d Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-multus 61m Normal Pulled pod/multus-additional-cni-plugins-b2lhx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" in 793.134491ms (793.141412ms including waiting) openshift-ovn-kubernetes 61m Normal Pulling pod/ovnkube-master-w7545 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-multus 61m Normal Started pod/multus-additional-cni-plugins-g7hvw Started container bond-cni-plugin openshift-ovn-kubernetes 61m Normal Pulling pod/ovnkube-node-wsrzb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-multus 61m Normal Pulling pod/multus-additional-cni-plugins-b2lhx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" openshift-multus 61m Normal Started pod/multus-additional-cni-plugins-b2lhx Started container bond-cni-plugin openshift-network-diagnostics 61m Normal SuccessfulCreate replicaset/network-check-source-677bdb7d9 Created pod: network-check-source-677bdb7d9-4sw4t openshift-network-diagnostics 61m Normal ScalingReplicaSet deployment/network-check-source Scaled up replica set network-check-source-677bdb7d9 to 1 openshift-multus 61m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container bond-cni-plugin openshift-multus 61m Normal Pulling pod/multus-additional-cni-plugins-g7hvw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" openshift-network-diagnostics 61m Normal SuccessfulCreate daemonset/network-check-target Created pod: network-check-target-v92f6 openshift-ovn-kubernetes 61m Normal Pulling pod/ovnkube-master-kzdhz Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-multus 61m Normal Pulling pod/multus-additional-cni-plugins-hg7bc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" openshift-network-diagnostics 61m Normal SuccessfulCreate daemonset/network-check-target Created pod: network-check-target-dvjbf openshift-multus 61m Normal Pulled pod/multus-additional-cni-plugins-g7hvw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" in 1.63342741s (1.633434513s including waiting) openshift-multus 61m Normal Created pod/multus-additional-cni-plugins-g7hvw Created container routeoverride-cni openshift-multus 61m Normal Started pod/multus-additional-cni-plugins-g7hvw Started container routeoverride-cni openshift-multus 61m Normal Pulled pod/multus-additional-cni-plugins-b2lhx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" in 1.343340081s (1.343354626s including waiting) openshift-ovn-kubernetes 61m Normal Pulling pod/ovnkube-node-x8pqn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-network-diagnostics 61m Normal SuccessfulCreate daemonset/network-check-target Created pod: network-check-target-tmbg6 openshift-multus 61m Normal Pulling pod/multus-486wq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" openshift-multus 61m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container routeoverride-cni openshift-multus 61m Normal Started pod/multus-additional-cni-plugins-b2lhx Started container routeoverride-cni openshift-multus 61m Normal Pulling pod/multus-additional-cni-plugins-b2lhx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" openshift-multus 61m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" in 3.33715344s (3.33718866s including waiting) openshift-multus 61m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container egress-router-binary-copy openshift-multus 60m Normal Pulling pod/multus-additional-cni-plugins-hg7bc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" openshift-multus 60m Normal Started pod/multus-additional-cni-plugins-hg7bc Started container egress-router-binary-copy openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-master-l7mb9 Created container northd openshift-multus 60m Normal Pulled pod/multus-additional-cni-plugins-b2lhx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" in 8.214162479s (8.214169832s including waiting) openshift-multus 60m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container whereabouts-cni-bincopy openshift-multus 60m Normal Started pod/multus-additional-cni-plugins-b2lhx Started container whereabouts-cni-bincopy openshift-ovn-kubernetes 60m Normal Pulled 
pod/ovnkube-master-l7mb9 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 11.364962068s (11.364991627s including waiting) openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-master-l7mb9 Started container northd openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-8qw6d Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 11.196719323s (11.196733019s including waiting) openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-master-l7mb9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 60m Normal Pulling pod/ovnkube-node-wsrzb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-node-wsrzb Started container ovn-acl-logging openshift-ovn-kubernetes 60m Normal Pulling pod/ovnkube-node-8qw6d Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-node-8qw6d Started container ovn-acl-logging openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-node-8qw6d Created container ovn-acl-logging openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-node-wsrzb Created container ovn-acl-logging openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-wsrzb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-wsrzb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 11.17966555s (11.179678029s including waiting) openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-master-l7mb9 Started container nbdb openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-master-w7545 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 11.395725311s (11.395736929s including waiting) openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-master-w7545 Created container northd openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-master-w7545 Started container northd openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-master-w7545 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-master-w7545 Created container nbdb openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-master-w7545 Started container nbdb openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-8qw6d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-multus 60m Normal Created pod/multus-additional-cni-plugins-g7hvw Created container whereabouts-cni-bincopy openshift-multus 60m Normal Pulled pod/multus-additional-cni-plugins-g7hvw Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-master-l7mb9 Created container nbdb openshift-multus 60m Normal Pulled pod/multus-additional-cni-plugins-g7hvw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" in 8.692568105s (8.692583167s including waiting) openshift-multus 60m Normal Started pod/multus-additional-cni-plugins-g7hvw Started container whereabouts-cni-bincopy openshift-multus 60m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container whereabouts-cni openshift-multus 60m Normal Started pod/multus-additional-cni-plugins-b2lhx Started container whereabouts-cni openshift-multus 60m Normal Started pod/multus-additional-cni-plugins-g7hvw Started container whereabouts-cni openshift-multus 60m Normal Created pod/multus-additional-cni-plugins-g7hvw Created container whereabouts-cni openshift-multus 60m Normal Pulled pod/multus-additional-cni-plugins-b2lhx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine openshift-multus 60m Normal Pulled pod/multus-additional-cni-plugins-g7hvw Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-wsrzb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-node-wsrzb Created container ovnkube-node openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-node-wsrzb Started container kube-rbac-proxy openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-node-wsrzb Created container kube-rbac-proxy openshift-multus 60m Normal Created pod/multus-additional-cni-plugins-g7hvw Created container kube-multus-additional-cni-plugins openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-node-8qw6d Created container kube-rbac-proxy openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-node-8qw6d Started container kube-rbac-proxy openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-node-wsrzb Started container ovnkube-node openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-node-wsrzb Started container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-8qw6d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-wsrzb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-node-8qw6d Started container ovnkube-node openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-node-8qw6d Started container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-wsrzb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 1.388240004s (1.388253991s including waiting) 
openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-8qw6d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-node-8qw6d Created container ovnkube-node openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-node-wsrzb Created container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-node-8qw6d Created container kube-rbac-proxy-ovn-metrics openshift-multus 60m Normal Pulled pod/multus-additional-cni-plugins-b2lhx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-8qw6d Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 1.37335557s (1.37337236s including waiting) openshift-multus 60m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container kube-multus-additional-cni-plugins openshift-multus 60m Normal Pulled pod/multus-486wq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" in 14.622323114s (14.622331468s including waiting) openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-master-kzdhz Created container northd openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-x8pqn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-master-kzdhz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-x8pqn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 16.217576842s (16.21758709s including waiting) openshift-multus 60m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" in 9.796369533s (9.796387957s including waiting) openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-node-x8pqn Started container ovn-acl-logging openshift-multus 60m Normal Created pod/multus-486wq Created container kube-multus openshift-ovn-kubernetes 60m Normal Pulling pod/ovnkube-node-x8pqn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-master-kzdhz Started container nbdb openshift-multus 60m Normal Started pod/multus-486wq Started container kube-multus openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-master-kzdhz Started container northd openshift-multus 60m Normal Started pod/multus-additional-cni-plugins-hg7bc Started container cni-plugins openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-master-kzdhz Created container nbdb openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-master-kzdhz Successfully pulled image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 16.204570417s (16.204579326s including waiting) openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-node-x8pqn Created container ovn-acl-logging openshift-multus 60m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container cni-plugins openshift-multus 60m Normal Pulling pod/multus-additional-cni-plugins-hg7bc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-x8pqn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 1.237629105s (1.237644374s including waiting) openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-node-x8pqn Created container kube-rbac-proxy-ovn-metrics openshift-multus 60m Normal Pulling pod/multus-additional-cni-plugins-hg7bc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-node-x8pqn Started container kube-rbac-proxy openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-node-x8pqn Created container kube-rbac-proxy openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-node-x8pqn Started container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-x8pqn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-node-x8pqn Created container ovnkube-node openshift-multus 60m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" in 681.550497ms (681.563682ms including waiting) openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-node-x8pqn Started container ovnkube-node openshift-multus 60m Normal Started pod/multus-additional-cni-plugins-hg7bc Started container bond-cni-plugin openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-node-x8pqn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-multus 60m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container bond-cni-plugin openshift-multus 60m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" in 872.305576ms (872.319256ms including waiting) openshift-multus 60m Normal Started pod/multus-additional-cni-plugins-hg7bc Started container routeoverride-cni openshift-multus 60m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container routeoverride-cni openshift-multus 60m Warning FailedMount pod/network-metrics-daemon-v6lsv MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered openshift-multus 60m Warning FailedMount pod/network-metrics-daemon-7vpmf MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered openshift-multus 60m Normal Pulling 
pod/multus-additional-cni-plugins-hg7bc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" openshift-multus 60m Warning NetworkNotReady pod/network-metrics-daemon-7vpmf network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? openshift-multus 60m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" in 1.784147732s (1.784159257s including waiting) openshift-multus 60m Normal Started pod/multus-additional-cni-plugins-hg7bc Started container whereabouts-cni-bincopy openshift-multus 60m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container whereabouts-cni-bincopy openshift-multus 60m Warning NetworkNotReady pod/network-metrics-daemon-v6lsv network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? openshift-multus 60m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine openshift-multus 60m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container whereabouts-cni openshift-multus 60m Normal Started pod/multus-additional-cni-plugins-hg7bc Started container whereabouts-cni openshift-multus 60m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container kube-multus-additional-cni-plugins openshift-multus 60m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine openshift-multus 60m Warning FailedMount pod/network-metrics-daemon-9gx7g MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered openshift-network-diagnostics 60m Warning FailedMount pod/network-check-target-tmbg6 MountVolume.SetUp failed for volume "kube-api-access-95d6w" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] openshift-network-diagnostics 60m Warning FailedMount pod/network-check-target-v92f6 MountVolume.SetUp failed for volume "kube-api-access-m9r6x" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] openshift-network-diagnostics 60m Warning FailedMount pod/network-check-target-dvjbf MountVolume.SetUp failed for volume "kube-api-access-zm9s2" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] openshift-multus 60m Warning NetworkNotReady pod/network-metrics-daemon-9gx7g network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started? openshift-network-diagnostics 60m Warning NetworkNotReady pod/network-check-target-tmbg6 network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? openshift-network-diagnostics 60m Warning NetworkNotReady pod/network-check-target-dvjbf network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? openshift-network-diagnostics 60m Warning NetworkNotReady pod/network-check-target-v92f6 network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-master-kzdhz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-master-l7mb9 Created container kube-rbac-proxy openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-master-w7545 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-master-w7545 Started container kube-rbac-proxy openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-master-l7mb9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-master-w7545 Started container sbdb openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-master-l7mb9 Started container kube-rbac-proxy openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-master-l7mb9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-master-l7mb9 Created container sbdb openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-master-w7545 Created container kube-rbac-proxy openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-master-w7545 Created container sbdb openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-master-l7mb9 Started container sbdb openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-master-w7545 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-master-kzdhz Created container kube-rbac-proxy openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-master-kzdhz Started container kube-rbac-proxy openshift-ovn-kubernetes 60m Normal Started pod/ovnkube-master-kzdhz Started container sbdb openshift-ovn-kubernetes 60m Normal Pulled pod/ovnkube-master-kzdhz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 60m Normal Created pod/ovnkube-master-kzdhz 
Created container sbdb openshift-cluster-version 60m Warning FailedMount pod/cluster-version-operator-5d74b9d6f5-qzcfb Unable to attach or mount volumes: unmounted volumes=[serving-cert], unattached volumes=[service-ca kube-api-access etc-ssl-certs etc-cvo-updatepayloads serving-cert]: timed out waiting for the condition openshift-ovn-kubernetes 60m Warning Unhealthy pod/ovnkube-node-8qw6d Readiness probe failed: openshift-cluster-version 60m Warning FailedMount pod/cluster-version-operator-5d74b9d6f5-qzcfb MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found openshift-ovn-kubernetes 59m Normal Pulled pod/ovnkube-master-w7545 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 59m Normal Pulled pod/ovnkube-master-l7mb9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 59m Normal Started pod/ovnkube-master-w7545 Started container ovnkube-master openshift-ovn-kubernetes 59m Normal Pulled pod/ovnkube-master-w7545 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 59m Normal Created pod/ovnkube-master-w7545 Created container ovnkube-master openshift-ovn-kubernetes 59m Normal Created pod/ovnkube-master-w7545 Created container ovn-dbchecker default 59m Warning ErrorReconcilingNode node/ip-10-0-197-197.ec2.internal [k8s.ovn.org/node-chassis-id annotation not found for node ip-10-0-197-197.ec2.internal, macAddress annotation not found for node "ip-10-0-197-197.ec2.internal" , k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-197-197.ec2.internal"] default 59m Warning ErrorReconcilingNode node/ip-10-0-140-6.ec2.internal [k8s.ovn.org/node-chassis-id annotation not found for node ip-10-0-140-6.ec2.internal, macAddress annotation not found for node "ip-10-0-140-6.ec2.internal" , k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-140-6.ec2.internal"] openshift-multus 59m Warning ErrorAddingResource pod/network-metrics-daemon-7vpmf addLogicalPort failed for openshift-multus/network-metrics-daemon-7vpmf: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-239-132.ec2.internal" openshift-multus 59m Warning ErrorUpdatingResource pod/network-metrics-daemon-7vpmf addLogicalPort failed for openshift-multus/network-metrics-daemon-7vpmf: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-239-132.ec2.internal" openshift-network-diagnostics 59m Warning ErrorUpdatingResource pod/network-check-target-tmbg6 addLogicalPort failed for openshift-network-diagnostics/network-check-target-tmbg6: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-140-6.ec2.internal" openshift-network-diagnostics 59m Warning ErrorAddingResource pod/network-check-target-tmbg6 addLogicalPort failed for openshift-network-diagnostics/network-check-target-tmbg6: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-140-6.ec2.internal" openshift-ovn-kubernetes 59m Normal Created pod/ovnkube-master-l7mb9 Created container 
ovn-dbchecker openshift-ovn-kubernetes 59m Normal Started pod/ovnkube-master-l7mb9 Started container ovn-dbchecker openshift-ovn-kubernetes 59m Normal Pulled pod/ovnkube-master-kzdhz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 59m Normal Started pod/ovnkube-master-kzdhz Started container ovnkube-master openshift-ovn-kubernetes 59m Normal LeaderElection lease/ovn-kubernetes-master ip-10-0-239-132.ec2.internal became leader openshift-multus 59m Warning ErrorAddingResource pod/network-metrics-daemon-9gx7g addLogicalPort failed for openshift-multus/network-metrics-daemon-9gx7g: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-197-197.ec2.internal" openshift-ovn-kubernetes 59m Normal LeaderElection lease/ovn-kubernetes-cluster-manager ip-10-0-140-6.ec2.internal became leader openshift-ovn-kubernetes 59m Normal Pulled pod/ovnkube-master-kzdhz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 59m Normal Started pod/ovnkube-master-kzdhz Started container ovn-dbchecker openshift-network-diagnostics 59m Warning ErrorAddingResource pod/network-check-target-v92f6 addLogicalPort failed for openshift-network-diagnostics/network-check-target-v92f6: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-239-132.ec2.internal" openshift-network-diagnostics 59m Warning ErrorUpdatingResource pod/network-check-target-v92f6 addLogicalPort failed for openshift-network-diagnostics/network-check-target-v92f6: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-239-132.ec2.internal" default 59m Warning ErrorReconcilingNode node/ip-10-0-239-132.ec2.internal [k8s.ovn.org/node-chassis-id annotation not found for node ip-10-0-239-132.ec2.internal, macAddress annotation not found for node "ip-10-0-239-132.ec2.internal" , k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-239-132.ec2.internal"] openshift-ovn-kubernetes 59m Normal Started pod/ovnkube-master-w7545 Started container ovn-dbchecker openshift-network-diagnostics 59m Warning ErrorUpdatingResource pod/network-check-target-dvjbf addLogicalPort failed for openshift-network-diagnostics/network-check-target-dvjbf: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-197-197.ec2.internal" openshift-network-diagnostics 59m Warning ErrorAddingResource pod/network-check-target-dvjbf addLogicalPort failed for openshift-network-diagnostics/network-check-target-dvjbf: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-197-197.ec2.internal" openshift-multus 59m Warning ErrorUpdatingResource pod/network-metrics-daemon-9gx7g addLogicalPort failed for openshift-multus/network-metrics-daemon-9gx7g: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-197-197.ec2.internal" openshift-multus 59m Warning ErrorAddingResource pod/network-metrics-daemon-v6lsv addLogicalPort failed for openshift-multus/network-metrics-daemon-v6lsv: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-140-6.ec2.internal" 
openshift-multus 59m Warning ErrorUpdatingResource pod/network-metrics-daemon-v6lsv addLogicalPort failed for openshift-multus/network-metrics-daemon-v6lsv: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-140-6.ec2.internal" openshift-ovn-kubernetes 59m Normal Created pod/ovnkube-master-kzdhz Created container ovnkube-master openshift-ovn-kubernetes 59m Normal Created pod/ovnkube-master-kzdhz Created container ovn-dbchecker openshift-ovn-kubernetes 59m Normal Started pod/ovnkube-node-x8pqn Started container ovn-controller default 59m Warning ErrorReconcilingNode node/ip-10-0-239-132.ec2.internal error creating gateway for node ip-10-0-239-132.ec2.internal: failed to init shared interface gateway: failed to create MAC Binding for dummy nexthop ip-10-0-239-132.ec2.internal: error getting datapath GR_ip-10-0-239-132.ec2.internal: object not found default 59m Warning ErrorReconcilingNode node/ip-10-0-197-197.ec2.internal error creating gateway for node ip-10-0-197-197.ec2.internal: failed to init shared interface gateway: failed to create MAC Binding for dummy nexthop ip-10-0-197-197.ec2.internal: error getting datapath GR_ip-10-0-197-197.ec2.internal: object not found openshift-ovn-kubernetes 59m Normal Created pod/ovnkube-node-x8pqn Created container ovn-controller openshift-ovn-kubernetes 59m Warning Unhealthy pod/ovnkube-node-x8pqn Readiness probe failed: default 59m Warning ErrorReconcilingNode node/ip-10-0-140-6.ec2.internal error creating gateway for node ip-10-0-140-6.ec2.internal: failed to init shared interface gateway: failed to create MAC Binding for dummy nexthop ip-10-0-140-6.ec2.internal: error getting datapath GR_ip-10-0-140-6.ec2.internal: object not found openshift-ovn-kubernetes 59m Normal Pulled pod/ovnkube-node-x8pqn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 59m Normal Started pod/ovnkube-node-wsrzb Started container ovn-controller openshift-ovn-kubernetes 59m Normal Pulled pod/ovnkube-node-8qw6d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 59m Warning Unhealthy pod/ovnkube-node-wsrzb Readiness probe failed: openshift-ovn-kubernetes 59m Normal Pulled pod/ovnkube-node-wsrzb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 59m Normal Created pod/ovnkube-node-8qw6d Created container ovn-controller openshift-ovn-kubernetes 59m Normal Started pod/ovnkube-node-8qw6d Started container ovn-controller openshift-ovn-kubernetes 59m Normal Created pod/ovnkube-node-wsrzb Created container ovn-controller openshift-operator-lifecycle-manager 59m Normal Pulling pod/package-server-manager-fc98f8f64-l2df9 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" openshift-kube-storage-version-migrator-operator 59m Normal AddedInterface pod/kube-storage-version-migrator-operator-7f8b95cf5f-x5hzl Add eth0 [10.130.0.32/23] from ovn-kubernetes openshift-machine-config-operator 59m Normal AddedInterface pod/machine-config-operator-7fd9cd8968-9vg57 Add eth0 [10.130.0.12/23] from ovn-kubernetes openshift-config-operator 59m Normal 
AddedInterface pod/openshift-config-operator-67bdbffb68-sdgx7 Add eth0 [10.130.0.24/23] from ovn-kubernetes openshift-etcd-operator 59m Normal AddedInterface pod/etcd-operator-775754ddff-xjxrm Add eth0 [10.130.0.8/23] from ovn-kubernetes openshift-controller-manager-operator 59m Normal Pulling pod/openshift-controller-manager-operator-6548869cc5-9kqx5 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8066a640500eaaf14c73b769e8792c0b420a927adb8db98ec47d9440a85d32" openshift-etcd-operator 59m Normal Pulling pod/etcd-operator-775754ddff-xjxrm Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" openshift-apiserver-operator 59m Normal Pulling pod/openshift-apiserver-operator-67fd94b9d7-nvg29 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:55b8c96568666d4340d71558c31742bd8b5c02ab0cca7913fa41586d5f2de697" openshift-cluster-storage-operator 59m Normal AddedInterface pod/csi-snapshot-controller-operator-c9586b974-wk85s Add eth0 [10.130.0.9/23] from ovn-kubernetes openshift-cluster-storage-operator 59m Normal Pulling pod/csi-snapshot-controller-operator-c9586b974-wk85s Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85e377fa5f92f13c07ca57eeaa575f7ef80ed954ae231f70ca70bfbe173b070b" openshift-controller-manager-operator 59m Normal AddedInterface pod/openshift-controller-manager-operator-6548869cc5-9kqx5 Add eth0 [10.130.0.23/23] from ovn-kubernetes openshift-config-operator 59m Normal Pulling pod/openshift-config-operator-67bdbffb68-sdgx7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6eca04bc4045ccf6694e6e0c94453e9c1d8dcbb669a58419603b3c2aab18488b" openshift-insights 59m Normal AddedInterface pod/insights-operator-6fd65c6b65-vrxhp Add eth0 [10.130.0.28/23] from ovn-kubernetes openshift-kube-controller-manager-operator 59m Normal Pulling pod/kube-controller-manager-operator-655bd6977c-z9mb9 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" openshift-machine-config-operator 59m Normal Started pod/machine-config-operator-7fd9cd8968-9vg57 Started container machine-config-operator openshift-machine-config-operator 59m Normal Created pod/machine-config-operator-7fd9cd8968-9vg57 Created container machine-config-operator openshift-kube-storage-version-migrator-operator 59m Normal Pulling pod/kube-storage-version-migrator-operator-7f8b95cf5f-x5hzl Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc8e1a30ec145b1e91f862880b9866d48abe8056fe69edd94d760739137b6d4a" openshift-kube-apiserver-operator 59m Normal AddedInterface pod/kube-apiserver-operator-79b598d5b4-dqp95 Add eth0 [10.130.0.13/23] from ovn-kubernetes openshift-machine-config-operator 59m Normal Pulled pod/machine-config-operator-7fd9cd8968-9vg57 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-kube-scheduler-operator 59m Normal Pulling pod/openshift-kube-scheduler-operator-c98d57874-wj7tl Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" openshift-insights 59m Normal Pulling pod/insights-operator-6fd65c6b65-vrxhp Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7cb4c45f3e100ceddafee4c6ccd57d79f5a6627686484aba625c1486c2ffc1c8" 
openshift-operator-lifecycle-manager 59m Normal AddedInterface pod/package-server-manager-fc98f8f64-l2df9 Add eth0 [10.130.0.14/23] from ovn-kubernetes openshift-apiserver-operator 59m Normal AddedInterface pod/openshift-apiserver-operator-67fd94b9d7-nvg29 Add eth0 [10.130.0.17/23] from ovn-kubernetes openshift-kube-controller-manager-operator 59m Normal AddedInterface pod/kube-controller-manager-operator-655bd6977c-z9mb9 Add eth0 [10.130.0.31/23] from ovn-kubernetes openshift-kube-scheduler-operator 59m Normal AddedInterface pod/openshift-kube-scheduler-operator-c98d57874-wj7tl Add eth0 [10.130.0.19/23] from ovn-kubernetes openshift-kube-apiserver-operator 59m Normal Pulling pod/kube-apiserver-operator-79b598d5b4-dqp95 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" openshift-cluster-storage-operator 59m Normal AddedInterface pod/cluster-storage-operator-fb5868667-cclnx Add eth0 [10.130.0.27/23] from ovn-kubernetes openshift-cloud-network-config-controller 59m Warning Failed pod/cloud-network-config-controller-7cc55b87d4-drl56 Error: ErrImagePull openshift-cluster-storage-operator 59m Normal Pulling pod/cluster-storage-operator-fb5868667-cclnx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2a4719dd49c67aa02ad187264977e0b64ad2b0d6725e99b1d460567663961ef4" openshift-service-ca-operator 59m Normal AddedInterface pod/service-ca-operator-7988896c96-5q667 Add eth0 [10.130.0.33/23] from ovn-kubernetes openshift-service-ca-operator 59m Normal Pulling pod/service-ca-operator-7988896c96-5q667 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7f7cb6554c1dc9b5b3b58f162f592062e5c63bf24c5ed90a62074e117be3f743" openshift-authentication-operator 59m Normal AddedInterface pod/authentication-operator-dbb89644b-tbxcm Add eth0 [10.130.0.21/23] from ovn-kubernetes default 59m Normal OperatorVersionChanged /machine-config clusteroperator/machine-config-operator started a version change from [] to [{operator 4.13.0-rc.0}] openshift-cloud-network-config-controller 59m Normal AddedInterface pod/cloud-network-config-controller-7cc55b87d4-drl56 Add eth0 [10.130.0.30/23] from ovn-kubernetes openshift-authentication-operator 59m Normal Pulling pod/authentication-operator-dbb89644b-tbxcm Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8b9deb101306eca89fb04662fd5266a3704ad19d6e54cae5ae79e373c0ec62d" openshift-cloud-network-config-controller 59m Warning Failed pod/cloud-network-config-controller-7cc55b87d4-drl56 Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737fbb45ea282de2eba6ed7c7e0112d62d31a74ed0dc6b9d0b1ad01975227945": pull QPS exceeded openshift-machine-config-operator 59m Normal CustomResourceDefinitionCreated deployment/machine-config-operator Created CustomResourceDefinition.apiextensions.k8s.io/controllerconfigs.machineconfiguration.openshift.io because it was missing openshift-machine-config-operator 59m Normal ClusterRoleCreated deployment/machine-config-operator Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing openshift-machine-config-operator 59m Normal SecretCreated deployment/machine-config-operator Created Secret/master-user-data-managed -n openshift-machine-api because it was missing openshift-machine-config-operator 59m Normal SecretCreated deployment/machine-config-operator Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing 
openshift-machine-config-operator 59m Normal ClusterRoleCreated deployment/machine-config-operator Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing openshift-machine-config-operator 59m Normal RoleBindingCreated deployment/machine-config-operator Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing openshift-machine-config-operator 59m Normal RoleBindingCreated deployment/machine-config-operator Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing openshift-machine-config-operator 59m Normal ClusterRoleBindingCreated deployment/machine-config-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing openshift-machine-config-operator 59m Normal ServiceAccountCreated deployment/machine-config-operator Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing openshift-cloud-network-config-controller 59m Normal BackOff pod/cloud-network-config-controller-7cc55b87d4-drl56 Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737fbb45ea282de2eba6ed7c7e0112d62d31a74ed0dc6b9d0b1ad01975227945" openshift-cloud-network-config-controller 59m Warning Failed pod/cloud-network-config-controller-7cc55b87d4-drl56 Error: ImagePullBackOff openshift-machine-config-operator 59m Normal SuccessfulCreate daemonset/machine-config-daemon Created pod: machine-config-daemon-s6f62 openshift-machine-config-operator 59m Normal SuccessfulCreate daemonset/machine-config-daemon Created pod: machine-config-daemon-ll5kq openshift-machine-config-operator 59m Normal SuccessfulCreate daemonset/machine-config-daemon Created pod: machine-config-daemon-zlzm2 openshift-machine-config-operator 59m Normal SecretCreated deployment/machine-config-operator Created Secret/cookie-secret -n openshift-machine-config-operator because it was missing openshift-kube-storage-version-migrator-operator 59m Normal Pulled pod/kube-storage-version-migrator-operator-7f8b95cf5f-x5hzl Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc8e1a30ec145b1e91f862880b9866d48abe8056fe69edd94d760739137b6d4a" in 11.225178763s (11.22518738s including waiting) openshift-cluster-storage-operator 59m Normal Created pod/csi-snapshot-controller-operator-c9586b974-wk85s Created container csi-snapshot-controller-operator openshift-kube-controller-manager-operator 59m Normal Pulled pod/kube-controller-manager-operator-655bd6977c-z9mb9 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" in 10.895339142s (10.895456203s including waiting) openshift-controller-manager-operator 59m Normal Pulled pod/openshift-controller-manager-operator-6548869cc5-9kqx5 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8066a640500eaaf14c73b769e8792c0b420a927adb8db98ec47d9440a85d32" in 11.251156641s (11.251164926s including waiting) openshift-cluster-storage-operator 59m Normal Pulled pod/cluster-storage-operator-fb5868667-cclnx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2a4719dd49c67aa02ad187264977e0b64ad2b0d6725e99b1d460567663961ef4" in 10.536991537s (10.536998135s including waiting) openshift-insights 59m Normal Pulled pod/insights-operator-6fd65c6b65-vrxhp Successfully pulled image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7cb4c45f3e100ceddafee4c6ccd57d79f5a6627686484aba625c1486c2ffc1c8" in 10.950229955s (10.950238504s including waiting) openshift-service-ca-operator 59m Normal Pulled pod/service-ca-operator-7988896c96-5q667 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7f7cb6554c1dc9b5b3b58f162f592062e5c63bf24c5ed90a62074e117be3f743" in 10.456980329s (10.457010412s including waiting) openshift-kube-scheduler-operator 59m Normal Pulled pod/openshift-kube-scheduler-operator-c98d57874-wj7tl Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" in 11.278931219s (11.278938396s including waiting) openshift-authentication-operator 59m Normal Pulled pod/authentication-operator-dbb89644b-tbxcm Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8b9deb101306eca89fb04662fd5266a3704ad19d6e54cae5ae79e373c0ec62d" in 10.463746372s (10.463753465s including waiting) openshift-etcd-operator 59m Normal Pulled pod/etcd-operator-775754ddff-xjxrm Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" in 10.740967697s (10.740974682s including waiting) openshift-operator-lifecycle-manager 59m Normal Created pod/package-server-manager-fc98f8f64-l2df9 Created container package-server-manager openshift-apiserver-operator 59m Normal Pulled pod/openshift-apiserver-operator-67fd94b9d7-nvg29 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:55b8c96568666d4340d71558c31742bd8b5c02ab0cca7913fa41586d5f2de697" in 10.978213373s (10.978219826s including waiting) openshift-operator-lifecycle-manager 59m Normal Pulled pod/package-server-manager-fc98f8f64-l2df9 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" in 11.035834925s (11.03584421s including waiting) openshift-cluster-storage-operator 59m Normal Pulled pod/csi-snapshot-controller-operator-c9586b974-wk85s Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85e377fa5f92f13c07ca57eeaa575f7ef80ed954ae231f70ca70bfbe173b070b" in 11.394254449s (11.394267809s including waiting) openshift-config-operator 59m Normal Pulled pod/openshift-config-operator-67bdbffb68-sdgx7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6eca04bc4045ccf6694e6e0c94453e9c1d8dcbb669a58419603b3c2aab18488b" in 10.718173755s (10.71818186s including waiting) openshift-kube-apiserver-operator 59m Normal Pulled pod/kube-apiserver-operator-79b598d5b4-dqp95 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" in 10.617830747s (10.617845902s including waiting) openshift-cluster-storage-operator 59m Normal Started pod/csi-snapshot-controller-operator-c9586b974-wk85s Started container csi-snapshot-controller-operator openshift-operator-lifecycle-manager 59m Normal Started pod/package-server-manager-fc98f8f64-l2df9 Started container package-server-manager openshift-kube-scheduler-operator 59m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 59m Warning FastControllerResync 
deployment/openshift-kube-scheduler-operator Controller "GuardController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 59m Normal LeaderElection lease/csi-snapshot-controller-operator-lock csi-snapshot-controller-operator-c9586b974-wk85s_53b8d07a-ce0e-437e-b8bc-96ceedbe97b2 became leader openshift-kube-scheduler-operator 59m Normal LeaderElection lease/openshift-cluster-kube-scheduler-operator-lock openshift-kube-scheduler-operator-c98d57874-wj7tl_a98ba682-ee5e-46f4-ac9f-f7e3784f31ec became leader openshift-kube-scheduler-operator 59m Normal LeaderElection configmap/openshift-cluster-kube-scheduler-operator-lock openshift-kube-scheduler-operator-c98d57874-wj7tl_a98ba682-ee5e-46f4-ac9f-f7e3784f31ec became leader openshift-cluster-storage-operator 59m Warning FastControllerResync deployment/csi-snapshot-controller-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 59m Normal LeaderElection configmap/csi-snapshot-controller-operator-lock csi-snapshot-controller-operator-c9586b974-wk85s_53b8d07a-ce0e-437e-b8bc-96ceedbe97b2 became leader openshift-cluster-storage-operator 59m Warning FastControllerResync deployment/cluster-storage-operator Controller "SnapshotCRDController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 59m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 59m Normal LeaderElection configmap/cluster-storage-operator-lock cluster-storage-operator-fb5868667-cclnx_eb922104-b16c-4a4d-bd79-cff2fe0e2a63 became leader openshift-cluster-storage-operator 59m Normal LeaderElection lease/cluster-storage-operator-lock cluster-storage-operator-fb5868667-cclnx_eb922104-b16c-4a4d-bd79-cff2fe0e2a63 became leader openshift-kube-scheduler-operator 59m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "NodeController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 59m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "PruneController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 59m Warning FastControllerResync deployment/cluster-storage-operator Controller "DefaultStorageClassController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 59m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 59m Warning FastControllerResync deployment/kube-apiserver-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 59m Normal ObservedConfigChanged deployment/openshift-kube-scheduler-operator Writing updated observed config: map[string]any{... 
openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"} {"sharedresource.openshift.io" "sharedconfigmaps" "" ""} {"sharedresource.openshift.io" "sharedsecrets" "" ""}],status.versions changed from [] to [{"operator" "4.13.0-rc.0"}] openshift-cluster-storage-operator 59m Normal OperatorVersionChanged deployment/cluster-storage-operator clusteroperator/storage version "operator" changed from "" to "4.13.0-rc.0" openshift-cluster-storage-operator 59m Warning FastControllerResync deployment/cluster-storage-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 59m Warning FastControllerResync deployment/cluster-storage-operator Controller "VSphereProblemDetectorStarter" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 59m Warning FastControllerResync deployment/cluster-storage-operator Controller "CSIDriverStarter" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 59m Normal ObserveTLSSecurityProfile deployment/openshift-kube-scheduler-operator cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: StorageClass provided by supplied CSI Driver instead of the cluster-storage-operator") openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: status.relatedObjects changed from [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"} {"sharedresource.openshift.io" "sharedconfigmaps" "" ""} {"sharedresource.openshift.io" "sharedsecrets" "" ""}] to [{"" "serviceaccounts" "openshift-cluster-csi-drivers" "aws-ebs-csi-driver-operator"} {"rbac.authorization.k8s.io" "roles" "openshift-cluster-csi-drivers" "aws-ebs-csi-driver-operator-role"} {"rbac.authorization.k8s.io" "rolebindings" "openshift-cluster-csi-drivers" "aws-ebs-csi-driver-operator-rolebinding"} {"rbac.authorization.k8s.io" "clusterroles" "" "aws-ebs-csi-driver-operator-clusterrole"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "aws-ebs-csi-driver-operator-clusterrolebinding"} {"rbac.authorization.k8s.io" "roles" "openshift-config-managed" "aws-ebs-csi-driver-operator-aws-config-role"} {"rbac.authorization.k8s.io" "rolebindings" 
"openshift-config-managed" "aws-ebs-csi-driver-operator-aws-config-clusterrolebinding"} {"operator.openshift.io" "clustercsidrivers" "" "ebs.csi.aws.com"} {"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"} {"sharedresource.openshift.io" "sharedconfigmaps" "" ""} {"sharedresource.openshift.io" "sharedsecrets" "" ""}] openshift-kube-scheduler-operator 59m Normal ObserveTLSSecurityProfile deployment/openshift-kube-scheduler-operator minTLSVersion changed to VersionTLS12 openshift-cluster-storage-operator 59m Normal DeploymentCreated deployment/cluster-storage-operator Created Deployment.apps/aws-ebs-csi-driver-operator -n openshift-cluster-csi-drivers because it was missing openshift-kube-storage-version-migrator-operator 59m Warning FastControllerResync deployment/kube-storage-version-migrator-operator Controller "StaticConditionsController" resync interval is set to 0s which might lead to client request throttling openshift-kube-storage-version-migrator-operator 59m Warning FastControllerResync deployment/kube-storage-version-migrator-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing changed from False to True ("AWSEBSProgressing: Waiting for Deployment to act on changes") openshift-cluster-storage-operator 59m Normal ServiceAccountCreated deployment/cluster-storage-operator Created ServiceAccount/aws-ebs-csi-driver-operator -n openshift-cluster-csi-drivers because it was missing openshift-kube-storage-version-migrator-operator 59m Normal OperatorVersionChanged deployment/kube-storage-version-migrator-operator clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.13.0-rc.0" openshift-kube-storage-version-migrator-operator 59m Normal OperatorStatusChanged deployment/kube-storage-version-migrator-operator Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown 
(""),Upgradeable set to Unknown (""),status.versions changed from [] to [{"operator" "4.13.0-rc.0"}] openshift-cluster-storage-operator 59m Normal ClusterCSIDriverCreated deployment/cluster-storage-operator Created ClusterCSIDriver.operator.openshift.io/ebs.csi.aws.com because it was missing openshift-kube-storage-version-migrator-operator 59m Normal OperatorStatusChanged deployment/kube-storage-version-migrator-operator Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),status.versions changed from [] to [{"operator" "4.13.0-rc.0"}] openshift-kube-storage-version-migrator-operator 59m Normal OperatorStatusChanged deployment/kube-storage-version-migrator-operator Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well") openshift-kube-storage-version-migrator-operator 59m Normal ServiceAccountCreated deployment/kube-storage-version-migrator-operator Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing openshift-cluster-storage-operator 59m Normal RoleCreated deployment/cluster-storage-operator Created Role.rbac.authorization.k8s.io/aws-ebs-csi-driver-operator-role -n openshift-cluster-csi-drivers because it was missing openshift-cluster-storage-operator 59m Normal RoleBindingCreated deployment/cluster-storage-operator Created RoleBinding.rbac.authorization.k8s.io/aws-ebs-csi-driver-operator-rolebinding -n openshift-cluster-csi-drivers because it was missing openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing message changed from "All is well" to "NodeInstallerProgressing: 3 nodes are at revision 0",Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0" openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: 
localhost-recovery-client-token-0]" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready" openshift-kube-controller-manager-operator 59m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.13.0-rc.0"}] openshift-kube-controller-manager-operator 59m Normal PodDisruptionBudgetCreated deployment/kube-controller-manager-operator Created PodDisruptionBudget.policy/kube-controller-manager-guard-pdb -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 59m Normal PodDisruptionBudgetCreated deployment/openshift-kube-scheduler-operator Created PodDisruptionBudget.policy/openshift-kube-scheduler-guard-pdb -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 59m Normal MasterNodeObserved deployment/openshift-kube-scheduler-operator Observed new master node ip-10-0-197-197.ec2.internal openshift-kube-scheduler-operator 59m Normal MasterNodeObserved deployment/openshift-kube-scheduler-operator Observed new master node ip-10-0-140-6.ec2.internal openshift-kube-scheduler-operator 59m Normal MasterNodeObserved deployment/openshift-kube-scheduler-operator Observed new master node ip-10-0-239-132.ec2.internal openshift-cluster-storage-operator 59m Normal ClusterRoleCreated deployment/cluster-storage-operator Created ClusterRole.rbac.authorization.k8s.io/aws-ebs-csi-driver-operator-clusterrole because it was missing openshift-kube-controller-manager-operator 59m Normal OperatorVersionChanged deployment/kube-controller-manager-operator clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.13.0-rc.0" openshift-kube-controller-manager-operator 59m Normal CABundleUpdateRequired deployment/kube-controller-manager-operator "csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert openshift-kube-controller-manager-operator 59m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 59m Normal OperatorStatusChanged 
deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Upgradeable changed from Unknown to True ("All is well") openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well") openshift-kube-controller-manager-operator 59m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "GuardController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]"),Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") openshift-kube-controller-manager-operator 59m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 59m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.13.0-rc.0"}] openshift-kube-storage-version-migrator-operator 59m Normal ClusterRoleBindingCreated deployment/kube-storage-version-migrator-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing openshift-kube-storage-version-migrator-operator 59m Normal DeploymentCreated deployment/kube-storage-version-migrator-operator Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing openshift-kube-storage-version-migrator-operator 59m Normal OperatorStatusChanged deployment/kube-storage-version-migrator-operator Status for clusteroperator/kube-storage-version-migrator changed: Degraded 
changed from Unknown to False ("All is well") openshift-kube-storage-version-migrator-operator 59m Normal OperatorStatusChanged deployment/kube-storage-version-migrator-operator Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment") openshift-kube-scheduler-operator 59m Normal OperatorVersionChanged deployment/openshift-kube-scheduler-operator clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.13.0-rc.0" openshift-kube-scheduler-operator 59m Normal RevisionTriggered deployment/openshift-kube-scheduler-operator new revision 1 triggered by "configmap \"kube-scheduler-pod\" not found" openshift-cluster-csi-drivers 59m Normal ScalingReplicaSet deployment/aws-ebs-csi-driver-operator Scaled up replica set aws-ebs-csi-driver-operator-667bfc499d to 1 openshift-cluster-csi-drivers 59m Warning FailedCreate replicaset/aws-ebs-csi-driver-operator-667bfc499d Error creating: pods "aws-ebs-csi-driver-operator-667bfc499d-" is forbidden: error looking up service account openshift-cluster-csi-drivers/aws-ebs-csi-driver-operator: serviceaccount "aws-ebs-csi-driver-operator" not found openshift-kube-scheduler-operator 59m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling openshift-kube-storage-version-migrator-operator 59m Normal OperatorStatusChanged deployment/kube-storage-version-migrator-operator Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods" openshift-authentication-operator 59m Warning FastControllerResync deployment/authentication-operator Controller "OAuthAPIServerControllerWorkloadController" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 59m Warning FastControllerResync deployment/authentication-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 59m Normal ServiceAccountIssuer deployment/kube-apiserver-operator Issuer set to default value "https://kubernetes.default.svc" openshift-kube-apiserver-operator 59m Normal OperatorVersionChanged deployment/kube-apiserver-operator clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.13.0-rc.0" openshift-apiserver-operator 59m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "OpenShiftAPIServerWorkloadController" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 59m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling openshift-authentication-operator 59m Warning FastControllerResync deployment/authentication-operator Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling openshift-authentication-operator 59m Warning FastControllerResync deployment/authentication-operator 
Controller "SecretRevisionPruneController" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 59m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Upgradeable changed from Unknown to True ("All is well") openshift-kube-apiserver-operator 59m Warning FastControllerResync deployment/kube-apiserver-operator Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 59m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.versions changed from [] to [{"raw-internal" "4.13.0-rc.0"}] openshift-etcd-operator 59m Warning RequiredInstallerResourcesMissing deployment/etcd-operator configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, secrets: etcd-all-certs, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0 openshift-etcd-operator 59m Normal RevisionTriggered deployment/etcd-operator new revision 1 triggered by "configmap \"etcd-pod\" not found" openshift-etcd-operator 59m Normal OperatorVersionChanged deployment/etcd-operator clusteroperator/etcd version "raw-internal" changed from "" to "4.13.0-rc.0" openshift-authentication-operator 59m Warning FastControllerResync deployment/authentication-operator Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling openshift-authentication-operator 59m Normal LeaderElection configmap/cluster-authentication-operator-lock authentication-operator-dbb89644b-tbxcm_e550219f-7838-4d4c-a1d9-b07de870e09a became leader openshift-authentication-operator 59m Normal LeaderElection lease/cluster-authentication-operator-lock authentication-operator-dbb89644b-tbxcm_e550219f-7838-4d4c-a1d9-b07de870e09a became leader openshift-etcd-operator 59m Normal OperatorLogLevelChange deployment/etcd-operator Operator log level changed from "Debug" to "Normal" openshift-etcd-operator 59m Warning FastControllerResync deployment/etcd-operator Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 59m Warning ReportEtcdMembersErrorUpdatingStatus deployment/etcd-operator etcds.operator.openshift.io "cluster" not found openshift-authentication-operator 59m Warning FastControllerResync deployment/authentication-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 59m Warning FastControllerResync deployment/authentication-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 59m Normal RevisionTriggered deployment/authentication-operator new revision 1 triggered by "configmap \"audit\" not found" openshift-authentication-operator 59m Normal OperatorVersionChanged deployment/authentication-operator clusteroperator/authentication version "operator" changed from "" to "4.13.0-rc.0" openshift-authentication-operator 59m Normal 
OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to False ("OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"),Upgradeable set to True ("All is well"),status.relatedObjects changed from [] to [{"operator.openshift.io" "authentications" "" "cluster"} {"config.openshift.io" "authentications" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"route.openshift.io" "routes" "openshift-authentication" "oauth-openshift"} {"" "services" "openshift-authentication" "oauth-openshift"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-authentication"} {"" "namespaces" "" "openshift-authentication-operator"} {"" "namespaces" "" "openshift-ingress"} {"" "namespaces" "" "openshift-oauth-apiserver"}],status.versions changed from [] to [{"operator" "4.13.0-rc.0"}] default 59m Warning FailedToCreateEndpoint endpoints/csi-snapshot-webhook Failed to create endpoint for service openshift-cluster-storage-operator/csi-snapshot-webhook: endpoints "csi-snapshot-webhook" already exists openshift-kube-apiserver-operator 59m Warning FastControllerResync deployment/kube-apiserver-operator Controller "webhookSupportabilityController" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 59m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-kube-storage-version-migrator-operator 59m Normal NamespaceCreated deployment/kube-storage-version-migrator-operator Created Namespace/openshift-kube-storage-version-migrator because it was missing openshift-kube-apiserver-operator 59m Warning FastControllerResync deployment/kube-apiserver-operator Controller "KubeletVersionSkewController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 59m Normal ScalingReplicaSet deployment/csi-snapshot-controller Scaled up replica set csi-snapshot-controller-f58c44499 to 2 openshift-etcd-operator 59m Normal LeaderElection configmap/openshift-cluster-etcd-operator-lock etcd-operator-775754ddff-xjxrm_5bdb2f6f-2e6d-473d-b13b-4d039c21ba55 became leader openshift-kube-apiserver-operator 59m Warning FastControllerResync deployment/kube-apiserver-operator Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling openshift-etcd-operator 59m Warning FastControllerResync deployment/etcd-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 59m Normal CustomResourceDefinitionCreated deployment/csi-snapshot-controller-operator Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotWebhookControllerAvailable: 
Waiting for Deployment") openshift-cluster-storage-operator 59m Normal PodDisruptionBudgetCreated deployment/csi-snapshot-controller-operator Created PodDisruptionBudget.policy/csi-snapshot-webhook-pdb -n openshift-cluster-storage-operator because it was missing openshift-etcd-operator 59m Warning FastControllerResync deployment/etcd-operator Controller "GuardController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 59m Normal CustomResourceDefinitionCreated deployment/csi-snapshot-controller-operator Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing openshift-cluster-storage-operator 59m Normal PodDisruptionBudgetCreated deployment/csi-snapshot-controller-operator Created PodDisruptionBudget.policy/csi-snapshot-controller-pdb -n openshift-cluster-storage-operator because it was missing openshift-kube-storage-version-migrator-operator 59m Normal LeaderElection configmap/openshift-kube-storage-version-migrator-operator-lock kube-storage-version-migrator-operator-7f8b95cf5f-x5hzl_68c990c1-af49-4146-a5c5-8ceb5b0acefe became leader openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well") openshift-cluster-storage-operator 59m Normal ServiceCreated deployment/csi-snapshot-controller-operator Created Service/csi-snapshot-webhook -n openshift-cluster-storage-operator because it was missing openshift-cluster-storage-operator 59m Normal DeploymentCreated deployment/csi-snapshot-controller-operator Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing openshift-cluster-storage-operator 59m Normal CustomResourceDefinitionCreated deployment/csi-snapshot-controller-operator Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing openshift-cluster-storage-operator 59m Normal DeploymentCreated deployment/csi-snapshot-controller-operator Created Deployment.apps/csi-snapshot-webhook -n openshift-cluster-storage-operator because it was missing openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}] openshift-cluster-storage-operator 59m Normal ServiceAccountCreated deployment/csi-snapshot-controller-operator Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing openshift-cluster-storage-operator 59m Normal ValidatingWebhookConfigurationCreated deployment/csi-snapshot-controller-operator Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/snapshot.storage.k8s.io because it was missing openshift-kube-scheduler 59m Normal NoPods poddisruptionbudget/openshift-kube-scheduler-guard-pdb No matching pods found openshift-etcd-operator 59m Normal LeaderElection lease/openshift-cluster-etcd-operator-lock etcd-operator-775754ddff-xjxrm_5bdb2f6f-2e6d-473d-b13b-4d039c21ba55 became leader openshift-etcd-operator 59m Warning 
FastControllerResync deployment/etcd-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 59m Normal SuccessfulCreate replicaset/csi-snapshot-webhook-75476bf784 Created pod: csi-snapshot-webhook-75476bf784-7z4rl openshift-authentication-operator 59m Warning FastControllerResync deployment/authentication-operator Controller "OAuthServerWorkloadController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 59m Normal SuccessfulCreate replicaset/csi-snapshot-webhook-75476bf784 Created pod: csi-snapshot-webhook-75476bf784-7vh6f openshift-etcd-operator 59m Warning FastControllerResync deployment/etcd-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 59m Normal NoPods poddisruptionbudget/kube-controller-manager-guard-pdb No matching pods found openshift-kube-apiserver-operator 59m Warning FastControllerResync deployment/kube-apiserver-operator Controller "FeatureUpgradeableController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 59m Normal ScalingReplicaSet deployment/csi-snapshot-webhook Scaled up replica set csi-snapshot-webhook-75476bf784 to 2 openshift-cluster-storage-operator 59m Normal SuccessfulCreate replicaset/csi-snapshot-controller-f58c44499 Created pod: csi-snapshot-controller-f58c44499-k4v7v openshift-cluster-storage-operator 59m Normal SuccessfulCreate replicaset/csi-snapshot-controller-f58c44499 Created pod: csi-snapshot-controller-f58c44499-qvgsh openshift-kube-apiserver-operator 59m Normal LeaderElection lease/kube-apiserver-operator-lock kube-apiserver-operator-79b598d5b4-dqp95_98f6b4f0-48f8-4ee3-be3f-bcdb6729152d became leader openshift-apiserver-operator 59m Normal LeaderElection configmap/openshift-apiserver-operator-lock openshift-apiserver-operator-67fd94b9d7-nvg29_14d7f0d6-1237-4945-9b78-888a09a82553 became leader openshift-kube-controller-manager-operator 59m Normal LeaderElection configmap/kube-controller-manager-operator-lock kube-controller-manager-operator-655bd6977c-z9mb9_eecf92f2-b08d-4265-beac-bd4d51ddcc1d became leader openshift-apiserver-operator 59m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "ConnectivityCheckController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 59m Warning FastControllerResync deployment/kube-apiserver-operator Controller "GuardController" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 59m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 59m Normal LeaderElection lease/kube-controller-manager-operator-lock kube-controller-manager-operator-655bd6977c-z9mb9_eecf92f2-b08d-4265-beac-bd4d51ddcc1d became leader openshift-etcd-operator 59m Warning FastControllerResync deployment/etcd-operator Controller "NodeController" resync interval is set to 0s which might lead to client request throttling kube-system 59m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-kube-storage-version-migrator namespace openshift-kube-storage-version-migrator 59m 
Normal AddedInterface pod/migrator-579f5cd9c5-sk4xj Add eth0 [10.129.0.7/23] from ovn-kubernetes openshift-kube-storage-version-migrator 59m Normal Pulling pod/migrator-579f5cd9c5-sk4xj Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39ef66439265e28941d847694107b349dff04d9cc64f0b713882e1895ea2acb9" openshift-apiserver-operator 59m Normal LeaderElection lease/openshift-apiserver-operator-lock openshift-apiserver-operator-67fd94b9d7-nvg29_14d7f0d6-1237-4945-9b78-888a09a82553 became leader openshift-apiserver-operator 59m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 59m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling openshift-kube-apiserver-operator 59m Warning FastControllerResync deployment/kube-apiserver-operator Controller "ConnectivityCheckController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 59m Warning FastControllerResync deployment/kube-apiserver-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 59m Warning FastControllerResync deployment/kube-apiserver-operator Controller "PruneController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 59m Warning FastControllerResync deployment/kube-apiserver-operator Controller "NodeController" resync interval is set to 0s which might lead to client request throttling openshift-kube-storage-version-migrator 59m Normal SuccessfulCreate replicaset/migrator-579f5cd9c5 Created pod: migrator-579f5cd9c5-sk4xj openshift-apiserver-operator 59m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "SecretRevisionPruneController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 59m Warning FastControllerResync deployment/kube-apiserver-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-kube-storage-version-migrator 59m Normal ScalingReplicaSet deployment/migrator Scaled up replica set migrator-579f5cd9c5 to 1 openshift-kube-controller-manager-operator 59m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "NodeController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 59m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "PruneController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 59m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 59m Warning FastControllerResync deployment/kube-apiserver-operator Controller "EventWatchController" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 59m Warning FastControllerResync deployment/etcd-operator Controller "RevisionController" resync interval is set to 0s which might 
lead to client request throttling openshift-etcd-operator 59m Warning FastControllerResync deployment/etcd-operator Controller "PruneController" resync interval is set to 0s which might lead to client request throttling openshift-controller-manager-operator 59m Normal ClusterRoleCreated deployment/openshift-controller-manager-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing openshift-controller-manager-operator 59m Normal LeaderElection lease/openshift-controller-manager-operator-lock openshift-controller-manager-operator-6548869cc5-9kqx5_3fb16691-188f-4440-ab42-0d58094b6988 became leader openshift-controller-manager-operator 59m Normal LeaderElection configmap/openshift-controller-manager-operator-lock openshift-controller-manager-operator-6548869cc5-9kqx5_3fb16691-188f-4440-ab42-0d58094b6988 became leader openshift-multus 59m Warning FailedMount pod/multus-admission-controller-6f95d97cb6-7wv72 MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found kube-system 59m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-controller-manager namespace kube-system 59m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-route-controller-manager namespace kube-system 59m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-service-ca namespace openshift-multus 59m Warning FailedMount pod/multus-admission-controller-6f95d97cb6-x5s87 MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found openshift-apiserver-operator 59m Normal OperatorVersionChanged deployment/openshift-apiserver-operator clusteroperator/openshift-apiserver version "operator" changed from "" to "4.13.0-rc.0" openshift-apiserver-operator 59m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.13.0-rc.0"}] openshift-apiserver-operator 59m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver 
changed: Degraded changed from Unknown to False ("APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: "),Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found") openshift-machine-api 59m Warning FailedMount pod/machine-api-operator-564474f8c6-284hs MountVolume.SetUp failed for volume "machine-api-operator-tls" : secret "machine-api-operator-tls" not found openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes",Available message changed from "CSISnapshotWebhookControllerAvailable: Waiting for Deployment" to "CSISnapshotControllerAvailable: Waiting for Deployment\nCSISnapshotWebhookControllerAvailable: Waiting for Deployment" openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes" openshift-cluster-storage-operator 59m Warning FailedMount pod/csi-snapshot-webhook-75476bf784-7z4rl MountVolume.SetUp failed for volume "certs" : failed to sync secret cache: timed out waiting for the condition openshift-cluster-storage-operator 59m Warning FailedMount pod/csi-snapshot-webhook-75476bf784-7z4rl MountVolume.SetUp failed for volume "kube-api-access-qbpvp" : failed to sync configmap cache: timed out waiting for the condition openshift-service-ca-operator 59m Normal RoleBindingCreated deployment/service-ca-operator Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing openshift-cluster-storage-operator 59m Normal Pulling pod/csi-snapshot-controller-f58c44499-qvgsh Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f6985210e2dec2b96cd8cd1dc6965ce2710b23b2c515d9ae67a694245bd41082" openshift-cluster-storage-operator 59m Normal AddedInterface pod/csi-snapshot-controller-f58c44499-qvgsh Add eth0 [10.129.0.5/23] from ovn-kubernetes openshift-ingress-operator 59m Warning FailedMount pod/ingress-operator-6486794b49-42ddh MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSProgressing: Waiting for Deployment to act on changes" to "AWSEBSProgressing: Waiting for Deployment to deploy pods" openshift-dns-operator 59m Warning FailedMount pod/dns-operator-656b9bd9f9-lb9ps MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found openshift-config-operator 59m Normal OperatorStatusChanged deployment/openshift-config-operator Status for clusteroperator/config-operator changed: Degraded 
changed from Unknown to False ("All is well") openshift-config-operator 59m Normal OperatorStatusChanged deployment/openshift-config-operator Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),status.relatedObjects changed from [] to [{"operator.openshift.io" "configs" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-operator"}],status.versions changed from [] to [{"operator" "4.13.0-rc.0"}] openshift-config-operator 59m Normal OperatorVersionChanged deployment/openshift-config-operator clusteroperator/config-operator version "operator" changed from "" to "4.13.0-rc.0" openshift-config-operator 59m Normal ConfigOperatorStatusChanged deployment/openshift-config-operator Operator conditions defaulted: [{OperatorAvailable True 2023-03-21 12:14:31 +0000 UTC AsExpected } {OperatorProgressing False 2023-03-21 12:14:31 +0000 UTC AsExpected } {OperatorUpgradeable True 2023-03-21 12:14:31 +0000 UTC AsExpected }] openshift-config-operator 59m Warning FastControllerResync deployment/openshift-config-operator Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling openshift-config-operator 59m Warning FastControllerResync deployment/openshift-config-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 59m Normal RevisionTriggered deployment/kube-controller-manager-operator new revision 1 triggered by "configmap \"kube-controller-manager-pod\" not found" openshift-kube-apiserver-operator 59m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready"),Upgradeable changed from Unknown to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.") openshift-controller-manager-operator 59m Normal ClusterRoleCreated deployment/openshift-controller-manager-operator Created ClusterRole.rbac.authorization.k8s.io/system:deployer because it was missing openshift-cluster-storage-operator 59m Normal Pulling pod/csi-snapshot-controller-f58c44499-k4v7v Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f6985210e2dec2b96cd8cd1dc6965ce2710b23b2c515d9ae67a694245bd41082" openshift-cloud-credential-operator 59m Warning FailedMount pod/cloud-credential-operator-7fffc6cb67-gkvnc MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" : secret "cloud-credential-operator-serving-cert" not found openshift-cluster-storage-operator 59m Normal RoleCreated deployment/cluster-storage-operator Created Role.rbac.authorization.k8s.io/aws-ebs-csi-driver-operator-aws-config-role -n openshift-config-managed because it was missing openshift-kube-controller-manager-operator 59m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("All is well") openshift-cluster-node-tuning-operator 59m Warning FailedMount pod/cluster-node-tuning-operator-5886c76fd4-7qpt5 MountVolume.SetUp failed for volume "apiservice-cert" : secret "performance-addon-operator-webhook-cert" not found openshift-cluster-node-tuning-operator 59m Warning FailedMount 
pod/cluster-node-tuning-operator-5886c76fd4-7qpt5 MountVolume.SetUp failed for volume "node-tuning-operator-tls" : secret "node-tuning-operator-tls" not found openshift-service-ca-operator 59m Warning FastControllerResync deployment/service-ca-operator Controller "ServiceCAOperator" resync interval is set to 0s which might lead to client request throttling openshift-service-ca-operator 59m Normal RoleCreated deployment/service-ca-operator Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing openshift-kube-apiserver-operator 59m Normal MasterNodeObserved deployment/kube-apiserver-operator Observed new master node ip-10-0-197-197.ec2.internal openshift-kube-apiserver-operator 59m Normal MasterNodeObserved deployment/kube-apiserver-operator Observed new master node ip-10-0-239-132.ec2.internal openshift-kube-apiserver-operator 59m Normal MasterNodeObserved deployment/kube-apiserver-operator Observed new master node ip-10-0-140-6.ec2.internal openshift-kube-apiserver-operator 59m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.13.0-rc.0"}] openshift-controller-manager-operator 59m Warning ConfigMapCreateFailed deployment/openshift-controller-manager-operator Failed to create ConfigMap/config -n openshift-controller-manager: namespaces "openshift-controller-manager" not found openshift-controller-manager-operator 59m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" 
"openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}] openshift-controller-manager-operator 59m Normal ObserveFeatureFlagsUpdated deployment/openshift-controller-manager-operator Updated featureGates to openshift-controller-manager-operator 59m Normal ObservedConfigChanged deployment/openshift-controller-manager-operator Writing updated observed config: map[string]any{... openshift-controller-manager-operator 59m Normal ClusterRoleCreated deployment/openshift-controller-manager-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing openshift-controller-manager-operator 59m Normal ClusterRoleBindingCreated deployment/openshift-controller-manager-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing openshift-controller-manager-operator 59m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well") openshift-operator-lifecycle-manager 59m Normal LeaderElection lease/packageserver-controller-lock package-server-manager-fc98f8f64-l2df9_8f31042d-2f72-4d94-97e8-b1e3935fc2b6 became leader openshift-cluster-csi-drivers 59m Normal AddedInterface pod/aws-ebs-csi-driver-operator-667bfc499d-pjs9d Add eth0 [10.128.0.7/23] from ovn-kubernetes openshift-cluster-machine-approver 59m Warning FailedMount pod/machine-approver-5cd47987c9-96cvq MountVolume.SetUp failed for volume "machine-approver-tls" : secret "machine-approver-tls" not found openshift-cluster-csi-drivers 59m Normal Pulling pod/aws-ebs-csi-driver-operator-667bfc499d-pjs9d Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:189279778e9140f0b47f3e8c58ac6262cf1dbe573ae1a651e8a6e675b7d7b369" openshift-controller-manager-operator 59m Normal ClusterRoleCreated deployment/openshift-controller-manager-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing openshift-controller-manager-operator 59m Normal ClusterRoleBindingCreated deployment/openshift-controller-manager-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Degraded message changed from "All is well" to "AWSEBSCSIDriverOperatorDeploymentDegraded: deployment openshift-cluster-csi-drivers/aws-ebs-csi-driver-operator has some pods failing; unavailable replicas=1" openshift-cluster-csi-drivers 59m Normal SuccessfulCreate replicaset/aws-ebs-csi-driver-operator-667bfc499d Created pod: aws-ebs-csi-driver-operator-667bfc499d-pjs9d openshift-cluster-storage-operator 59m Normal ClusterRoleBindingCreated deployment/cluster-storage-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/aws-ebs-csi-driver-operator-clusterrolebinding because it was missing openshift-operator-lifecycle-manager 59m Warning FailedMount pod/olm-operator-647f89bf4f-rgnx9 MountVolume.SetUp failed for volume "srv-cert" : secret "olm-operator-serving-cert" not found openshift-kube-scheduler-operator 59m Warning RequiredInstallerResourcesMissing 
deployment/openshift-kube-scheduler-operator secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0 openshift-kube-controller-manager-operator 59m Normal ObservedConfigChanged deployment/kube-controller-manager-operator Writing updated observed config: map[string]any{... openshift-controller-manager-operator 59m Warning ConfigMapCreateFailed deployment/openshift-controller-manager-operator Failed to create ConfigMap/openshift-service-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found openshift-kube-controller-manager-operator 59m Normal ObserveFeatureFlagsUpdated deployment/kube-controller-manager-operator Updated featureGates to APIPriorityAndFairness=true,RotateKubeletServerCertificate=true,DownwardAPIHugePages=true,OpenShiftPodSecurityAdmission=true,RetroactiveDefaultStorageClass=false openshift-controller-manager-operator 59m Normal ClusterRoleCreated deployment/openshift-controller-manager-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing openshift-controller-manager-operator 59m Normal ClusterRoleBindingCreated deployment/openshift-controller-manager-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing openshift-kube-controller-manager-operator 59m Normal ObserveTLSSecurityProfile deployment/kube-controller-manager-operator cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] openshift-kube-controller-manager-operator 59m Normal ObserveTLSSecurityProfile deployment/kube-controller-manager-operator minTLSVersion changed to VersionTLS12 openshift-kube-controller-manager-operator 59m Normal ObserveFeatureFlagsUpdated deployment/kube-controller-manager-operator Updated extendedArguments.feature-gates to APIPriorityAndFairness=true,RotateKubeletServerCertificate=true,DownwardAPIHugePages=true,OpenShiftPodSecurityAdmission=true,RetroactiveDefaultStorageClass=false openshift-service-ca-operator 59m Normal ClusterRoleBindingCreated deployment/service-ca-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing openshift-controller-manager-operator 59m Warning ConfigMapCreateFailed deployment/openshift-controller-manager-operator Failed to create ConfigMap/openshift-global-ca -n openshift-controller-manager: namespaces "openshift-controller-manager" not found openshift-service-ca-operator 59m Normal ClusterRoleCreated deployment/service-ca-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing openshift-config-operator 59m Normal LeaderElection lease/config-operator-lock openshift-config-operator-67bdbffb68-sdgx7_469f05ba-bb6a-4067-9ef8-632982d057a5 became leader openshift-config-operator 59m Normal LeaderElection configmap/config-operator-lock openshift-config-operator-67bdbffb68-sdgx7_469f05ba-bb6a-4067-9ef8-632982d057a5 became leader openshift-service-ca-operator 59m Normal NamespaceCreated deployment/service-ca-operator Created Namespace/openshift-service-ca because it was 
missing openshift-controller-manager-operator 59m Warning RoleBindingCreateFailed deployment/openshift-controller-manager-operator Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found openshift-machine-api 59m Warning FailedMount pod/control-plane-machine-set-operator-77b4c948f8-s7qsh MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : secret "control-plane-machine-set-operator-tls" not found openshift-service-ca-operator 59m Normal OperatorStatusChanged deployment/service-ca-operator Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well") openshift-service-ca-operator 59m Normal LeaderElection configmap/service-ca-operator-lock service-ca-operator-7988896c96-5q667_563275ae-7a4d-46be-9a54-d3f1fbf0e8fb became leader openshift-service-ca-operator 59m Normal LeaderElection lease/service-ca-operator-lock service-ca-operator-7988896c96-5q667_563275ae-7a4d-46be-9a54-d3f1fbf0e8fb became leader openshift-service-ca-operator 59m Normal OperatorStatusChanged deployment/service-ca-operator Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}] openshift-service-ca-operator 59m Warning FastControllerResync deployment/service-ca-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/revision-status-1 -n openshift-kube-scheduler because it was missing openshift-controller-manager-operator 59m Normal ClusterRoleBindingCreated deployment/openshift-controller-manager-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing 
openshift-controller-manager-operator 59m Normal ClusterRoleCreated deployment/openshift-controller-manager-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing openshift-controller-manager-operator 59m Normal ClusterRoleBindingCreated deployment/openshift-controller-manager-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:deployer because it was missing openshift-controller-manager-operator 59m Warning RoleCreateFailed deployment/openshift-controller-manager-operator Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager: namespaces "openshift-controller-manager" not found openshift-controller-manager-operator 59m Normal ClusterRoleBindingCreated deployment/openshift-controller-manager-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" openshift-controller-manager-operator 59m Warning ConfigMapCreateFailed deployment/openshift-controller-manager-operator Failed to create ConfigMap/config -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found openshift-controller-manager-operator 59m Normal NamespaceCreated deployment/openshift-controller-manager-operator Created Namespace/openshift-controller-manager because it was missing openshift-controller-manager-operator 59m Normal ClusterRoleCreated deployment/openshift-controller-manager-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing openshift-controller-manager-operator 59m Normal RoleBindingCreated deployment/openshift-controller-manager-operator Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing openshift-controller-manager-operator 59m Normal RoleCreated deployment/openshift-controller-manager-operator Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing openshift-controller-manager-operator 59m Normal ServiceCreated 
deployment/openshift-controller-manager-operator Created Service/controller-manager -n openshift-controller-manager because it was missing openshift-controller-manager-operator 59m Normal ServiceAccountCreated deployment/openshift-controller-manager-operator Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing openshift-controller-manager-operator 59m Normal RoleBindingCreated deployment/openshift-controller-manager-operator Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing openshift-controller-manager-operator 59m Normal RoleCreated deployment/openshift-controller-manager-operator Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing openshift-controller-manager-operator 59m Normal RoleBindingCreated deployment/openshift-controller-manager-operator Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing openshift-controller-manager-operator 59m Normal RoleCreated deployment/openshift-controller-manager-operator Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing openshift-controller-manager-operator 59m Normal ClusterRoleBindingCreated deployment/openshift-controller-manager-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing openshift-controller-manager-operator 59m Normal ClusterRoleCreated deployment/openshift-controller-manager-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing openshift-controller-manager-operator 59m Normal ServiceCreated deployment/openshift-controller-manager-operator Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing openshift-controller-manager-operator 59m Normal RoleBindingCreated deployment/openshift-controller-manager-operator Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing openshift-controller-manager-operator 59m Normal RoleCreated deployment/openshift-controller-manager-operator Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing openshift-controller-manager-operator 59m Normal RoleBindingCreated deployment/openshift-controller-manager-operator Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing openshift-controller-manager-operator 59m Normal RoleCreated deployment/openshift-controller-manager-operator Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing openshift-controller-manager-operator 59m Normal ServiceAccountCreated deployment/openshift-controller-manager-operator Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing openshift-controller-manager-operator 59m Normal NamespaceCreated deployment/openshift-controller-manager-operator Created Namespace/openshift-route-controller-manager because it was missing openshift-controller-manager-operator 59m 
Warning RoleBindingCreateFailed deployment/openshift-controller-manager-operator Failed to create RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found openshift-controller-manager-operator 59m Warning RoleCreateFailed deployment/openshift-controller-manager-operator Failed to create Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager: namespaces "openshift-route-controller-manager" not found openshift-controller-manager-operator 59m Normal ClusterRoleBindingCreated deployment/openshift-controller-manager-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing openshift-cluster-storage-operator 59m Normal AddedInterface pod/csi-snapshot-controller-f58c44499-k4v7v Add eth0 [10.128.0.6/23] from ovn-kubernetes openshift-kube-apiserver-operator 59m Normal CABundleUpdateRequired deployment/kube-apiserver-operator "localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert openshift-apiserver-operator 59m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: " to "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: " openshift-kube-storage-version-migrator 59m Normal Created pod/migrator-579f5cd9c5-sk4xj Created container migrator openshift-route-controller-manager 59m Normal ScalingReplicaSet deployment/route-controller-manager Scaled up replica set route-controller-manager-7d7696bfd4 to 3 openshift-controller-manager 59m Normal SuccessfulCreate replicaset/controller-manager-64556d4c99 Created pod: controller-manager-64556d4c99-46tn2 openshift-controller-manager 59m Normal SuccessfulCreate replicaset/controller-manager-64556d4c99 Created pod: controller-manager-64556d4c99-8fw47 openshift-controller-manager 59m Normal ScalingReplicaSet deployment/controller-manager Scaled up replica set controller-manager-64556d4c99 to 3 openshift-controller-manager 59m Normal SuccessfulCreate replicaset/controller-manager-64556d4c99 Created pod: controller-manager-64556d4c99-kxhn7 openshift-kube-apiserver-operator 59m Normal CABundleUpdateRequired deployment/kube-apiserver-operator "service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert openshift-cluster-storage-operator 59m Normal Pulled pod/csi-snapshot-controller-f58c44499-qvgsh Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f6985210e2dec2b96cd8cd1dc6965ce2710b23b2c515d9ae67a694245bd41082" in 1.374784603s (1.374798621s including waiting) openshift-kube-scheduler-operator 59m Normal ClusterRoleBindingCreated deployment/openshift-kube-scheduler-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing openshift-kube-scheduler-operator 59m Normal ServiceAccountCreated deployment/openshift-kube-scheduler-operator Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 59m Normal ConfigMapCreated 
deployment/openshift-kube-scheduler-operator Created ConfigMap/revision-status-2 -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found",Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" openshift-kube-storage-version-migrator 59m Normal Started pod/migrator-579f5cd9c5-sk4xj Started container migrator openshift-etcd-operator 59m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-serving-ip-10-0-239-132.ec2.internal -n openshift-etcd because it was missing openshift-route-controller-manager 59m Normal SuccessfulCreate replicaset/route-controller-manager-7d7696bfd4 Created pod: route-controller-manager-7d7696bfd4-z2bjq openshift-controller-manager-operator 59m Normal RoleCreated deployment/openshift-controller-manager-operator Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing openshift-etcd-operator 59m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-serving-metrics-ip-10-0-239-132.ec2.internal -n openshift-etcd because it was missing openshift-etcd-operator 59m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/revision-status-1 -n openshift-etcd because it was missing openshift-controller-manager-operator 59m Normal RoleBindingCreated deployment/openshift-controller-manager-operator Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing openshift-controller-manager-operator 59m Normal RoleCreated deployment/openshift-controller-manager-operator Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing openshift-controller-manager-operator 59m Normal RoleBindingCreated 
deployment/openshift-controller-manager-operator Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing openshift-controller-manager-operator 59m Normal DeploymentCreated deployment/openshift-controller-manager-operator Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing openshift-controller-manager-operator 59m Normal DeploymentCreated deployment/openshift-controller-manager-operator Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing openshift-kube-apiserver-operator 59m Normal CABundleUpdateRequired deployment/kube-apiserver-operator "kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert openshift-kube-storage-version-migrator 59m Normal Pulled pod/migrator-579f5cd9c5-sk4xj Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39ef66439265e28941d847694107b349dff04d9cc64f0b713882e1895ea2acb9" in 1.08308186s (1.083118341s including waiting) openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" openshift-service-ca-operator 59m Normal SecretCreated deployment/service-ca-operator Created Secret/signing-key -n openshift-service-ca because it was missing openshift-etcd-operator 59m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-serving-ip-10-0-140-6.ec2.internal -n openshift-etcd because it was missing openshift-etcd-operator 59m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-peer-ip-10-0-239-132.ec2.internal -n openshift-etcd because it was missing openshift-apiserver-operator 59m Normal ObservedConfigChanged deployment/openshift-apiserver-operator Writing updated observed config: map[string]any{... 
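The observed-config message above is truncated in this capture. As an illustrative sketch only (not part of the original listing, and assuming the standard operator.openshift.io OpenShiftAPIServer resource named "cluster"), the full observed configuration written by the operator could be inspected with:
# dump the observed config recorded on the openshift-apiserver operator resource
$ oc get openshiftapiserver cluster -o jsonpath='{.spec.observedConfig}'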
openshift-kube-controller-manager-operator 59m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "All is well" to "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]",Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; ") openshift-service-ca-operator 59m Normal ServiceAccountCreated deployment/service-ca-operator Created ServiceAccount/service-ca -n openshift-service-ca because it was missing openshift-route-controller-manager 59m Normal SuccessfulCreate replicaset/route-controller-manager-7d7696bfd4 Created pod: route-controller-manager-7d7696bfd4-2tvnf openshift-kube-apiserver-operator 59m Normal SignerUpdateRequired deployment/kube-apiserver-operator "localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: missing notAfter openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing openshift-kube-controller-manager-operator 59m Normal TargetUpdateRequired deployment/kube-controller-manager-operator "csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: missing notAfter openshift-route-controller-manager 59m Normal SuccessfulCreate replicaset/route-controller-manager-7d7696bfd4 Created pod: route-controller-manager-7d7696bfd4-zpkmp openshift-kube-controller-manager-operator 59m Normal TargetConfigDeleted deployment/kube-controller-manager-operator Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist openshift-apiserver-operator 59m Normal ObserveTLSSecurityProfile deployment/openshift-apiserver-operator cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] openshift-apiserver-operator 59m Normal ObserveTLSSecurityProfile deployment/openshift-apiserver-operator minTLSVersion changed to VersionTLS12 openshift-cluster-storage-operator 59m Normal RoleBindingCreated deployment/cluster-storage-operator Created RoleBinding.rbac.authorization.k8s.io/aws-ebs-csi-driver-operator-aws-config-clusterrolebinding -n openshift-config-managed because it was missing openshift-apiserver-operator 59m Normal RoutingConfigSubdomainChanged deployment/openshift-apiserver-operator Domain changed from "" to "apps.qeaisrhods-c13.abmw.s1.devshift.org" openshift-kube-apiserver-operator 59m Normal CABundleUpdateRequired deployment/kube-apiserver-operator "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert openshift-apiserver-operator 59m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver 
changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: " to "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" openshift-kube-apiserver-operator 59m Normal SignerUpdateRequired deployment/kube-apiserver-operator "node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: missing notAfter openshift-kube-apiserver-operator 59m Normal CABundleUpdateRequired deployment/kube-apiserver-operator "loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Degraded message changed from "AWSEBSCSIDriverOperatorDeploymentDegraded: deployment openshift-cluster-csi-drivers/aws-ebs-csi-driver-operator has some pods failing; unavailable replicas=1" to "All is well" openshift-service-ca-operator 59m Normal OperatorStatusChanged deployment/service-ca-operator Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well") openshift-service-ca 59m Normal SuccessfulCreate replicaset/service-ca-57bb877df5 Created pod: service-ca-57bb877df5-24vfr openshift-service-ca-operator 59m Normal ConfigMapCreated deployment/service-ca-operator Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing openshift-kube-apiserver-operator 59m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: 
aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" openshift-etcd-operator 59m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-all-certs -n openshift-etcd because it was missing openshift-cluster-storage-operator 59m Normal LeaderElection lease/snapshot-controller-leader csi-snapshot-controller-f58c44499-qvgsh became leader openshift-kube-apiserver-operator 59m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 1 triggered by "configmap \"kube-apiserver-pod\" not found" openshift-kube-apiserver-operator 59m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 3 nodes are at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0") openshift-kube-apiserver-operator 59m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-operator-667bfc499d-pjs9d Started container aws-ebs-csi-driver-operator openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-operator-667bfc499d-pjs9d Created container aws-ebs-csi-driver-operator openshift-cluster-csi-drivers 59m Normal Pulled 
pod/aws-ebs-csi-driver-operator-667bfc499d-pjs9d Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:189279778e9140f0b47f3e8c58ac6262cf1dbe573ae1a651e8a6e675b7d7b369" in 2.118965249s (2.1189718s including waiting) openshift-kube-apiserver-operator 59m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" openshift-controller-manager 59m Normal SuccessfulDelete replicaset/controller-manager-64556d4c99 Deleted pod: controller-manager-64556d4c99-46tn2 openshift-etcd-operator 59m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-serving-metrics-ip-10-0-197-197.ec2.internal -n openshift-etcd because it was missing openshift-etcd-operator 59m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-client-ca -n openshift-etcd because it was missing openshift-controller-manager-operator 59m Normal ConfigMapCreated deployment/openshift-controller-manager-operator Created ConfigMap/config -n openshift-route-controller-manager because it was missing openshift-etcd-operator 59m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-serving-ip-10-0-197-197.ec2.internal -n openshift-etcd because it was missing openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/revision-status-1 -n openshift-kube-controller-manager because it was missing openshift-etcd-operator 59m 
Normal NamespaceUpdated deployment/etcd-operator Updated Namespace/openshift-etcd because it changed openshift-etcd-operator 59m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-peer-ip-10-0-197-197.ec2.internal -n openshift-etcd because it was missing openshift-etcd-operator 59m Normal ClusterRoleBindingCreated deployment/etcd-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing openshift-etcd-operator 59m Normal ServiceAccountCreated deployment/etcd-operator Created ServiceAccount/installer-sa -n openshift-etcd because it was missing openshift-controller-manager 59m Normal SuccessfulCreate replicaset/controller-manager-78f477fd5c Created pod: controller-manager-78f477fd5c-r8mcx openshift-kube-controller-manager-operator 59m Warning ConfigMissing deployment/kube-controller-manager-operator no observedConfig openshift-authentication-operator 59m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" openshift-etcd-operator 59m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-peer-ip-10-0-140-6.ec2.internal -n openshift-etcd because it was missing openshift-service-ca 59m Normal Pulling pod/service-ca-57bb877df5-24vfr Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7f7cb6554c1dc9b5b3b58f162f592062e5c63bf24c5ed90a62074e117be3f743" openshift-controller-manager-operator 59m Normal ConfigMapCreated deployment/openshift-controller-manager-operator Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing openshift-etcd-operator 59m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-peer-client-ca -n openshift-etcd because it was missing openshift-controller-manager-operator 59m Normal ConfigMapCreated deployment/openshift-controller-manager-operator Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing openshift-etcd-operator 59m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-serving-metrics-ip-10-0-140-6.ec2.internal -n openshift-etcd because it was missing openshift-controller-manager-operator 59m Normal ConfigMapCreated deployment/openshift-controller-manager-operator Created ConfigMap/config -n openshift-controller-manager because it was missing openshift-controller-manager-operator 59m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/route-controller-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: 
\"v3.11.0/openshift-controller-manager/route-controller-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " openshift-service-ca 59m Normal AddedInterface pod/service-ca-57bb877df5-24vfr Add eth0 [10.128.0.10/23] from ovn-kubernetes openshift-route-controller-manager 59m Warning FailedMount pod/route-controller-manager-7d7696bfd4-zpkmp MountVolume.SetUp failed for volume "config" : configmap "config" not found openshift-cloud-network-config-controller 59m Normal Pulling pod/cloud-network-config-controller-7cc55b87d4-drl56 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737fbb45ea282de2eba6ed7c7e0112d62d31a74ed0dc6b9d0b1ad01975227945" openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Available message changed from "CSISnapshotControllerAvailable: Waiting for Deployment\nCSISnapshotWebhookControllerAvailable: Waiting for Deployment" to "CSISnapshotWebhookControllerAvailable: Waiting for Deployment" openshift-kube-controller-manager-operator 59m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSProgressing: Waiting for Deployment to deploy pods" to "AWSEBSCSIDriverOperatorCRProgressing: Waiting for AWSEBS operator to report status\nAWSEBSProgressing: Waiting for Deployment to deploy pods",Available changed from True to False ("AWSEBSCSIDriverOperatorCRAvailable: Waiting for AWSEBS operator to report status") openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Upgradeable changed from Unknown to True ("All is well") openshift-service-ca-operator 59m Normal ConfigMapCreated deployment/service-ca-operator Created ConfigMap/service-ca -n openshift-config-managed because it was missing openshift-cluster-storage-operator 59m Normal Pulled pod/csi-snapshot-controller-f58c44499-k4v7v Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f6985210e2dec2b96cd8cd1dc6965ce2710b23b2c515d9ae67a694245bd41082" in 1.8593736s (1.859380986s including waiting) openshift-cluster-storage-operator 59m Normal Created pod/csi-snapshot-controller-f58c44499-k4v7v Created container snapshot-controller openshift-cluster-storage-operator 59m Normal Started pod/csi-snapshot-controller-f58c44499-k4v7v Started container snapshot-controller openshift-controller-manager 59m Warning FailedMount pod/controller-manager-64556d4c99-46tn2 MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found openshift-kube-apiserver-operator 59m Normal PodDisruptionBudgetCreated deployment/kube-apiserver-operator Created PodDisruptionBudget.policy/kube-apiserver-guard-pdb -n openshift-kube-apiserver because it was missing openshift-controller-manager 59m Warning FailedMount pod/controller-manager-64556d4c99-46tn2 MountVolume.SetUp failed for volume "config" : configmap "config" not found openshift-kube-scheduler-operator 59m Normal NamespaceUpdated deployment/openshift-kube-scheduler-operator Updated Namespace/openshift-kube-scheduler because it changed 
openshift-service-ca 59m Normal ScalingReplicaSet deployment/service-ca Scaled up replica set service-ca-57bb877df5 to 1 openshift-cluster-storage-operator 59m Normal Started pod/csi-snapshot-controller-f58c44499-qvgsh Started container snapshot-controller openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/config -n openshift-kube-scheduler because it was missing openshift-controller-manager 59m Warning FailedMount pod/controller-manager-64556d4c99-8fw47 MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found openshift-controller-manager 59m Warning FailedMount pod/controller-manager-64556d4c99-8fw47 MountVolume.SetUp failed for volume "config" : configmap "config" not found openshift-kube-apiserver-operator 59m Normal CABundleUpdateRequired deployment/kube-apiserver-operator "kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert openshift-controller-manager 59m Warning FailedMount pod/controller-manager-64556d4c99-kxhn7 MountVolume.SetUp failed for volume "config" : configmap "config" not found openshift-controller-manager 59m Warning FailedMount pod/controller-manager-64556d4c99-kxhn7 MountVolume.SetUp failed for volume "proxy-ca-bundles" : configmap "openshift-global-ca" not found openshift-service-ca-operator 59m Normal DeploymentCreated deployment/service-ca-operator Created Deployment.apps/service-ca -n openshift-service-ca because it was missing openshift-controller-manager 59m Normal ScalingReplicaSet deployment/controller-manager Scaled down replica set controller-manager-64556d4c99 to 2 from 3 openshift-controller-manager 59m Normal ScalingReplicaSet deployment/controller-manager Scaled up replica set controller-manager-78f477fd5c to 1 from 0 openshift-kube-storage-version-migrator-operator 59m Normal OperatorStatusChanged deployment/kube-storage-version-migrator-operator Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") openshift-kube-apiserver 59m Normal NoPods poddisruptionbudget/kube-apiserver-guard-pdb No matching pods found openshift-route-controller-manager 59m Warning FailedMount pod/route-controller-manager-7d7696bfd4-z2bjq MountVolume.SetUp failed for volume "config" : configmap "config" not found openshift-route-controller-manager 59m Warning FailedMount pod/route-controller-manager-7d7696bfd4-2tvnf MountVolume.SetUp failed for volume "config" : configmap "config" not found openshift-cluster-storage-operator 59m Normal Created pod/csi-snapshot-controller-f58c44499-qvgsh Created container snapshot-controller openshift-controller-manager-operator 59m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/leader-role.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/leader-rolebinding.yaml\" (string): namespaces \"openshift-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/route-controller-leader-role.yaml\" (string): namespaces \"openshift-route-controller-manager\" not 
found\nOpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/route-controller-leader-rolebinding.yaml\" (string): namespaces \"openshift-route-controller-manager\" not found\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well" openshift-kube-controller-manager-operator 59m Normal ClusterRoleBindingCreated deployment/kube-controller-manager-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing openshift-apiserver-operator 59m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" to "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nRevisionControllerDegraded: namespaces \"openshift-apiserver\" not found" openshift-kube-controller-manager-operator 59m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 59m Normal RoleBindingCreated deployment/kube-controller-manager-operator Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing openshift-service-ca-operator 59m Normal DeploymentUpdated deployment/service-ca-operator Updated Deployment.apps/service-ca -n openshift-service-ca because it changed openshift-apiserver-operator 59m Warning ConfigMapCreateFailed deployment/openshift-apiserver-operator Failed to create ConfigMap/etcd-serving-ca -n openshift-apiserver: namespaces "openshift-apiserver" not found openshift-route-controller-manager 59m Normal SuccessfulDelete replicaset/route-controller-manager-7d7696bfd4 Deleted pod: route-controller-manager-7d7696bfd4-2tvnf openshift-etcd-operator 59m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing openshift-cluster-csi-drivers 59m Normal LeaderElection configmap/aws-ebs-csi-driver-operator-lock aws-ebs-csi-driver-operator-667bfc499d-pjs9d_77b20aad-767e-4bf2-9131-8007dda7e905 became leader openshift-cluster-csi-drivers 59m Normal LeaderElection lease/aws-ebs-csi-driver-operator-lock aws-ebs-csi-driver-operator-667bfc499d-pjs9d_77b20aad-767e-4bf2-9131-8007dda7e905 became leader openshift-apiserver-operator 59m Normal RevisionTriggered deployment/openshift-apiserver-operator new revision 1 triggered by "configmap \"audit\" not found" openshift-etcd-operator 59m Warning ScriptControllerErrorUpdatingStatus deployment/etcd-operator Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again openshift-etcd-operator 59m Normal 
ServiceAccountCreated deployment/etcd-operator Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing openshift-etcd-operator 59m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-serving-ca -n openshift-etcd because it was missing openshift-apiserver-operator 59m Warning ConfigMapCreateFailed deployment/openshift-apiserver-operator Failed to create ConfigMap/revision-status-1 -n openshift-apiserver: namespaces "openshift-apiserver" not found openshift-cluster-csi-drivers 59m Warning FastControllerResync deployment/aws-ebs-csi-driver-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-cluster-csi-drivers 59m Normal StorageClassCreated deployment/aws-ebs-csi-driver-operator Created StorageClass.storage.k8s.io/gp3-csi because it was missing openshift-cluster-csi-drivers 59m Normal CSIDriverCreated deployment/aws-ebs-csi-driver-operator Created CSIDriver.storage.k8s.io/ebs.csi.aws.com because it was missing openshift-controller-manager-operator 59m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well") openshift-controller-manager-operator 59m Normal ConfigMapUpdated deployment/openshift-controller-manager-operator Updated ConfigMap/config -n openshift-controller-manager:... 
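The OperatorStatusChanged events above mirror condition changes on the corresponding ClusterOperator objects; a sketch of how those conditions could be viewed directly (commands illustrative, not taken from this capture):
# summary of all cluster operators and their Available/Progressing/Degraded state
$ oc get clusteroperators
# full condition list for one operator
$ oc get clusteroperator openshift-controller-manager -o jsonpath='{.status.conditions}'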
openshift-cluster-csi-drivers 59m Normal ClusterRoleBindingCreated deployment/aws-ebs-csi-driver-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-csi-attacher-binding because it was missing openshift-route-controller-manager 59m Normal ScalingReplicaSet deployment/route-controller-manager Scaled down replica set route-controller-manager-7d7696bfd4 to 2 from 3 openshift-route-controller-manager 59m Normal ScalingReplicaSet deployment/route-controller-manager Scaled up replica set route-controller-manager-6b76fb6ddf to 1 from 0 openshift-cluster-csi-drivers 59m Normal ClusterRoleBindingCreated deployment/aws-ebs-csi-driver-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-csi-provisioner-binding because it was missing openshift-cluster-csi-drivers 59m Normal ClusterRoleCreated deployment/aws-ebs-csi-driver-operator Created ClusterRole.rbac.authorization.k8s.io/ebs-external-provisioner-role because it was missing openshift-cluster-csi-drivers 59m Normal DaemonSetCreated deployment/aws-ebs-csi-driver-operator Created DaemonSet.apps/aws-ebs-csi-driver-node -n openshift-cluster-csi-drivers because it was missing openshift-cluster-csi-drivers 59m Normal ClusterRoleCreated deployment/aws-ebs-csi-driver-operator Created ClusterRole.rbac.authorization.k8s.io/ebs-external-attacher-role because it was missing openshift-cluster-csi-drivers 59m Normal ServiceMonitorCreated deployment/aws-ebs-csi-driver-operator Created ServiceMonitor.monitoring.coreos.com/v1 because it was missing openshift-cluster-csi-drivers 59m Normal StorageClassCreated deployment/aws-ebs-csi-driver-operator Created StorageClass.storage.k8s.io/gp2-csi because it was missing openshift-cluster-csi-drivers 59m Normal DeploymentCreated deployment/aws-ebs-csi-driver-operator Created Deployment.apps/aws-ebs-csi-driver-controller -n openshift-cluster-csi-drivers because it was missing openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing openshift-cluster-csi-drivers 59m Normal ScalingReplicaSet deployment/aws-ebs-csi-driver-controller Scaled up replica set aws-ebs-csi-driver-controller-75b78f4dd4 to 2 openshift-kube-controller-manager-operator 59m Normal ServiceAccountCreated deployment/kube-controller-manager-operator Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 59m Normal RoleBindingCreated deployment/openshift-kube-scheduler-operator Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing openshift-route-controller-manager 59m Normal SuccessfulCreate replicaset/route-controller-manager-6b76fb6ddf Created pod: route-controller-manager-6b76fb6ddf-hqd6b openshift-kube-scheduler-operator 59m Normal ClusterRoleBindingCreated deployment/openshift-kube-scheduler-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing openshift-kube-scheduler-operator 59m Normal RoleCreated deployment/openshift-kube-scheduler-operator Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 59m Normal NamespaceUpdated deployment/kube-controller-manager-operator Updated Namespace/openshift-kube-controller-manager because it 
changed openshift-cluster-storage-operator 59m Normal OperatorVersionChanged deployment/csi-snapshot-controller-operator clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.13.0-rc.0" openshift-kube-scheduler-operator 59m Normal RoleBindingCreated deployment/openshift-kube-scheduler-operator Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing openshift-cluster-storage-operator 59m Normal OperatorVersionChanged deployment/csi-snapshot-controller-operator clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.13.0-rc.0" openshift-cluster-csi-drivers 59m Normal VolumeSnapshotClassCreated deployment/aws-ebs-csi-driver-operator Created VolumeSnapshotClass.snapshot.storage.k8s.io/v1 because it was missing openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"operator" "4.13.0-rc.0"} {"csi-snapshot-controller" "4.13.0-rc.0"}] openshift-etcd-operator 59m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-serving-ca -n openshift-etcd because it was missing openshift-cluster-csi-drivers 59m Warning FailedCreate replicaset/aws-ebs-csi-driver-controller-75b78f4dd4 Error creating: pods "aws-ebs-csi-driver-controller-75b78f4dd4-" is forbidden: error looking up service account openshift-cluster-csi-drivers/aws-ebs-csi-driver-controller-sa: serviceaccount "aws-ebs-csi-driver-controller-sa" not found openshift-cluster-csi-drivers 59m Normal SuccessfulCreate replicaset/aws-ebs-csi-driver-controller-75b78f4dd4 Created pod: aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr openshift-controller-manager 59m Normal ScalingReplicaSet deployment/controller-manager Scaled up replica set controller-manager-579956b947 to 1 from 0 openshift-cluster-csi-drivers 59m Normal SuccessfulCreate replicaset/aws-ebs-csi-driver-controller-75b78f4dd4 Created pod: aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd openshift-cluster-csi-drivers 59m Normal NoPods poddisruptionbudget/aws-ebs-csi-driver-controller-pdb No matching pods found openshift-controller-manager 59m Normal ScalingReplicaSet deployment/controller-manager Scaled down replica set controller-manager-64556d4c99 to 1 from 2 openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSCSIDriverOperatorCRProgressing: Waiting for AWSEBS operator to report status\nAWSEBSProgressing: Waiting for Deployment to deploy pods" to "AWSEBSCSIDriverOperatorCRProgressing: Waiting for AWSEBS operator to report status" openshift-service-ca 59m Normal Created pod/service-ca-57bb877df5-24vfr Created container service-ca-controller openshift-service-ca 59m Normal Started pod/service-ca-57bb877df5-24vfr Started container service-ca-controller openshift-etcd-operator 59m Normal SecretDeleted deployment/etcd-operator Deleted Secret/etcd-client -n openshift-etcd-operator kube-system 59m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-apiserver namespace openshift-cloud-network-config-controller 59m Normal Pulled pod/cloud-network-config-controller-7cc55b87d4-drl56 Successfully pulled image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737fbb45ea282de2eba6ed7c7e0112d62d31a74ed0dc6b9d0b1ad01975227945" in 1.680190862s (1.680201175s including waiting) openshift-cloud-network-config-controller 59m Normal Created pod/cloud-network-config-controller-7cc55b87d4-drl56 Created container controller openshift-cloud-network-config-controller 59m Normal Started pod/cloud-network-config-controller-7cc55b87d4-drl56 Started container controller openshift-apiserver-operator 59m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nRevisionControllerDegraded: namespaces \"openshift-apiserver\" not found" to "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nRevisionControllerDegraded: namespaces \"openshift-apiserver\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" openshift-apiserver-operator 59m Normal SecretCreated deployment/openshift-apiserver-operator Created Secret/etcd-client -n openshift-apiserver because it was missing openshift-apiserver-operator 59m Normal ConfigMapCreated deployment/openshift-apiserver-operator Created ConfigMap/revision-status-1 -n openshift-apiserver because it was missing openshift-controller-manager 59m Normal SuccessfulDelete replicaset/controller-manager-64556d4c99 Deleted pod: controller-manager-64556d4c99-8fw47 openshift-apiserver-operator 59m Normal ClusterRoleBindingCreated deployment/openshift-apiserver-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing openshift-authentication-operator 59m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" openshift-authentication-operator 59m Normal CSRCreated deployment/authentication-operator A csr "system:openshift:openshift-authenticator-ng2vb" is created for OpenShiftAuthenticatorCertRequester openshift-authentication-operator 59m Normal CSRApproval deployment/authentication-operator The CSR "system:openshift:openshift-authenticator-ng2vb" has been approved openshift-controller-manager 59m Normal SuccessfulCreate replicaset/controller-manager-579956b947 Created pod: controller-manager-579956b947-ql6fs openshift-authentication-operator 59m Normal NoValidCertificateFound deployment/authentication-operator No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates openshift-kube-controller-manager-operator 59m Normal RoleCreated deployment/kube-controller-manager-operator Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing openshift-etcd-operator 59m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-client -n 
openshift-etcd-operator because it was missing openshift-apiserver-operator 59m Normal ConfigMapCreated deployment/openshift-apiserver-operator Created ConfigMap/audit -n openshift-apiserver because it was missing openshift-etcd-operator 59m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-ca-bundle -n openshift-etcd because it was missing openshift-etcd-operator 59m Normal ConfigMapUpdated deployment/etcd-operator Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator:... openshift-apiserver-operator 59m Normal NamespaceCreated deployment/openshift-apiserver-operator Created Namespace/openshift-apiserver because it was missing openshift-etcd-operator 59m Normal ServiceUpdated deployment/etcd-operator Updated Service/etcd -n openshift-etcd because it changed openshift-kube-controller-manager-operator 59m Normal RoleBindingCreated deployment/kube-controller-manager-operator Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing openshift-cluster-csi-drivers 59m Warning FailedCreate daemonset/aws-ebs-csi-driver-node Error creating: pods "aws-ebs-csi-driver-node-" is forbidden: error looking up service account openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-sa: serviceaccount "aws-ebs-csi-driver-node-sa" not found openshift-kube-controller-manager-operator 59m Normal ServiceCreated deployment/kube-controller-manager-operator Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/config -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" openshift-cluster-csi-drivers 59m Normal ClusterRoleBindingCreated deployment/aws-ebs-csi-driver-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-csi-resizer-binding because it was missing openshift-cluster-csi-drivers 59m Normal ConfigMapCreated deployment/aws-ebs-csi-driver-operator Created ConfigMap/aws-ebs-csi-driver-trusted-ca-bundle -n openshift-cluster-csi-drivers because 
it was missing openshift-cluster-csi-drivers 59m Normal ServiceAccountCreated deployment/aws-ebs-csi-driver-operator Created ServiceAccount/aws-ebs-csi-driver-node-sa -n openshift-cluster-csi-drivers because it was missing openshift-cluster-csi-drivers 59m Normal ClusterRoleCreated deployment/aws-ebs-csi-driver-operator Created ClusterRole.rbac.authorization.k8s.io/ebs-external-resizer-role because it was missing openshift-cluster-csi-drivers 59m Normal PodDisruptionBudgetCreated deployment/aws-ebs-csi-driver-operator Created PodDisruptionBudget.policy/aws-ebs-csi-driver-controller-pdb -n openshift-cluster-csi-drivers because it was missing openshift-cluster-csi-drivers 59m Normal ServiceAccountCreated deployment/aws-ebs-csi-driver-operator Created ServiceAccount/aws-ebs-csi-driver-controller-sa -n openshift-cluster-csi-drivers because it was missing openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing openshift-authentication-operator 59m Warning ConfigMapCreateFailed deployment/authentication-operator Failed to create ConfigMap/revision-status-1 -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found openshift-service-ca 59m Normal Pulled pod/service-ca-57bb877df5-24vfr Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7f7cb6554c1dc9b5b3b58f162f592062e5c63bf24c5ed90a62074e117be3f743" in 1.536116535s (1.536129581s including waiting) openshift-apiserver-operator 59m Normal ConfigMapCreated deployment/openshift-apiserver-operator Created ConfigMap/audit-1 -n openshift-apiserver because it was missing openshift-apiserver-operator 59m Normal RevisionCreate deployment/openshift-apiserver-operator Revision 0 created because configmap "audit" not found openshift-service-ca 59m Warning FastControllerResync deployment/service-ca Controller "ServiceServingCertController" resync interval is set to 0s which might lead to client request throttling openshift-service-ca 59m Warning FastControllerResync deployment/service-ca Controller "CRDCABundleInjector" resync interval is set to 0s which might lead to client request throttling openshift-machine-config-operator 59m Warning FailedMount pod/machine-config-daemon-zlzm2 MountVolume.SetUp failed for volume "proxy-tls" : secret "proxy-tls" not found openshift-apiserver-operator 59m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady" openshift-service-ca 59m Warning FastControllerResync deployment/service-ca Controller "ConfigMapCABundleInjector" resync interval is set to 0s which might lead to client request throttling openshift-service-ca 59m Warning FastControllerResync deployment/service-ca Controller "ValidatingWebhookCABundleInjector" resync interval is set to 0s which might lead to client request throttling openshift-service-ca 59m Warning FastControllerResync deployment/service-ca Controller "APIServiceCABundleInjector" resync interval is set to 0s which might lead to client request throttling openshift-service-ca-operator 59m Normal OperatorStatusChanged deployment/service-ca-operator Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated") openshift-etcd-operator 59m 
Normal SecretCreated deployment/etcd-operator Created Secret/etcd-client -n openshift-etcd because it was missing openshift-controller-manager-operator 59m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 1, desired generation is 2.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" openshift-machine-config-operator 59m Warning FailedMount pod/machine-config-daemon-ll5kq MountVolume.SetUp failed for volume "proxy-tls" : secret "proxy-tls" not found openshift-cluster-csi-drivers 59m Normal ClusterRoleCreated deployment/aws-ebs-csi-driver-operator Created ClusterRole.rbac.authorization.k8s.io/ebs-privileged-role because it was missing openshift-service-ca 59m Normal LeaderElection lease/service-ca-controller-lock service-ca-57bb877df5-24vfr_64d3350b-7290-42cf-9f75-0e6e512ae477 became leader openshift-service-ca 59m Normal LeaderElection configmap/service-ca-controller-lock service-ca-57bb877df5-24vfr_64d3350b-7290-42cf-9f75-0e6e512ae477 became leader openshift-service-ca 59m Warning FastControllerResync deployment/service-ca Controller "ServiceServingCertUpdateController" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 59m Warning ConfigMapCreateFailed deployment/authentication-operator Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found openshift-service-ca 59m Warning FastControllerResync deployment/service-ca Controller "LegacyVulnerableConfigMapCABundleInjector" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 59m Warning ConfigMapCreateFailed deployment/authentication-operator Failed to create ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found openshift-service-ca-operator 59m Normal OperatorStatusChanged deployment/service-ca-operator Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated"),status.versions changed from [] to [{"operator" "4.13.0-rc.0"}] openshift-kube-scheduler-operator 59m Normal ServiceCreated deployment/openshift-kube-scheduler-operator Created Service/scheduler -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 59m Normal ServiceAccountCreated deployment/kube-controller-manager-operator Created ServiceAccount/kube-controller-manager-sa -n 
openshift-kube-controller-manager because it was missing openshift-cluster-csi-drivers 59m Normal ClusterRoleCreated deployment/aws-ebs-csi-driver-operator Created ClusterRole.rbac.authorization.k8s.io/ebs-external-snapshotter-role because it was missing openshift-apiserver-operator 59m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nRevisionControllerDegraded: namespaces \"openshift-apiserver\" not found" to "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" openshift-service-ca-operator 59m Normal OperatorStatusChanged deployment/service-ca-operator Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.13.0-rc.0"}] openshift-apiserver-operator 59m Normal ServiceCreated deployment/openshift-apiserver-operator Created Service/api -n openshift-apiserver because it was missing openshift-service-ca 59m Warning FastControllerResync deployment/service-ca Controller "MutatingWebhookCABundleInjector" resync interval is set to 0s which might lead to client request throttling openshift-cluster-csi-drivers 59m Warning FailedCreate daemonset/aws-ebs-csi-driver-node Error creating: pods "aws-ebs-csi-driver-node-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider restricted-v2: .spec.securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.volumes[0]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[3]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[4]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.volumes[5]: Invalid value: "hostPath": hostPath volumes are not allowed to be used, spec.containers[0].securityContext.privileged: Invalid value: true: Privileged containers are not allowed, spec.containers[0].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[0].securityContext.containers[0].hostPort: Invalid value: 10300: Host ports are not allowed to be used, spec.containers[1].securityContext.privileged: Invalid value: true: Privileged containers are not allowed, spec.containers[1].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[1].securityContext.containers[0].hostPort: Invalid value: 10300: Host ports are not allowed to be used, spec.containers[2].securityContext.hostNetwork: Invalid value: true: Host network is not allowed to be used, spec.containers[2].securityContext.containers[0].hostPort: Invalid value: 10300: Host ports are not allowed to be used, provider "restricted": Forbidden: not usable by user or serviceaccount, provider "nonroot-v2": Forbidden: not usable by user or serviceaccount, provider "nonroot": 
Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "hostnetwork-v2": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount] openshift-apiserver-operator 59m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nRevisionControllerDegraded: namespaces \"openshift-apiserver\" not found\nResourceSyncControllerDegraded: namespaces \"openshift-apiserver\" not found" to "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nRevisionControllerDegraded: namespaces \"openshift-apiserver\" not found" openshift-service-ca-operator 59m Normal OperatorVersionChanged deployment/service-ca-operator clusteroperator/service-ca version "operator" changed from "" to "4.13.0-rc.0" openshift-cluster-csi-drivers 59m Normal ClusterRoleBindingCreated deployment/aws-ebs-csi-driver-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-node-privileged-binding because it was missing openshift-apiserver-operator 59m Normal ConfigMapCreated deployment/openshift-apiserver-operator Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing openshift-machine-config-operator 59m Warning FailedMount pod/machine-config-daemon-s6f62 MountVolume.SetUp failed for volume "proxy-tls" : secret "proxy-tls" not found openshift-cluster-storage-operator 59m Normal ValidatingWebhookConfigurationUpdated deployment/csi-snapshot-controller-operator Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/snapshot.storage.k8s.io because it changed openshift-controller-manager-operator 59m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated 
deployment/kube-controller-manager-operator Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing openshift-apiserver-operator 59m Normal PodDisruptionBudgetCreated deployment/openshift-apiserver-operator Created PodDisruptionBudget.policy/openshift-apiserver-pdb -n openshift-apiserver because it was missing openshift-cluster-csi-drivers 59m Normal RoleCreated deployment/aws-ebs-csi-driver-operator Created Role.rbac.authorization.k8s.io/aws-ebs-csi-driver-prometheus -n openshift-cluster-csi-drivers because it was missing openshift-apiserver 59m Normal NoPods poddisruptionbudget/openshift-apiserver-pdb No matching pods found openshift-authentication-operator 59m Normal SecretCreated deployment/authentication-operator Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/revision-status-1 -n openshift-kube-apiserver because it was missing openshift-kube-controller-manager-operator 59m Normal ClusterRoleBindingCreated deployment/kube-controller-manager-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing openshift-kube-controller-manager-operator 59m Normal ServiceAccountCreated deployment/kube-controller-manager-operator Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing openshift-authentication-operator 59m Warning ConfigMapCreateFailed deployment/authentication-operator Failed to create ConfigMap/audit -n openshift-oauth-apiserver: namespaces "openshift-oauth-apiserver" not found openshift-cluster-csi-drivers 59m Normal ScalingReplicaSet deployment/aws-ebs-csi-driver-controller Scaled down replica set aws-ebs-csi-driver-controller-75b78f4dd4 to 1 from 2 openshift-authentication-operator 59m Normal NamespaceCreated deployment/authentication-operator Created Namespace/openshift-oauth-apiserver because it was missing openshift-cluster-csi-drivers 59m Normal ClusterRoleBindingCreated deployment/aws-ebs-csi-driver-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-csi-snapshotter-binding because it was missing openshift-cluster-csi-drivers 59m Normal ServiceCreated deployment/aws-ebs-csi-driver-operator Created Service/aws-ebs-csi-driver-controller-metrics -n openshift-cluster-csi-drivers because it was missing openshift-apiserver-operator 59m Normal ConfigMapCreated deployment/openshift-apiserver-operator Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing openshift-cluster-csi-drivers 59m Normal SuccessfulCreate daemonset/aws-ebs-csi-driver-node Created pod: aws-ebs-csi-driver-node-nznvd openshift-cluster-csi-drivers 59m Normal Pulling pod/aws-ebs-csi-driver-node-lwrls Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" openshift-apiserver-operator 59m Normal ServiceAccountCreated deployment/openshift-apiserver-operator Created ServiceAccount/openshift-apiserver-sa 
-n openshift-apiserver because it was missing openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes" to "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes\nAWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to act on changes",Available message changed from "AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverControllerServiceControllerAvailable: Waiting for Deployment" to "AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverControllerServiceControllerAvailable: Waiting for Deployment\nAWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service" openshift-cluster-csi-drivers 59m Normal SuccessfulCreate daemonset/aws-ebs-csi-driver-node Created pod: aws-ebs-csi-driver-node-8l9r7 openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Degraded message changed from "All is well" to "AWSEBSCSIDriverOperatorCRDegraded: All is well",Progressing message changed from "AWSEBSCSIDriverOperatorCRProgressing: Waiting for AWSEBS operator to report status" to "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes",Available message changed from "AWSEBSCSIDriverOperatorCRAvailable: Waiting for AWSEBS operator to report status" to "AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverControllerServiceControllerAvailable: Waiting for Deployment" openshift-cluster-csi-drivers 59m Normal Pulling pod/aws-ebs-csi-driver-node-nznvd Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" openshift-cluster-csi-drivers 59m Normal Pulling pod/aws-ebs-csi-driver-node-8l9r7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" kube-system 59m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-oauth-apiserver namespace openshift-authentication-operator 59m Normal ClusterRoleBindingCreated deployment/authentication-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing openshift-cluster-csi-drivers 59m Normal SuccessfulCreate daemonset/aws-ebs-csi-driver-node Created pod: aws-ebs-csi-driver-node-lwrls openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing openshift-cluster-csi-drivers 59m Normal ClusterRoleCreated deployment/aws-ebs-csi-driver-operator Created ClusterRole.rbac.authorization.k8s.io/ebs-kube-rbac-proxy-role because it was missing openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing openshift-authentication-operator 59m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication 
changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found" openshift-kube-scheduler-operator 59m Normal ClusterRoleBindingCreated deployment/openshift-kube-scheduler-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing openshift-kube-scheduler-operator 59m Normal ServiceAccountCreated deployment/openshift-kube-scheduler-operator Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing openshift-authentication-operator 59m Normal ServiceCreated deployment/authentication-operator Created Service/api -n openshift-oauth-apiserver because it was missing openshift-kube-apiserver-operator 59m Normal TargetUpdateRequired deployment/kube-apiserver-operator "internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter openshift-authentication-operator 59m Normal ClusterRoleBindingCreated deployment/authentication-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing openshift-cluster-csi-drivers 59m Normal ScalingReplicaSet deployment/aws-ebs-csi-driver-controller Scaled up replica set aws-ebs-csi-driver-controller-5ff7cf9694 to 1 from 0 openshift-authentication-operator 59m Normal NamespaceCreated deployment/authentication-operator Created Namespace/openshift-authentication because it was missing openshift-cluster-csi-drivers 59m Normal SuccessfulCreate replicaset/aws-ebs-csi-driver-controller-5ff7cf9694 Created pod: aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp openshift-authentication-operator 59m Normal ClientCertificateCreated deployment/authentication-operator A new client certificate for OpenShiftAuthenticatorCertRequester is available openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing openshift-kube-apiserver-operator 59m Normal TargetUpdateRequired deployment/kube-apiserver-operator "aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing openshift-cluster-csi-drivers 59m Normal RoleBindingCreated deployment/aws-ebs-csi-driver-operator Created RoleBinding.rbac.authorization.k8s.io/aws-ebs-csi-driver-prometheus -n openshift-cluster-csi-drivers because it was missing openshift-kube-apiserver-operator 59m Normal TargetUpdateRequired deployment/kube-apiserver-operator "kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter openshift-cluster-csi-drivers 59m Normal SuccessfulDelete replicaset/aws-ebs-csi-driver-controller-75b78f4dd4 Deleted pod: aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr openshift-cluster-csi-drivers 59m Normal ClusterRoleBindingCreated deployment/aws-ebs-csi-driver-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/ebs-kube-rbac-proxy-binding because it was 
missing openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing openshift-kube-apiserver-operator 59m Normal TargetUpdateRequired deployment/kube-apiserver-operator "localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing openshift-kube-controller-manager-operator 59m Normal ClusterRoleBindingCreated deployment/kube-controller-manager-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:kube-controller-manager:gce-cloud-provider because it was missing openshift-kube-controller-manager-operator 59m Normal ServiceAccountCreated deployment/kube-controller-manager-operator Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 59m Normal ClusterRoleCreated deployment/kube-controller-manager-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:kube-controller-manager:gce-cloud-provider because it was missing openshift-kube-controller-manager-operator 59m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 59m Normal ClusterRoleCreated deployment/kube-controller-manager-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing openshift-kube-controller-manager-operator 59m Normal ClusterRoleBindingCreated deployment/kube-controller-manager-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing openshift-cluster-csi-drivers 59m Normal SuccessfulDelete daemonset/aws-ebs-csi-driver-node Deleted pod: aws-ebs-csi-driver-node-nznvd openshift-cluster-csi-drivers 59m Normal SuccessfulDelete daemonset/aws-ebs-csi-driver-node Deleted pod: aws-ebs-csi-driver-node-lwrls openshift-cluster-csi-drivers 59m Normal SuccessfulDelete daemonset/aws-ebs-csi-driver-node Deleted pod: aws-ebs-csi-driver-node-8l9r7 openshift-kube-controller-manager-operator 59m Warning RequiredInstallerResourcesMissing deployment/kube-controller-manager-operator configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 kube-system 59m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-authentication namespace openshift-kube-apiserver-operator 59m Normal TargetUpdateRequired deployment/kube-apiserver-operator "control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new 
target cert/key pair: missing notAfter openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing openshift-cluster-csi-drivers 59m Normal Pulling pod/aws-ebs-csi-driver-node-nznvd Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-nznvd Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" in 2.002775507s (2.002787998s including waiting) openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-nznvd Created container csi-driver openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-8l9r7 Created container csi-driver openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-8l9r7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" in 1.953642375s (1.953651766s including waiting) openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-lwrls Created container csi-driver openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-8l9r7 Started container csi-driver openshift-kube-apiserver-operator 59m Warning ConfigMapCreateFailed deployment/kube-apiserver-operator Failed to create ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator: configmaps "loadbalancer-serving-ca" already exists openshift-kube-controller-manager-operator 59m Warning RequiredInstallerResourcesMissing deployment/kube-controller-manager-operator configmaps: client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 openshift-cluster-csi-drivers 59m Normal Pulling pod/aws-ebs-csi-driver-node-8l9r7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" openshift-authentication-operator 59m Normal ConfigMapCreated deployment/authentication-operator Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-nznvd Started container csi-driver openshift-kube-apiserver-operator 59m Normal ClusterRoleBindingCreated deployment/kube-apiserver-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing openshift-kube-apiserver-operator 59m Normal ServiceAccountCreated deployment/kube-apiserver-operator Created ServiceAccount/installer-sa -n 
openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 59m Normal CABundleUpdateRequired deployment/kube-apiserver-operator "localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert openshift-authentication-operator 59m Normal SecretCreated deployment/authentication-operator Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-lwrls Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" in 1.825796861s (1.825811163s including waiting) openshift-kube-apiserver-operator 59m Normal TargetUpdateRequired deployment/kube-apiserver-operator "service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter openshift-kube-apiserver-operator 59m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-lwrls Started container csi-driver openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing 
openshift-cluster-csi-drivers 59m Normal Pulling pod/aws-ebs-csi-driver-node-lwrls Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-nznvd Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" in 968.991184ms (969.004926ms including waiting) openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-lwrls Created container csi-node-driver-registrar openshift-cluster-csi-drivers 59m Normal Pulling pod/aws-ebs-csi-driver-node-lwrls Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-lwrls Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" in 965.174691ms (965.187397ms including waiting) openshift-authentication-operator 59m Normal PodDisruptionBudgetCreated deployment/authentication-operator Created PodDisruptionBudget.policy/oauth-apiserver-pdb -n openshift-oauth-apiserver because it was missing openshift-route-controller-manager 59m Warning FailedMount pod/route-controller-manager-7d7696bfd4-zpkmp MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-lwrls Started container csi-node-driver-registrar openshift-route-controller-manager 59m Warning FailedMount pod/route-controller-manager-7d7696bfd4-z2bjq MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found openshift-authentication-operator 59m Normal ClusterRoleCreated deployment/authentication-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing openshift-oauth-apiserver 59m Normal NoPods poddisruptionbudget/oauth-apiserver-pdb No matching pods found openshift-authentication-operator 59m Normal ClusterRoleBindingCreated deployment/authentication-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing openshift-authentication-operator 59m Normal ServiceAccountCreated deployment/authentication-operator Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing openshift-kube-apiserver-operator 59m Warning RequiredInstallerResourcesMissing deployment/kube-apiserver-operator configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: 
etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 openshift-authentication-operator 59m Normal ConfigMapCreated deployment/authentication-operator Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-8l9r7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" in 1.041021483s (1.041031889s including waiting) openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes\nAWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to act on changes" to "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes\nAWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" openshift-route-controller-manager 59m Warning FailedMount pod/route-controller-manager-7d7696bfd4-2tvnf MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found openshift-kube-scheduler-operator 59m Normal ServiceAccountCreated deployment/openshift-kube-scheduler-operator Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing openshift-kube-apiserver-operator 59m Normal CABundleUpdateRequired deployment/kube-apiserver-operator "node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing openshift-cluster-csi-drivers 59m Normal DaemonSetUpdated deployment/aws-ebs-csi-driver-operator Updated DaemonSet.apps/aws-ebs-csi-driver-node -n openshift-cluster-csi-drivers because it changed openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on 
node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing openshift-kube-apiserver-operator 59m Warning ConfigMapCreateFailed deployment/kube-apiserver-operator Failed to create ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator: configmaps "kube-control-plane-signer-ca" already exists openshift-cluster-csi-drivers 59m Normal DeploymentUpdated deployment/aws-ebs-csi-driver-operator Updated Deployment.apps/aws-ebs-csi-driver-controller -n openshift-cluster-csi-drivers because it changed openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing openshift-kube-controller-manager-operator 59m Warning RevisionCreateFailed deployment/kube-controller-manager-operator Failed to create revision 1: configmaps "kube-controller-manager-pod" not found openshift-kube-apiserver-operator 59m Warning RequiredInstallerResourcesMissing deployment/kube-apiserver-operator configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1 openshift-authentication-operator 59m Normal RoleBindingCreated deployment/authentication-operator Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-nznvd Created container csi-node-driver-registrar openshift-cluster-csi-drivers 59m Normal Pulling pod/aws-ebs-csi-driver-node-nznvd Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to act on changes\nAWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" to 
"AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" openshift-kube-scheduler-operator 59m Normal RevisionTriggered deployment/openshift-kube-scheduler-operator new revision 2 triggered by "configmap \"kube-scheduler-pod\" not found" openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Degraded message changed from "AWSEBSCSIDriverOperatorCRDegraded: All is well" to "AWSEBSCSIDriverOperatorCRDegraded: AWSEBSDriverControllerServiceControllerDegraded: Operation cannot be fulfilled on clustercsidrivers.operator.openshift.io \"ebs.csi.aws.com\": the object has been modified; please apply your changes to the latest version and try again" openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: 
localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" openshift-cluster-csi-drivers 59m Normal Pulling pod/aws-ebs-csi-driver-node-8l9r7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" openshift-authentication-operator 59m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-8l9r7 Created container csi-node-driver-registrar openshift-authentication-operator 59m Normal ServiceAccountCreated deployment/authentication-operator Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing openshift-kube-controller-manager-operator 59m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-nznvd Started container csi-node-driver-registrar openshift-kube-scheduler-operator 59m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 59m Normal RevisionTriggered deployment/openshift-kube-scheduler-operator new revision 2 triggered by "configmap \"kube-scheduler-pod-1\" not found" openshift-authentication-operator 59m Normal ServiceCreated deployment/authentication-operator Created Service/oauth-openshift -n openshift-authentication because it was missing openshift-authentication-operator 59m Normal RoleCreated deployment/authentication-operator Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-8l9r7 Started container csi-node-driver-registrar openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-8l9r7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" in 891.347539ms (891.355178ms including waiting) openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-lwrls Started container csi-liveness-probe openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-nznvd Created container csi-liveness-probe openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-nznvd Successfully pulled image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" in 895.511244ms (895.524005ms including waiting) openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Degraded message changed from "AWSEBSCSIDriverOperatorCRDegraded: AWSEBSDriverControllerServiceControllerDegraded: Operation cannot be fulfilled on clustercsidrivers.operator.openshift.io \"ebs.csi.aws.com\": the object has been modified; please apply your changes to the latest version and try again" to "AWSEBSCSIDriverOperatorCRDegraded: All is well" openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-nznvd Started container csi-liveness-probe openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-8l9r7 Created container csi-liveness-probe openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-8l9r7 Started container csi-liveness-probe openshift-cluster-csi-drivers 59m Normal Killing pod/aws-ebs-csi-driver-node-8l9r7 Stopping container csi-driver openshift-cluster-csi-drivers 59m Normal Killing pod/aws-ebs-csi-driver-node-8l9r7 Stopping container csi-liveness-probe openshift-cluster-csi-drivers 59m Normal Killing pod/aws-ebs-csi-driver-node-8l9r7 Stopping container csi-node-driver-registrar openshift-etcd-operator 59m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" openshift-kube-scheduler-operator 59m Normal ConfigMapUpdated deployment/openshift-kube-scheduler-operator Updated ConfigMap/revision-status-2 -n openshift-kube-scheduler:... 
openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-lwrls Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" in 1.256220112s (1.256230814s including waiting) openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-lwrls Created container csi-liveness-probe openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 59m Normal ServiceCreated deployment/kube-apiserver-operator Created Service/apiserver -n openshift-kube-apiserver because it was missing openshift-cluster-csi-drivers 59m Normal SuccessfulCreate daemonset/aws-ebs-csi-driver-node Created pod: aws-ebs-csi-driver-node-zcbkq openshift-cluster-csi-drivers 59m Normal Killing pod/aws-ebs-csi-driver-node-lwrls Stopping container csi-node-driver-registrar openshift-cluster-csi-drivers 59m Normal Killing pod/aws-ebs-csi-driver-node-lwrls Stopping container csi-liveness-probe openshift-cluster-csi-drivers 59m Normal Killing pod/aws-ebs-csi-driver-node-lwrls Stopping container csi-driver openshift-kube-apiserver-operator 59m Warning RequiredInstallerResourcesMissing deployment/kube-apiserver-operator configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1 openshift-kube-apiserver-operator 59m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are 
ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found",Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" openshift-etcd-operator 59m Warning RevisionCreateFailed deployment/etcd-operator Failed to create revision 1: configmaps "etcd-pod" not found openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing openshift-cluster-csi-drivers 59m Normal Killing pod/aws-ebs-csi-driver-node-nznvd Stopping container csi-liveness-probe openshift-cluster-csi-drivers 59m Normal Killing pod/aws-ebs-csi-driver-node-nznvd Stopping container csi-node-driver-registrar openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods" to "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" openshift-cluster-csi-drivers 59m Normal Killing pod/aws-ebs-csi-driver-node-nznvd Stopping container csi-driver openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-ts9mc Created container csi-node-driver-registrar openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-ts9mc Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" already present on machine openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-ts9mc Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" already present on machine openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-ts9mc Created container csi-liveness-probe openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-q9lmf Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" already present on machine openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-ts9mc Started container csi-driver openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-zcbkq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" already present on machine openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-zcbkq Created container csi-driver openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-zcbkq Started container csi-driver openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-zcbkq Created container csi-node-driver-registrar openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-zcbkq Started container csi-node-driver-registrar openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-zcbkq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" already present on machine openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-zcbkq Created container csi-liveness-probe openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-q9lmf Created container csi-driver openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-q9lmf Started container csi-driver openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-q9lmf Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" already present on machine openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-q9lmf Created container csi-node-driver-registrar openshift-cluster-csi-drivers 59m Normal SuccessfulCreate daemonset/aws-ebs-csi-driver-node Created pod: aws-ebs-csi-driver-node-ts9mc openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-q9lmf Started container csi-node-driver-registrar openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-q9lmf Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" already present on machine openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-ts9mc Created container csi-driver openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-zcbkq Started container csi-liveness-probe openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-ts9mc Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" already present on machine openshift-cluster-csi-drivers 59m Normal SuccessfulCreate daemonset/aws-ebs-csi-driver-node Created pod: aws-ebs-csi-driver-node-q9lmf openshift-authentication-operator 59m Normal OperatorStatusChanged deployment/authentication-operator Status for 
clusteroperator/authentication changed: Degraded message changed from "WebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found" openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Available message changed from "AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverControllerServiceControllerAvailable: Waiting for Deployment" to "AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverControllerServiceControllerAvailable: Waiting for Deployment\nAWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service" openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Available message changed from "AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverControllerServiceControllerAvailable: Waiting for Deployment\nAWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverNodeServiceControllerAvailable: Waiting for the DaemonSet to deploy the CSI Node Service" to "AWSEBSCSIDriverOperatorCRAvailable: AWSEBSDriverControllerServiceControllerAvailable: Waiting for Deployment" openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-ts9mc Started container csi-node-driver-registrar openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-node-zcbkq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" already present on machine openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing openshift-kube-controller-manager-operator 59m Normal ObserveServiceCAConfigMap deployment/kube-controller-manager-operator observed change in config openshift-kube-apiserver-operator 
59m Normal ClusterRoleCreated deployment/kube-apiserver-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing openshift-kube-controller-manager-operator 59m Normal ObservedConfigChanged deployment/kube-controller-manager-operator Writing updated observed config: map[string]any{... openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing openshift-kube-apiserver-operator 59m Normal ClusterRoleBindingCreated deployment/kube-apiserver-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing openshift-kube-apiserver-operator 59m Normal ClusterRoleBindingCreated deployment/kube-apiserver-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/revision-status-2 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node 
ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-node-q9lmf Created container csi-liveness-probe openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-q9lmf Started container csi-liveness-probe openshift-kube-apiserver-operator 59m Normal ClusterRoleCreated deployment/kube-apiserver-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing openshift-kube-apiserver-operator 59m Normal TargetUpdateRequired deployment/kube-apiserver-operator "node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: missing notAfter openshift-kube-apiserver-operator 59m Normal ClusterRoleCreated deployment/kube-apiserver-operator Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing openshift-authentication-operator 59m Warning RevisionCreateFailed deployment/authentication-operator Failed to create revision 1: namespaces "openshift-oauth-apiserver" not found openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-node-ts9mc Started container csi-liveness-probe openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" to "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods" openshift-kube-apiserver-operator 59m Normal TargetUpdateRequired deployment/kube-apiserver-operator "localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target 
cert/key pair: missing notAfter openshift-cluster-csi-drivers 59m Warning FailedMount pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp MountVolume.SetUp failed for volume "metrics-serving-cert" : secret "aws-ebs-csi-driver-controller-metrics-serving-cert" not found openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing openshift-kube-apiserver-operator 59m Normal RoleBindingCreated deployment/kube-apiserver-operator Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing openshift-cluster-storage-operator 59m Warning FailedMount pod/csi-snapshot-webhook-75476bf784-7vh6f MountVolume.SetUp failed for volume "certs" : secret "csi-snapshot-webhook-secret" not found openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 59m Normal ClusterRoleBindingCreated deployment/kube-apiserver-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing openshift-kube-apiserver-operator 59m Normal RoleBindingCreated deployment/kube-apiserver-operator Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing openshift-cluster-storage-operator 59m Warning FailedMount pod/csi-snapshot-webhook-75476bf784-7z4rl MountVolume.SetUp failed for volume "certs" : secret "csi-snapshot-webhook-secret" not found openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing openshift-kube-apiserver-operator 59m Normal RoleBindingCreated deployment/kube-apiserver-operator Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing openshift-kube-apiserver-operator 59m Normal ClusterRoleBindingCreated deployment/kube-apiserver-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing openshift-machine-api 59m Warning FailedMount pod/cluster-baremetal-operator-cb6794dd9-8bqk2 MountVolume.SetUp failed for volume "cert" : secret "cluster-baremetal-webhook-server-cert" not found openshift-multus 59m Normal Pulling pod/multus-admission-controller-6f95d97cb6-7wv72 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c3cca6e2da92a6cd38e7f20f77bffc675895bd800157fdb50261b7f7ea9fc90" openshift-multus 59m Normal AddedInterface pod/multus-admission-controller-6f95d97cb6-7wv72 Add eth0 [10.130.0.35/23] from ovn-kubernetes openshift-monitoring 59m Warning FailedMount pod/cluster-monitoring-operator-78777bc588-rhggh MountVolume.SetUp failed for volume 
"cluster-monitoring-operator-tls" : secret "cluster-monitoring-operator-tls" not found openshift-machine-api 59m Warning FailedMount pod/cluster-baremetal-operator-cb6794dd9-8bqk2 MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" : secret "cluster-baremetal-operator-tls" not found openshift-image-registry 59m Warning FailedMount pod/cluster-image-registry-operator-868788f8c6-frhj8 MountVolume.SetUp failed for volume "image-registry-operator-tls" : secret "image-registry-operator-tls" not found openshift-operator-lifecycle-manager 59m Warning FailedMount pod/catalog-operator-567d5cdcc9-gwwnx MountVolume.SetUp failed for volume "srv-cert" : secret "catalog-operator-serving-cert" not found openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/etcd-client -n openshift-kube-apiserver because it was missing openshift-machine-api 59m Warning FailedMount pod/cluster-autoscaler-operator-7fcffdb7c8-g4w4m MountVolume.SetUp failed for volume "cert" : secret "cluster-autoscaler-operator-cert" not found openshift-cluster-machine-approver 59m Normal Pulled pod/machine-approver-5cd47987c9-96cvq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing openshift-cluster-machine-approver 59m Normal Created pod/machine-approver-5cd47987c9-96cvq Created container kube-rbac-proxy openshift-cluster-machine-approver 59m Normal Started pod/machine-approver-5cd47987c9-96cvq Started container kube-rbac-proxy openshift-marketplace 59m Warning FailedMount pod/marketplace-operator-554c77d6df-2q9k5 MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found openshift-cluster-machine-approver 59m Normal Pulling pod/machine-approver-5cd47987c9-96cvq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:90fd3983343e366cb4df6f35efa1527e4b5da93e90558f23aa416cb9c453375e" openshift-kube-apiserver-operator 59m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 59m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: 
etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing openshift-kube-controller-manager-operator 59m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]" to "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-dns-operator 59m Normal AddedInterface pod/dns-operator-656b9bd9f9-lb9ps Add eth0 [10.130.0.34/23] from ovn-kubernetes openshift-dns-operator 59m Normal Pulling pod/dns-operator-656b9bd9f9-lb9ps Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ecfd2df94486e0570eeb0f88a5696ecaa0e1e54bc67d342aab3a6167863175fe" openshift-kube-apiserver-operator 59m Warning RequiredInstallerResourcesMissing deployment/kube-apiserver-operator configmaps: client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,external-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: 
etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1 openshift-machine-api 59m Normal AddedInterface pod/control-plane-machine-set-operator-77b4c948f8-s7qsh Add eth0 [10.130.0.16/23] from ovn-kubernetes openshift-authentication-operator 59m Normal ConfigMapCreated deployment/authentication-operator Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing openshift-cloud-credential-operator 59m Normal Started pod/cloud-credential-operator-7fffc6cb67-gkvnc Started container kube-rbac-proxy openshift-cloud-credential-operator 59m Normal Created pod/cloud-credential-operator-7fffc6cb67-gkvnc Created container kube-rbac-proxy openshift-controller-manager 59m Warning FailedMount pod/controller-manager-64556d4c99-8fw47 MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found openshift-kube-controller-manager-operator 59m Warning RequiredInstallerResourcesMissing deployment/kube-controller-manager-operator configmaps: client-ca, secrets: kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 openshift-cloud-credential-operator 59m Normal Pulled pod/cloud-credential-operator-7fffc6cb67-gkvnc Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cloud-credential-operator 59m Normal AddedInterface pod/cloud-credential-operator-7fffc6cb67-gkvnc Add eth0 [10.130.0.10/23] from ovn-kubernetes openshift-cloud-credential-operator 59m Normal Pulling pod/cloud-credential-operator-7fffc6cb67-gkvnc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:023392c216b04a82b69315c210827b2776d95583bee16754f55577573553cad4" openshift-machine-api 59m Normal Started pod/machine-api-operator-564474f8c6-284hs Started container kube-rbac-proxy openshift-authentication-operator 59m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " openshift-controller-manager 59m Warning FailedMount pod/controller-manager-64556d4c99-kxhn7 MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found openshift-controller-manager 59m Warning FailedMount pod/controller-manager-64556d4c99-46tn2 MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found openshift-operator-lifecycle-manager 59m Normal Started 
pod/olm-operator-647f89bf4f-rgnx9 Started container olm-operator openshift-operator-lifecycle-manager 59m Normal Created pod/olm-operator-647f89bf4f-rgnx9 Created container olm-operator openshift-operator-lifecycle-manager 59m Normal AddedInterface pod/olm-operator-647f89bf4f-rgnx9 Add eth0 [10.130.0.25/23] from ovn-kubernetes openshift-machine-api 59m Normal AddedInterface pod/machine-api-operator-564474f8c6-284hs Add eth0 [10.130.0.5/23] from ovn-kubernetes openshift-cluster-node-tuning-operator 59m Normal Pulling pod/cluster-node-tuning-operator-5886c76fd4-7qpt5 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" openshift-cluster-node-tuning-operator 59m Normal AddedInterface pod/cluster-node-tuning-operator-5886c76fd4-7qpt5 Add eth0 [10.130.0.22/23] from ovn-kubernetes openshift-machine-api 59m Normal Pulling pod/control-plane-machine-set-operator-77b4c948f8-s7qsh Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:278a7aba8f50daaaa56984563a5ca591493989e3353eda2da9516f45a35ee7ed" openshift-ingress-operator 59m Normal Pulling pod/ingress-operator-6486794b49-42ddh Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" openshift-multus 59m Normal AddedInterface pod/multus-admission-controller-6f95d97cb6-x5s87 Add eth0 [10.130.0.20/23] from ovn-kubernetes openshift-machine-api 59m Normal Pulling pod/machine-api-operator-564474f8c6-284hs Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fae409a0e6467f2d4e5e1cd0974a33f71fddf6f3b567c278b3a9aad56aa0f089" openshift-multus 59m Normal Pulling pod/multus-admission-controller-6f95d97cb6-x5s87 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c3cca6e2da92a6cd38e7f20f77bffc675895bd800157fdb50261b7f7ea9fc90" openshift-authentication-operator 59m Normal ConfigMapCreated deployment/authentication-operator Created ConfigMap/audit -n openshift-authentication because it was missing openshift-machine-api 59m Normal Pulled pod/machine-api-operator-564474f8c6-284hs Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-apiserver-operator 59m Normal ServiceAccountCreated deployment/kube-apiserver-operator Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing openshift-machine-api 59m Normal Created pod/machine-api-operator-564474f8c6-284hs Created container kube-rbac-proxy openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 59m Normal StorageVersionMigrationCreated deployment/kube-apiserver-operator Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing openshift-kube-apiserver-operator 59m Normal StorageVersionMigrationCreated deployment/kube-apiserver-operator Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing openshift-operator-lifecycle-manager 59m Normal Pulled pod/olm-operator-647f89bf4f-rgnx9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" already present on machine 
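The run of FailedMount and RequiredInstallerResourcesMissing warnings in the records above is expected while bootstrap serving certificates and revision inputs are still being created, but it is easier to review once separated from the Normal noise. A minimal sketch of that filtering follows; it assumes the `kubernetes` Python client and a kubeconfig with read access to events, neither of which is part of this log:

    from kubernetes import client, config

    # Load credentials from the default kubeconfig (assumption: run outside the cluster).
    config.load_kube_config()
    v1 = client.CoreV1Api()

    # core/v1 Events carry a "type" field, so Warning entries such as FailedMount
    # can be selected server-side with a field selector.
    warnings = v1.list_event_for_all_namespaces(field_selector="type=Warning")
    for ev in warnings.items:
        print(f"{ev.metadata.namespace}\t{ev.reason}\t"
              f"{ev.involved_object.kind}/{ev.involved_object.name}\t{ev.message}")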
openshift-ingress-operator 59m Normal AddedInterface pod/ingress-operator-6486794b49-42ddh Add eth0 [10.130.0.15/23] from ovn-kubernetes openshift-kube-apiserver-operator 59m Normal PrometheusRuleCreated deployment/kube-apiserver-operator Created PrometheusRule.monitoring.coreos.com/v1 because it was missing openshift-etcd-operator 59m Warning ReportEtcdMembersErrorUpdatingStatus deployment/etcd-operator Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again openshift-kube-scheduler-operator 59m Warning RequiredInstallerResourcesMissing deployment/openshift-kube-scheduler-operator secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1 openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" openshift-operator-lifecycle-manager 59m Normal InstallSucceeded clusterserviceversion/packageserver waiting for install components to report healthy openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing 
operand on node ip-10-0-140-6.ec2.internal]",Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" openshift-operator-lifecycle-manager 59m Normal RequirementsUnknown clusterserviceversion/packageserver requirements not yet checked openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-kube-apiserver-operator 59m Normal TargetUpdateRequired deployment/kube-apiserver-operator "check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter openshift-kube-scheduler-operator 59m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 59m Normal RevisionCreate deployment/openshift-kube-scheduler-operator Revision 1 created because configmap "kube-scheduler-pod-1" not found openshift-kube-scheduler-operator 59m Normal RevisionTriggered deployment/openshift-kube-scheduler-operator new revision 3 triggered by "secret/serving-cert has changed" openshift-operator-lifecycle-manager 59m Normal ScalingReplicaSet deployment/packageserver Scaled up replica set packageserver-7c998868c6 to 2 openshift-operator-lifecycle-manager 59m Normal AllRequirementsMet clusterserviceversion/packageserver all requirements found, attempting install openshift-etcd-operator 59m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot 
be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" openshift-operator-lifecycle-manager 59m Normal SuccessfulCreate replicaset/packageserver-7c998868c6 Created pod: packageserver-7c998868c6-mxs6q openshift-operator-lifecycle-manager 59m Normal SuccessfulCreate replicaset/packageserver-7c998868c6 Created pod: packageserver-7c998868c6-vtkkk openshift-authentication-operator 59m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: \"oauth-openshift/audit-policy.yaml\" (string): namespaces \"openshift-authentication\" not found\nOpenshiftAuthenticationStaticResourcesDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found" openshift-operator-lifecycle-manager 59m Normal Pulling pod/packageserver-7c998868c6-vtkkk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" openshift-route-controller-manager 59m Normal SuccessfulDelete replicaset/route-controller-manager-7d7696bfd4 Deleted pod: route-controller-manager-7d7696bfd4-z2bjq openshift-controller-manager 59m Normal SuccessfulCreate replicaset/controller-manager-5ff6588dbb Created pod: controller-manager-5ff6588dbb-fwcgz openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/revision-status-3 -n openshift-kube-scheduler because it was missing openshift-route-controller-manager 59m Normal SuccessfulCreate replicaset/route-controller-manager-678c989865 Created pod: route-controller-manager-678c989865-fj78v openshift-operator-lifecycle-manager 59m Normal AddedInterface pod/packageserver-7c998868c6-mxs6q Add eth0 [10.129.0.11/23] from ovn-kubernetes openshift-cluster-csi-drivers 59m Warning FailedMount pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr MountVolume.SetUp failed for volume "metrics-serving-cert" : secret "aws-ebs-csi-driver-controller-metrics-serving-cert" not found openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing openshift-controller-manager-operator 59m Normal ConfigMapUpdated deployment/openshift-controller-manager-operator Updated ConfigMap/config -n openshift-controller-manager:... 
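The OperatorStatusChanged events above only record message diffs; the current Degraded/Progressing/Available state lives in each ClusterOperator's status. A minimal sketch of reading it with the same Python client as the earlier sketch, assuming the OpenShift config.openshift.io/v1 ClusterOperator API is reachable:

    from kubernetes import client, config

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    # ClusterOperator is a cluster-scoped custom resource, so it is read through
    # the custom objects API rather than a typed client.
    cos = custom.list_cluster_custom_object("config.openshift.io", "v1", "clusteroperators")
    for co in cos.get("items", []):
        name = co["metadata"]["name"]
        for cond in co.get("status", {}).get("conditions", []):
            if cond.get("type") in ("Available", "Progressing", "Degraded"):
                print(f"{name}: {cond['type']}={cond['status']}  {cond.get('message', '')}")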
openshift-controller-manager 59m Normal ScalingReplicaSet deployment/controller-manager Scaled down replica set controller-manager-64556d4c99 to 0 from 1 openshift-route-controller-manager 59m Normal ScalingReplicaSet deployment/route-controller-manager Scaled up replica set route-controller-manager-678c989865 to 1 from 0 openshift-controller-manager-operator 59m Normal ConfigMapUpdated deployment/openshift-controller-manager-operator Updated ConfigMap/config -n openshift-route-controller-manager:... openshift-controller-manager-operator 59m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" openshift-operator-lifecycle-manager 59m Normal AddedInterface pod/packageserver-7c998868c6-vtkkk Add eth0 [10.128.0.12/23] from ovn-kubernetes openshift-kube-apiserver-operator 59m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,external-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: 
bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" openshift-controller-manager 59m Normal ScalingReplicaSet deployment/controller-manager Scaled up replica set controller-manager-5ff6588dbb to 1 from 0 openshift-route-controller-manager 59m Normal ScalingReplicaSet deployment/route-controller-manager Scaled down replica set route-controller-manager-7d7696bfd4 to 1 from 2 openshift-kube-apiserver-operator 59m Warning RequiredInstallerResourcesMissing deployment/kube-apiserver-operator configmaps: client-ca, secrets: external-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1 openshift-operator-lifecycle-manager 59m Normal InstallWaiting clusterserviceversion/packageserver apiServices not installed openshift-cluster-csi-drivers 59m Warning FailedMount pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd MountVolume.SetUp failed for volume "metrics-serving-cert" : secret "aws-ebs-csi-driver-controller-metrics-serving-cert" not found openshift-controller-manager 59m Normal SuccessfulDelete replicaset/controller-manager-64556d4c99 Deleted pod: controller-manager-64556d4c99-kxhn7 openshift-operator-lifecycle-manager 59m Normal Pulling pod/packageserver-7c998868c6-mxs6q Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" openshift-kube-controller-manager-operator 59m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-machine-config-operator 59m Normal Started pod/machine-config-daemon-s6f62 Started container machine-config-daemon 
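The controller-manager and route-controller-manager events above show one rollout step: the new replica set is scaled up while the old one is scaled down. To follow such a rollout live rather than from a dump, the event stream for a single namespace can be watched. A minimal sketch, with the namespace and timeout chosen here purely as examples (assumptions, client setup as before):

    from kubernetes import client, config, watch

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Stream events for one namespace; the watch ends after the timeout.
    w = watch.Watch()
    for item in w.stream(v1.list_namespaced_event,
                         namespace="openshift-controller-manager",
                         timeout_seconds=120):
        ev = item["object"]
        print(f"{item['type']}\t{ev.reason}\t"
              f"{ev.involved_object.kind}/{ev.involved_object.name}\t{ev.message}")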
openshift-machine-config-operator 59m Normal Pulling pod/machine-config-daemon-s6f62 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" openshift-kube-apiserver-operator 59m Normal TargetUpdateRequired deployment/kube-apiserver-operator "external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing openshift-machine-config-operator 59m Normal Pulled pod/machine-config-daemon-s6f62 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-machine-config-operator 59m Normal Created pod/machine-config-daemon-s6f62 Created container machine-config-daemon openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing openshift-machine-config-operator 59m Normal Pulled pod/machine-config-daemon-zlzm2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-machine-config-operator 59m Normal Pulling pod/machine-config-daemon-zlzm2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" openshift-machine-config-operator 59m Normal Created pod/machine-config-daemon-zlzm2 Created container machine-config-daemon openshift-machine-config-operator 59m Normal Started pod/machine-config-daemon-zlzm2 Started container machine-config-daemon openshift-kube-scheduler-operator 59m Warning RequiredInstallerResourcesMissing deployment/openshift-kube-scheduler-operator secrets: kube-scheduler-client-cert-key openshift-multus 59m Normal Pulled pod/multus-admission-controller-6f95d97cb6-x5s87 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c3cca6e2da92a6cd38e7f20f77bffc675895bd800157fdb50261b7f7ea9fc90" in 5.150287548s (5.150301596s including waiting) openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing openshift-multus 59m Normal Pulled pod/multus-admission-controller-6f95d97cb6-7wv72 Successfully pulled image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c3cca6e2da92a6cd38e7f20f77bffc675895bd800157fdb50261b7f7ea9fc90" in 5.601420743s (5.601428674s including waiting) openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing openshift-machine-config-operator 59m Normal Pulled pod/machine-config-daemon-zlzm2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" in 1.436169524s (1.436182963s including waiting) openshift-machine-config-operator 59m Normal Created pod/machine-config-daemon-s6f62 Created container oauth-proxy openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing openshift-machine-config-operator 59m Normal Created pod/machine-config-daemon-zlzm2 Created container oauth-proxy openshift-kube-controller-manager-operator 59m Warning ObservedConfigWriteError deployment/kube-controller-manager-operator Failed to write observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again openshift-kube-apiserver-operator 59m Normal TargetUpdateRequired deployment/kube-apiserver-operator "kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: missing notAfter openshift-machine-config-operator 59m Normal Started pod/machine-config-daemon-s6f62 Started container oauth-proxy openshift-machine-config-operator 59m Normal Pulled pod/machine-config-daemon-s6f62 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" in 1.441955938s (1.441969497s including waiting) openshift-etcd-operator 59m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress" openshift-machine-config-operator 59m Normal Started pod/machine-config-daemon-zlzm2 Started container oauth-proxy openshift-kube-apiserver-operator 59m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing openshift-kube-controller-manager-operator 59m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it 
was missing openshift-kube-apiserver-operator 59m Normal TargetUpdateRequired deployment/kube-apiserver-operator "kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: missing notAfter openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing openshift-cluster-machine-approver 59m Normal Pulled pod/machine-approver-5cd47987c9-96cvq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:90fd3983343e366cb4df6f35efa1527e4b5da93e90558f23aa416cb9c453375e" in 9.000456007s (9.000465186s including waiting) openshift-kube-scheduler-operator 59m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing openshift-dns-operator 59m Normal Pulled pod/dns-operator-656b9bd9f9-lb9ps Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ecfd2df94486e0570eeb0f88a5696ecaa0e1e54bc67d342aab3a6167863175fe" in 8.929463077s (8.92950535s including waiting) openshift-kube-apiserver-operator 59m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca,control-plane-node-kubeconfig, secrets: check-endpoints-client-cert-key,external-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: external-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" openshift-machine-api 59m Normal Pulled pod/control-plane-machine-set-operator-77b4c948f8-s7qsh Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:278a7aba8f50daaaa56984563a5ca591493989e3353eda2da9516f45a35ee7ed" in 8.627280665s (8.627297086s including waiting) openshift-machine-config-operator 59m Normal Started 
pod/machine-config-daemon-ll5kq Started container machine-config-daemon openshift-cloud-credential-operator 59m Normal Started pod/cloud-credential-operator-7fffc6cb67-gkvnc Started container cloud-credential-operator openshift-cluster-node-tuning-operator 59m Normal Started pod/cluster-node-tuning-operator-5886c76fd4-7qpt5 Started container cluster-node-tuning-operator openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Started container csi-driver openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Created container csi-driver openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" already present on machine openshift-cluster-csi-drivers 59m Normal AddedInterface pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Add eth0 [10.130.0.38/23] from ovn-kubernetes openshift-multus 59m Normal Created pod/multus-admission-controller-6f95d97cb6-7wv72 Created container multus-admission-controller openshift-kube-scheduler-operator 59m Normal ConfigMapUpdated deployment/openshift-kube-scheduler-operator Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler:... openshift-cluster-node-tuning-operator 59m Normal Created pod/cluster-node-tuning-operator-5886c76fd4-7qpt5 Created container cluster-node-tuning-operator openshift-ingress-operator 59m Normal Created pod/ingress-operator-6486794b49-42ddh Created container kube-rbac-proxy openshift-ingress-operator 59m Normal Pulled pod/ingress-operator-6486794b49-42ddh Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-node-tuning-operator 59m Normal Pulled pod/cluster-node-tuning-operator-5886c76fd4-7qpt5 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" in 8.898182712s (8.898189842s including waiting) openshift-ingress-operator 59m Normal Pulled pod/ingress-operator-6486794b49-42ddh Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" in 8.796386693s (8.796399829s including waiting) openshift-kube-scheduler-operator 59m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing openshift-multus 59m Normal Started pod/multus-admission-controller-6f95d97cb6-7wv72 Started container multus-admission-controller openshift-multus 59m Normal Pulled pod/multus-admission-controller-6f95d97cb6-7wv72 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-multus 59m Normal Created pod/multus-admission-controller-6f95d97cb6-7wv72 Created container kube-rbac-proxy openshift-multus 59m Normal Started pod/multus-admission-controller-6f95d97cb6-7wv72 Started container kube-rbac-proxy 
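The kube-scheduler events above follow the static pod operator's revision pattern: each missing or changed input (kube-scheduler-pod-3, serving-cert-3, and so on) triggers a new numbered revision, and the Progressing message tracks which revision each master node has reached. A minimal sketch of reading that per-node state from the KubeScheduler operator resource; the field names follow the operator.openshift.io/v1 status as I understand it, so treat them as an assumption:

    from kubernetes import client, config

    config.load_kube_config()
    custom = client.CustomObjectsApi()

    # KubeScheduler is a cluster-scoped operator resource named "cluster".
    ks = custom.get_cluster_custom_object("operator.openshift.io", "v1",
                                          "kubeschedulers", "cluster")
    for node in ks.get("status", {}).get("nodeStatuses", []):
        print(f"{node.get('nodeName')}: current revision {node.get('currentRevision')}, "
              f"target revision {node.get('targetRevision')}")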
openshift-cloud-credential-operator 59m Normal Created pod/cloud-credential-operator-7fffc6cb67-gkvnc Created container cloud-credential-operator openshift-kube-controller-manager-operator 59m Normal ConfigMapUpdated deployment/kube-controller-manager-operator Updated ConfigMap/config -n openshift-kube-controller-manager:... openshift-cloud-credential-operator 59m Normal Pulled pod/cloud-credential-operator-7fffc6cb67-gkvnc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:023392c216b04a82b69315c210827b2776d95583bee16754f55577573553cad4" in 8.759958478s (8.759965829s including waiting) openshift-machine-config-operator 59m Normal Pulled pod/machine-config-daemon-ll5kq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-machine-config-operator 59m Normal Created pod/machine-config-daemon-ll5kq Created container machine-config-daemon openshift-machine-api 59m Normal Created pod/control-plane-machine-set-operator-77b4c948f8-s7qsh Created container control-plane-machine-set-operator openshift-dns-operator 59m Normal Started pod/dns-operator-656b9bd9f9-lb9ps Started container kube-rbac-proxy openshift-dns-operator 59m Normal Created pod/dns-operator-656b9bd9f9-lb9ps Created container kube-rbac-proxy openshift-machine-api 59m Normal Started pod/control-plane-machine-set-operator-77b4c948f8-s7qsh Started container control-plane-machine-set-operator openshift-machine-api 59m Normal Started pod/machine-api-operator-564474f8c6-284hs Started container machine-api-operator openshift-machine-api 59m Normal Pulled pod/machine-api-operator-564474f8c6-284hs Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fae409a0e6467f2d4e5e1cd0974a33f71fddf6f3b567c278b3a9aad56aa0f089" in 8.844593057s (8.844609462s including waiting) openshift-multus 59m Normal Created pod/multus-admission-controller-6f95d97cb6-x5s87 Created container multus-admission-controller openshift-machine-api 59m Normal ScalingReplicaSet deployment/machine-api-controllers Scaled up replica set machine-api-controllers-674d9f54f6 to 1 openshift-cluster-machine-approver 59m Normal LeaderElection lease/cluster-machine-approver-leader ip-10-0-197-197_ee810cd0-fcc2-48ba-ab00-cb74037c683a became leader openshift-ingress-operator 59m Normal Started pod/ingress-operator-6486794b49-42ddh Started container kube-rbac-proxy openshift-multus 59m Normal Started pod/multus-admission-controller-6f95d97cb6-x5s87 Started container multus-admission-controller openshift-multus 59m Normal Pulled pod/multus-admission-controller-6f95d97cb6-x5s87 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-controller-manager-operator 59m Normal ConfigMapUpdated deployment/kube-controller-manager-operator Updated ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager:... 
openshift-multus 59m Normal Created pod/multus-admission-controller-6f95d97cb6-x5s87 Created container kube-rbac-proxy openshift-multus 59m Normal Started pod/multus-admission-controller-6f95d97cb6-x5s87 Started container kube-rbac-proxy openshift-dns-operator 59m Normal Pulled pod/dns-operator-656b9bd9f9-lb9ps Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-dns-operator 59m Normal Started pod/dns-operator-656b9bd9f9-lb9ps Started container dns-operator openshift-machine-config-operator 59m Normal Pulling pod/machine-config-daemon-ll5kq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" openshift-cluster-machine-approver 59m Normal Created pod/machine-approver-5cd47987c9-96cvq Created container machine-approver-controller openshift-cluster-machine-approver 59m Normal Started pod/machine-approver-5cd47987c9-96cvq Started container machine-approver-controller openshift-machine-api 59m Normal SuccessfulCreate replicaset/machine-api-controllers-674d9f54f6 Created pod: machine-api-controllers-674d9f54f6-r6g9g openshift-machine-api 59m Normal Created pod/machine-api-operator-564474f8c6-284hs Created container machine-api-operator openshift-dns-operator 59m Normal Created pod/dns-operator-656b9bd9f9-lb9ps Created container dns-operator openshift-authentication-operator 59m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" openshift-kube-scheduler-operator 59m Normal RevisionCreate deployment/openshift-kube-scheduler-operator Revision 2 created because secret/serving-cert has changed openshift-cluster-csi-drivers 59m Normal Pulling pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77941761aca0cba770d56fcf4d213512b4dd959aa49d3f50c9da02a7aee8d62e" openshift-kube-scheduler-operator 59m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Started container driver-kube-rbac-proxy openshift-etcd-operator 59m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var 
values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress" openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Created container driver-kube-rbac-proxy openshift-etcd-operator 59m Warning ObservedConfigWriteError deployment/etcd-operator Failed to write observed config: Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nNodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-operator-lifecycle-manager 59m Normal Started pod/packageserver-7c998868c6-mxs6q Started container packageserver openshift-kube-scheduler-operator 59m Normal NodeTargetRevisionChanged deployment/openshift-kube-scheduler-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 0 to 3 because node ip-10-0-239-132.ec2.internal static pod not found openshift-machine-config-operator 59m Normal Pulled pod/machine-config-daemon-ll5kq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" in 2.215053821s (2.215084625s including waiting) openshift-operator-lifecycle-manager 59m Normal Pulled pod/packageserver-7c998868c6-mxs6q Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" in 8.426571491s (8.426601174s including waiting) openshift-kube-scheduler-operator 59m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" 
openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/revision-status-2 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/revision-status-4 -n openshift-kube-scheduler because it was missing openshift-multus 59m Normal AddedInterface pod/network-metrics-daemon-7vpmf Add eth0 [10.129.0.4/23] from ovn-kubernetes openshift-kube-controller-manager-operator 59m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-operator-lifecycle-manager 59m Normal Created pod/packageserver-7c998868c6-mxs6q Created container packageserver openshift-kube-controller-manager-operator 59m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "InstallerControllerDegraded: missing required resources: 
[configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77941761aca0cba770d56fcf4d213512b4dd959aa49d3f50c9da02a7aee8d62e" in 2.031423122s (2.031460586s including waiting) openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Created container csi-provisioner openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-csi-drivers 59m Normal LeaderElection lease/ebs-csi-aws-com 1679400900354-8081-ebs-csi-aws-com became leader openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Started container csi-provisioner openshift-machine-config-operator 59m Normal Started pod/machine-config-daemon-ll5kq Started container oauth-proxy openshift-machine-config-operator 59m Normal ServiceAccountCreated deployment/machine-config-operator Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing openshift-machine-config-operator 59m Normal Created pod/machine-config-daemon-ll5kq Created container oauth-proxy openshift-machine-config-operator 59m Normal SuccessfulCreate replicaset/machine-config-controller-7f488c778d Created pod: machine-config-controller-7f488c778d-fvfx4 openshift-machine-config-operator 59m Normal ScalingReplicaSet deployment/machine-config-controller Scaled up replica set machine-config-controller-7f488c778d to 1 openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Created container provisioner-kube-rbac-proxy openshift-machine-config-operator 59m Normal ClusterRoleBindingCreated deployment/machine-config-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing openshift-machine-config-operator 59m Normal RoleBindingCreated deployment/machine-config-operator Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing openshift-operator-lifecycle-manager 59m Normal SuccessfulCreate cronjob/collect-profiles Created job collect-profiles-27990015 openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Started container provisioner-kube-rbac-proxy openshift-machine-config-operator 59m Normal RoleBindingCreated deployment/machine-config-operator Created 
RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing openshift-cluster-csi-drivers 59m Normal Pulling pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:726ed98ed8df6da72ea0aaecf62714470ad60d9e5665b65286271e92e4f46c1d" openshift-machine-config-operator 59m Normal ClusterRoleCreated deployment/machine-config-operator Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing openshift-machine-config-operator 59m Normal ClusterRoleCreated deployment/machine-config-operator Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing openshift-operator-lifecycle-manager 59m Normal SuccessfulCreate job/collect-profiles-27990015 Created pod: collect-profiles-27990015-4vlzz openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing openshift-kube-scheduler 59m Normal AddedInterface pod/installer-3-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.13/23] from ovn-kubernetes openshift-machine-api 59m Normal AddedInterface pod/machine-api-controllers-674d9f54f6-r6g9g Add eth0 [10.128.0.13/23] from ovn-kubernetes openshift-operator-lifecycle-manager 59m Normal Created pod/packageserver-7c998868c6-vtkkk Created container packageserver openshift-operator-lifecycle-manager 59m Normal Pulled pod/packageserver-7c998868c6-vtkkk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" in 9.792623451s (9.792634986s including waiting) openshift-machine-config-operator 59m Normal Started pod/machine-config-controller-7f488c778d-fvfx4 Started container oauth-proxy openshift-machine-config-operator 59m Normal Created pod/machine-config-controller-7f488c778d-fvfx4 Created container oauth-proxy openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 59m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-3-ip-10-0-239-132.ec2.internal -n openshift-kube-scheduler because it was missing openshift-multus 59m Normal AddedInterface pod/network-metrics-daemon-v6lsv Add eth0 [10.128.0.4/23] from ovn-kubernetes openshift-kube-apiserver-operator 59m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: external-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are 
ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" openshift-authentication-operator 59m Normal ConfigMapCreated deployment/authentication-operator Created ConfigMap/revision-status-1 -n openshift-oauth-apiserver because it was missing openshift-machine-config-operator 59m Normal Pulled pod/machine-config-controller-7f488c778d-fvfx4 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-machine-config-operator 59m Normal Pulled pod/machine-config-controller-7f488c778d-fvfx4 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-machine-config-operator 59m Normal Started pod/machine-config-controller-7f488c778d-fvfx4 Started container machine-config-controller openshift-machine-config-operator 59m Normal Created pod/machine-config-controller-7f488c778d-fvfx4 Created container machine-config-controller openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing openshift-authentication-operator 59m Normal RevisionTriggered deployment/authentication-operator new revision 1 triggered by "configmap \"audit-0\" not found" openshift-etcd-operator 59m Normal MasterNodeObserved deployment/etcd-operator Observed new master node ip-10-0-239-132.ec2.internal openshift-machine-api 59m Normal Pulling pod/machine-api-controllers-674d9f54f6-r6g9g Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fae409a0e6467f2d4e5e1cd0974a33f71fddf6f3b567c278b3a9aad56aa0f089" openshift-authentication-operator 59m Normal ConfigMapCreated deployment/authentication-operator Created ConfigMap/audit-1 -n openshift-oauth-apiserver because it was missing openshift-authentication-operator 59m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing changed from Unknown to False ("All is well") openshift-etcd-operator 59m Normal MasterNodeObserved deployment/etcd-operator Observed new master node ip-10-0-140-6.ec2.internal openshift-etcd-operator 59m Normal MasterNodeObserved deployment/etcd-operator Observed new master node ip-10-0-197-197.ec2.internal openshift-machine-config-operator 59m Normal AddedInterface pod/machine-config-controller-7f488c778d-fvfx4 Add eth0 [10.129.0.12/23] from ovn-kubernetes openshift-etcd-operator 59m Warning RequiredInstallerResourcesMissing deployment/etcd-operator configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0 openshift-etcd-operator 59m Normal 
OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress\nNodeControllerDegraded: All master nodes are ready" openshift-operator-lifecycle-manager 59m Normal Started pod/packageserver-7c998868c6-vtkkk Started container packageserver openshift-kube-scheduler 59m Normal Pulling pod/installer-3-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" openshift-machine-api 59m Normal LeaderElection lease/control-plane-machine-set-leader control-plane-machine-set-operator-77b4c948f8-s7qsh_c64177e6-774c-4ec4-b8f5-f13c1005588c became leader openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing openshift-cluster-storage-operator 59m Normal Pulling pod/csi-snapshot-webhook-75476bf784-7vh6f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7a32310238d69d56d35be8f7de426bdbedf96ff73edcd198698ac174c6d3c34" openshift-cluster-csi-drivers 59m Normal LeaderElection lease/external-attacher-leader-ebs-csi-aws-com aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp became leader openshift-cluster-storage-operator 59m Normal AddedInterface pod/csi-snapshot-webhook-75476bf784-7vh6f Add eth0 [10.129.0.6/23] from ovn-kubernetes default 59m Warning FailedToCreateEndpoint endpoints/dns-default Failed to create endpoint for service openshift-dns/dns-default: endpoints "dns-default" already exists openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:726ed98ed8df6da72ea0aaecf62714470ad60d9e5665b65286271e92e4f46c1d" in 1.742775951s (1.74278913s including waiting) openshift-dns 59m Normal SuccessfulCreate daemonset/node-resolver Created pod: node-resolver-ndpz5 openshift-dns 59m Normal SuccessfulCreate daemonset/node-resolver Created pod: node-resolver-t57dw openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Created container csi-attacher openshift-dns 59m Normal SuccessfulCreate daemonset/node-resolver Created pod: node-resolver-dqg6k kube-system 59m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-dns namespace openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Started container csi-attacher openshift-dns 
59m Normal SuccessfulCreate daemonset/dns-default Created pod: dns-default-tnhzk openshift-dns 59m Normal SuccessfulCreate daemonset/dns-default Created pod: dns-default-wnmv8 openshift-cluster-csi-drivers 59m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Started container attacher-kube-rbac-proxy openshift-cluster-csi-drivers 59m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Created container attacher-kube-rbac-proxy openshift-cluster-csi-drivers 59m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-controller-manager-operator 59m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-dns 59m Normal SuccessfulCreate daemonset/dns-default Created pod: dns-default-vlp6d openshift-etcd-operator 59m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/revision-status-2 -n openshift-etcd because it was missing openshift-dns 59m Normal Pulling pod/node-resolver-dqg6k Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing openshift-dns 59m Normal Pulling pod/node-resolver-ndpz5 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" openshift-machine-config-operator 59m Normal ClusterRoleCreated deployment/machine-config-operator Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing openshift-kube-scheduler 59m Normal Pulled pod/installer-3-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" in 1.960622545s (1.960634245s including waiting) openshift-dns 59m Normal Pulling 
pod/dns-default-wnmv8 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" openshift-dns 59m Normal AddedInterface pod/dns-default-vlp6d Add eth0 [10.130.0.39/23] from ovn-kubernetes openshift-cluster-storage-operator 59m Normal Pulling pod/csi-snapshot-webhook-75476bf784-7z4rl Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7a32310238d69d56d35be8f7de426bdbedf96ff73edcd198698ac174c6d3c34" openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing openshift-cluster-storage-operator 59m Normal AddedInterface pod/csi-snapshot-webhook-75476bf784-7z4rl Add eth0 [10.128.0.5/23] from ovn-kubernetes openshift-machine-config-operator 59m Normal ClusterRoleBindingCreated deployment/machine-config-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing openshift-machine-config-operator 59m Normal SecretCreated deployment/machine-config-operator Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing openshift-machine-config-operator 59m Normal ServiceAccountCreated deployment/machine-config-operator Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing openshift-cluster-storage-operator 59m Normal Pulled pod/csi-snapshot-webhook-75476bf784-7vh6f Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7a32310238d69d56d35be8f7de426bdbedf96ff73edcd198698ac174c6d3c34" in 1.403472009s (1.403485181s including waiting) openshift-machine-config-operator 59m Normal ServiceAccountCreated deployment/machine-config-operator Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing openshift-dns 59m Normal AddedInterface pod/dns-default-wnmv8 Add eth0 [10.128.0.14/23] from ovn-kubernetes openshift-machine-config-operator 59m Normal ClusterRoleBindingCreated deployment/machine-config-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing openshift-controller-manager 59m Warning FailedMount pod/controller-manager-64556d4c99-8fw47 MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found openshift-machine-config-operator 59m Normal SuccessfulCreate daemonset/machine-config-server Created pod: machine-config-server-8rhkb openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing openshift-authentication-operator 59m Normal ObserveTLSSecurityProfile deployment/authentication-operator minTLSVersion changed to VersionTLS12 openshift-authentication-operator 59m Normal ObserveTLSSecurityProfile deployment/authentication-operator cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" 
"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] openshift-controller-manager 59m Warning FailedMount pod/controller-manager-64556d4c99-kxhn7 MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found openshift-etcd-operator 59m Normal ObservedConfigChanged deployment/etcd-operator Writing updated observed config: map[string]any{... openshift-etcd-operator 59m Normal ObserveTLSSecurityProfile deployment/etcd-operator cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] openshift-etcd-operator 59m Normal ObserveTLSSecurityProfile deployment/etcd-operator minTLSVersion changed to VersionTLS12 openshift-cluster-storage-operator 59m Normal Pulled pod/csi-snapshot-webhook-75476bf784-7z4rl Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7a32310238d69d56d35be8f7de426bdbedf96ff73edcd198698ac174c6d3c34" in 1.377790524s (1.377797855s including waiting) openshift-etcd-operator 59m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress\nNodeControllerDegraded: All master nodes are ready" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress\nNodeControllerDegraded: All master nodes are ready" openshift-route-controller-manager 59m Warning FailedMount pod/route-controller-manager-7d7696bfd4-2tvnf MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found openshift-kube-controller-manager-operator 59m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: 
cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" openshift-machine-config-operator 59m Normal SuccessfulCreate daemonset/machine-config-server Created pod: machine-config-server-4bmnx openshift-machine-config-operator 59m Normal Started pod/machine-config-server-9k88t Started container machine-config-server openshift-authentication-operator 59m Normal ObserveTemplates deployment/authentication-operator templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"] openshift-kube-scheduler 59m Normal Started pod/installer-3-ip-10-0-239-132.ec2.internal Started container installer openshift-kube-scheduler 59m Normal Created pod/installer-3-ip-10-0-239-132.ec2.internal Created container installer openshift-machine-config-operator 59m Normal Created pod/machine-config-server-9k88t Created container machine-config-server openshift-machine-config-operator 59m Normal Pulled pod/machine-config-server-9k88t Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-machine-config-operator 59m Normal Started pod/machine-config-server-8rhkb Started container machine-config-server openshift-machine-config-operator 59m Normal Created pod/machine-config-server-8rhkb Created container machine-config-server openshift-kube-scheduler-operator 59m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing openshift-cluster-storage-operator 59m Normal Started pod/csi-snapshot-webhook-75476bf784-7vh6f Started container webhook openshift-cluster-storage-operator 59m Normal Created pod/csi-snapshot-webhook-75476bf784-7vh6f Created container webhook openshift-machine-config-operator 59m Normal Pulled pod/machine-config-server-8rhkb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-machine-config-operator 59m Normal SuccessfulCreate daemonset/machine-config-server Created pod: machine-config-server-9k88t openshift-dns 59m Normal Pulling pod/dns-default-tnhzk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" openshift-route-controller-manager 59m Warning FailedMount pod/route-controller-manager-7d7696bfd4-zpkmp MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found openshift-authentication-operator 59m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service 
openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing openshift-cluster-storage-operator 59m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Available changed from False to True ("All is well") openshift-route-controller-manager 59m Warning FailedMount pod/route-controller-manager-7d7696bfd4-z2bjq MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found openshift-controller-manager 59m Warning FailedMount pod/controller-manager-64556d4c99-46tn2 MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found openshift-dns 59m Normal AddedInterface pod/dns-default-tnhzk Add eth0 [10.129.0.14/23] from ovn-kubernetes openshift-authentication-operator 59m Normal ObserveTokenConfig deployment/authentication-operator accessTokenMaxAgeSeconds changed from %!d(float64=0) to %!d(float64=86400) openshift-kube-controller-manager-operator 59m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" to "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node 
ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-etcd-operator 59m Warning EnvVarControllerUpdatingStatus deployment/etcd-operator Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again openshift-kube-controller-manager-operator 59m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 58m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing openshift-dns 58m Normal Created pod/dns-default-tnhzk Created container dns openshift-etcd-operator 58m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-pod -n openshift-etcd because it was missing openshift-dns 58m Normal Pulled pod/dns-default-tnhzk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-dns 58m Normal Created pod/dns-default-tnhzk Created container kube-rbac-proxy openshift-dns 58m Normal Started pod/dns-default-tnhzk Started container kube-rbac-proxy openshift-kube-controller-manager-operator 58m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing openshift-dns 58m Normal Pulled pod/dns-default-tnhzk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" in 2.353656605s (2.35367162s including waiting) openshift-kube-scheduler-operator 58m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing default 58m Normal RenderedConfigGenerated machineconfigpool/master rendered-master-ff215a8818ae4a038b75bd4a838f2d00 successfully generated (release version: 4.13.0-rc.0, controller version: 40575b862f7bd42a2c40c8e6b7203cd4c29b0021) openshift-etcd-operator 58m Normal RevisionTriggered deployment/etcd-operator new revision 2 triggered by "configmap \"etcd-pod-1\" not found" openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nEnvVarControllerDegraded: empty NodeStatuses, can't generate environment for getEscapedIPAddress\nNodeControllerDegraded: All master nodes are ready" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nNodeControllerDegraded: All master nodes are ready" openshift-etcd-operator 58m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing openshift-etcd-operator 58m Warning RequiredInstallerResourcesMissing deployment/etcd-operator configmaps: 
etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1 openshift-dns 58m Normal Started pod/node-resolver-dqg6k Started container dns-node-resolver openshift-etcd-operator 58m Normal RevisionTriggered deployment/etcd-operator new revision 2 triggered by "configmap \"etcd-pod\" not found" default 58m Normal RenderedConfigGenerated machineconfigpool/worker rendered-worker-e5630006427036c937f2156f999e7beb successfully generated (release version: 4.13.0-rc.0, controller version: 40575b862f7bd42a2c40c8e6b7203cd4c29b0021) openshift-dns 58m Normal Created pod/node-resolver-dqg6k Created container dns-node-resolver openshift-dns 58m Normal Pulled pod/node-resolver-dqg6k Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" in 2.586494603s (2.586509118s including waiting) openshift-dns 58m Normal Started pod/dns-default-tnhzk Started container dns openshift-kube-controller-manager-operator 58m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing openshift-dns 58m Normal Started pod/node-resolver-ndpz5 Started container dns-node-resolver openshift-dns 58m Normal Started pod/dns-default-wnmv8 Started container dns openshift-dns 58m Normal Pulled pod/dns-default-wnmv8 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-dns 58m Normal Created pod/dns-default-wnmv8 Created container kube-rbac-proxy openshift-dns 58m Normal Started pod/dns-default-wnmv8 Started container kube-rbac-proxy openshift-authentication-operator 58m Normal ObserveAPIServerURL deployment/authentication-operator loginURL changed from to https://api.qeaisrhods-c13.abmw.s1.devshift.org:6443 openshift-dns 58m Normal Pulled pod/dns-default-wnmv8 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" in 4.14949467s (4.14951781s including waiting) openshift-authentication-operator 58m Normal ObserveAuditProfile deployment/authentication-operator AuditProfile changed from '%!s()' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]' openshift-authentication-operator 58m Normal ObservedConfigChanged deployment/authentication-operator Writing updated section ("oauthServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\"oauthConfig\": map[string]any{\n+ \t\t\t\"assetPublicURL\": string(\"\"),\n+ \t\t\t\"loginURL\": string(\"https://api.qeaisrhods-c13.abmw.s1.devshift.org:6443\"),\n+ \t\t\t\"templates\": map[string]any{\n+ \t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+ \t\t\t},\n+ \t\t\t\"tokenConfig\": map[string]any{\n+ \t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+ \t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+ \t\t\t},\n+ \t\t},\n+ 
\t\t\"serverArguments\": map[string]any{\n+ \t\t\t\"audit-log-format\": []any{string(\"json\")},\n+ \t\t\t\"audit-log-maxbackup\": []any{string(\"10\")},\n+ \t\t\t\"audit-log-maxsize\": []any{string(\"100\")},\n+ \t\t\t\"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")},\n+ \t\t\t\"audit-policy-file\": []any{string(\"/var/run/configmaps/audit/audit.\"...)},\n+ \t\t},\n+ \t\t\"servingInfo\": map[string]any{\n+ \t\t\t\"cipherSuites\": []any{\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...),\n+ \t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_S\"...),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM\"...),\n+ \t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_S\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+ \t},\n )\n" openshift-cluster-storage-operator 58m Normal Started pod/csi-snapshot-webhook-75476bf784-7z4rl Started container webhook openshift-cluster-csi-drivers 58m Normal AddedInterface pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Add eth0 [10.128.0.11/23] from ovn-kubernetes openshift-cluster-storage-operator 58m Normal Created pod/csi-snapshot-webhook-75476bf784-7z4rl Created container webhook openshift-kube-apiserver-operator 58m Normal ObserveTLSSecurityProfile deployment/kube-apiserver-operator minTLSVersion changed to VersionTLS12 openshift-dns 58m Normal Pulled pod/node-resolver-ndpz5 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" in 4.260350463s (4.260363518s including waiting) openshift-dns 58m Normal Created pod/node-resolver-ndpz5 Created container dns-node-resolver openshift-kube-scheduler-operator 58m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing openshift-machine-api 58m Normal Created pod/machine-api-controllers-674d9f54f6-r6g9g Created container machineset-controller openshift-machine-api 58m Normal Started pod/machine-api-controllers-674d9f54f6-r6g9g Started container machineset-controller openshift-kube-apiserver-operator 58m Normal ObserveFeatureFlagsUpdated deployment/kube-apiserver-operator Updated apiServerArguments.feature-gates to APIPriorityAndFairness=true,RotateKubeletServerCertificate=true,DownwardAPIHugePages=true,OpenShiftPodSecurityAdmission=true,RetroactiveDefaultStorageClass=false openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nNodeControllerDegraded: All master nodes are ready" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nNodeControllerDegraded: All master nodes are ready" openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing openshift-machine-api 58m Normal Pulling pod/machine-api-controllers-674d9f54f6-r6g9g Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9f622d5408462011492d823946b98c1043c08d2ecf2a264dc9d90f48084a9c8" 
openshift-etcd-operator 58m Normal ConfigMapUpdated deployment/etcd-operator Updated ConfigMap/revision-status-2 -n openshift-etcd:... openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" already present on machine openshift-dns 58m Normal Created pod/dns-default-wnmv8 Created container dns openshift-kube-apiserver-operator 58m Normal ObserveTLSSecurityProfile deployment/kube-apiserver-operator cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] openshift-machine-api 58m Normal Pulled pod/machine-api-controllers-674d9f54f6-r6g9g Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fae409a0e6467f2d4e5e1cd0974a33f71fddf6f3b567c278b3a9aad56aa0f089" in 6.127130427s (6.127138606s including waiting) openshift-etcd-operator 58m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing openshift-cluster-csi-drivers 58m Normal Pulling pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77941761aca0cba770d56fcf4d213512b4dd959aa49d3f50c9da02a7aee8d62e" openshift-etcd-operator 58m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nNodeControllerDegraded: All master nodes are ready" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]",Progressing changed from Unknown to True ("NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1") openshift-cluster-csi-drivers 58m Normal Pulling pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77941761aca0cba770d56fcf4d213512b4dd959aa49d3f50c9da02a7aee8d62e" openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Started container csi-driver openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Started container driver-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Started container driver-kube-rbac-proxy openshift-etcd-operator 58m Warning RequiredInstallerResourcesMissing deployment/etcd-operator configmaps: 
restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1 openshift-kube-scheduler-operator 58m Normal RevisionCreate deployment/openshift-kube-scheduler-operator Revision 3 created because configmap/kube-scheduler-pod has changed openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Created container driver-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Created container csi-driver openshift-kube-scheduler-operator 58m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing openshift-cluster-csi-drivers 58m Normal AddedInterface pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Add eth0 [10.129.0.10/23] from ovn-kubernetes openshift-kube-scheduler-operator 58m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nRevisionControllerDegraded: conflicting latestAvailableRevision 4" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" openshift-kube-scheduler-operator 58m Normal RevisionTriggered deployment/openshift-kube-scheduler-operator new revision 4 triggered by "configmap/kube-scheduler-pod has changed" openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" already present on machine openshift-kube-scheduler-operator 58m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nRevisionControllerDegraded: conflicting latestAvailableRevision 4" openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Created 
container driver-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Created container csi-driver openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Started container csi-driver openshift-network-diagnostics 58m Normal AddedInterface pod/network-check-target-tmbg6 Add eth0 [10.128.0.3/23] from ovn-kubernetes openshift-network-diagnostics 58m Normal AddedInterface pod/network-check-target-v92f6 Add eth0 [10.129.0.3/23] from ovn-kubernetes openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77941761aca0cba770d56fcf4d213512b4dd959aa49d3f50c9da02a7aee8d62e" in 1.329804176s (1.329815924s including waiting) openshift-etcd-operator 58m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-serving-ca-2 -n openshift-etcd because it was missing openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]" openshift-kube-scheduler-operator 58m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Started container provisioner-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Pulling pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:726ed98ed8df6da72ea0aaecf62714470ad60d9e5665b65286271e92e4f46c1d" openshift-machine-api 58m Normal Started pod/machine-api-controllers-674d9f54f6-r6g9g Started container kube-rbac-proxy-machineset-mtrc openshift-machine-api 58m Normal Pulled pod/machine-api-controllers-674d9f54f6-r6g9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-machine-api 58m Normal Pulled 
pod/machine-api-controllers-674d9f54f6-r6g9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-machine-api 58m Normal Started pod/machine-api-controllers-674d9f54f6-r6g9g Started container machine-healthcheck-controller openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Created container csi-provisioner openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Started container csi-provisioner openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Created container provisioner-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Started container provisioner-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Created container provisioner-kube-rbac-proxy openshift-machine-api 58m Normal Created pod/machine-api-controllers-674d9f54f6-r6g9g Created container machine-healthcheck-controller openshift-cluster-csi-drivers 58m Normal Pulling pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:726ed98ed8df6da72ea0aaecf62714470ad60d9e5665b65286271e92e4f46c1d" openshift-machine-api 58m Normal Created pod/machine-api-controllers-674d9f54f6-r6g9g Created container kube-rbac-proxy-machineset-mtrc openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Started container csi-provisioner openshift-etcd-operator 58m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-peer-client-ca-2 -n openshift-etcd because it was missing openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]" openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Successfully pulled image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77941761aca0cba770d56fcf4d213512b4dd959aa49d3f50c9da02a7aee8d62e" in 2.121966914s (2.121982657s including waiting) openshift-machine-api 58m Normal Pulled pod/machine-api-controllers-674d9f54f6-r6g9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fae409a0e6467f2d4e5e1cd0974a33f71fddf6f3b567c278b3a9aad56aa0f089" already present on machine openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Created container csi-provisioner openshift-kube-scheduler 58m Normal Killing pod/installer-3-ip-10-0-239-132.ec2.internal Stopping container installer openshift-machine-api 58m Normal Pulled pod/machine-api-controllers-674d9f54f6-r6g9g Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9f622d5408462011492d823946b98c1043c08d2ecf2a264dc9d90f48084a9c8" in 2.72534476s (2.725353382s including waiting) openshift-machine-api 58m Normal Pulled pod/machine-api-controllers-674d9f54f6-r6g9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fae409a0e6467f2d4e5e1cd0974a33f71fddf6f3b567c278b3a9aad56aa0f089" already present on machine openshift-machine-api 58m Normal Started pod/machine-api-controllers-674d9f54f6-r6g9g Started container nodelink-controller openshift-machine-api 58m Normal Created pod/machine-api-controllers-674d9f54f6-r6g9g Created container nodelink-controller openshift-machine-api 58m Normal Started pod/machine-api-controllers-674d9f54f6-r6g9g Started container machine-controller openshift-machine-api 58m Normal Created pod/machine-api-controllers-674d9f54f6-r6g9g Created container machine-controller openshift-controller-manager-operator 58m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" openshift-cluster-node-tuning-operator 58m Normal SuccessfulCreate daemonset/tuned Created pod: tuned-pbkvf openshift-machine-api 58m Normal Started pod/machine-api-controllers-674d9f54f6-r6g9g Started container kube-rbac-proxy-mhc-mtrc openshift-cluster-node-tuning-operator 58m Normal SuccessfulCreate daemonset/tuned Created pod: tuned-x9jkg openshift-machine-api 58m Warning ProbeError pod/machine-api-controllers-674d9f54f6-r6g9g Readiness probe error: Get "http://10.128.0.13:9441/healthz": dial tcp 10.128.0.13:9441: connect: connection refused... 
openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Created container attacher-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Started container csi-attacher openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Created container csi-attacher openshift-cluster-node-tuning-operator 58m Normal LeaderElection lease/node-tuning-operator-lock cluster-node-tuning-operator-5886c76fd4-7qpt5_a79d3a7c-813b-471f-aef9-9ceca8d7f970 became leader openshift-etcd-operator 58m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-client-ca-2 -n openshift-etcd because it was missing openshift-machine-api 58m Normal Created pod/machine-api-controllers-674d9f54f6-r6g9g Created container kube-rbac-proxy-machine-mtrc openshift-cluster-node-tuning-operator 58m Normal LeaderElection configmap/node-tuning-operator-lock cluster-node-tuning-operator-5886c76fd4-7qpt5_a79d3a7c-813b-471f-aef9-9ceca8d7f970 became leader openshift-etcd-operator 58m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-serving-ca-2 -n openshift-etcd because it was missing openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" openshift-cluster-node-tuning-operator 58m Normal SuccessfulCreate daemonset/tuned Created pod: tuned-zxj2p openshift-machine-api 58m Warning Unhealthy pod/machine-api-controllers-674d9f54f6-r6g9g Readiness probe failed: Get "http://10.128.0.13:9441/healthz": dial tcp 10.128.0.13:9441: connect: connection refused 
openshift-etcd-operator 58m Warning EtcdEndpointsErrorUpdatingStatus deployment/etcd-operator Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again openshift-machine-api 58m Normal Pulled pod/machine-api-controllers-674d9f54f6-r6g9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-machine-api 58m Normal Started pod/machine-api-controllers-674d9f54f6-r6g9g Started container kube-rbac-proxy-machine-mtrc openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:726ed98ed8df6da72ea0aaecf62714470ad60d9e5665b65286271e92e4f46c1d" in 1.259170222s (1.259183277s including waiting) openshift-machine-api 58m Normal Created pod/machine-api-controllers-674d9f54f6-r6g9g Created container kube-rbac-proxy-mhc-mtrc openshift-etcd-operator 58m Warning RequiredInstallerResourcesMissing deployment/etcd-operator configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1 openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" openshift-cluster-node-tuning-operator 58m Normal Pulling pod/tuned-zxj2p Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" openshift-kube-scheduler 58m Normal AddedInterface pod/installer-4-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.15/23] from ovn-kubernetes openshift-cluster-csi-drivers 58m Normal Pulled 
pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:726ed98ed8df6da72ea0aaecf62714470ad60d9e5665b65286271e92e4f46c1d" in 2.166357963s (2.166364322s including waiting) openshift-machine-api 58m Normal LeaderElection lease/cluster-api-provider-machineset-leader machine-api-controllers-674d9f54f6-r6g9g_f1dd2127-357e-4d2a-acd9-e8e6924ef9dd became leader openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Created container csi-attacher openshift-kube-scheduler-operator 58m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-4-ip-10-0-239-132.ec2.internal -n openshift-kube-scheduler because it was missing openshift-cluster-node-tuning-operator 58m Normal Pulling pod/tuned-pbkvf Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" openshift-etcd-operator 58m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing openshift-cluster-csi-drivers 58m Normal LeaderElection lease/external-resizer-ebs-csi-aws-com aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd became leader openshift-kube-scheduler 58m Normal Started pod/installer-4-ip-10-0-239-132.ec2.internal Started container installer openshift-kube-scheduler 58m Normal Created pod/installer-4-ip-10-0-239-132.ec2.internal Created container installer openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Created container attacher-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Pulled 
pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-75b78f4dd4-m5mpr Started container csi-attacher openshift-kube-scheduler 58m Normal Pulled pod/installer-4-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]" openshift-etcd-operator 58m Normal RevisionCreate deployment/etcd-operator Revision 1 created because configmap "etcd-pod-1" not found openshift-etcd-operator 58m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing openshift-machine-api 58m Normal LeaderElection lease/cluster-api-provider-nodelink-leader machine-api-controllers-674d9f54f6-r6g9g_2c411d6e-00f7-4f84-b7b7-e2fe2f5b736c became leader default 58m Normal AnnotationChange machineconfigpool/master Node ip-10-0-197-197.ec2.internal now has machineconfiguration.openshift.io/currentConfig=rendered-master-ff215a8818ae4a038b75bd4a838f2d00 default 58m Normal AnnotationChange machineconfigpool/master Node ip-10-0-197-197.ec2.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-ff215a8818ae4a038b75bd4a838f2d00 default 58m Normal AnnotationChange machineconfigpool/master Node ip-10-0-197-197.ec2.internal now has machineconfiguration.openshift.io/state=Done openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready" openshift-machine-api 58m Warning FailedUpdate machineset/qeaisrhods-c13-28wr5-worker-us-east-1a Failed to set autoscaling from zero annotations, instance type unknown openshift-dns 58m Normal Pulling pod/dns-default-vlp6d Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" openshift-machine-api 58m Normal LeaderElection lease/cluster-api-provider-aws-leader machine-api-controllers-674d9f54f6-r6g9g_c3a96e79-e385-4a18-bd7d-1b1c786daf49 became leader openshift-machine-api 58m Normal LeaderElection lease/cluster-api-provider-healthcheck-leader machine-api-controllers-674d9f54f6-r6g9g_ae57dcbf-cb3b-48a2-ad0d-f51f140fa9ba became leader openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" openshift-etcd-operator 58m Normal NodeTargetRevisionChanged deployment/etcd-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 0 to 2 because node ip-10-0-239-132.ec2.internal static pod not found openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" openshift-network-diagnostics 58m Normal AddedInterface pod/network-check-target-dvjbf Add eth0 [10.130.0.3/23] from ovn-kubernetes openshift-cluster-csi-drivers 
58m Normal LeaderElection lease/external-snapshotter-leader-ebs-csi-aws-com aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd became leader openshift-cluster-node-tuning-operator 58m Normal Started pod/tuned-pbkvf Started container tuned openshift-cluster-node-tuning-operator 58m Normal Pulled pod/tuned-pbkvf Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" in 4.231426304s (4.231437371s including waiting) openshift-multus 58m Normal AddedInterface pod/network-metrics-daemon-9gx7g Add eth0 [10.130.0.4/23] from ovn-kubernetes openshift-dns 58m Normal Pulling pod/node-resolver-t57dw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" openshift-machine-config-operator 58m Normal Pulled pod/machine-config-server-4bmnx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-machine-config-operator 58m Normal Started pod/machine-config-server-4bmnx Started container machine-config-server openshift-cluster-node-tuning-operator 58m Normal Created pod/tuned-x9jkg Created container tuned openshift-cluster-node-tuning-operator 58m Normal Started pod/tuned-x9jkg Started container tuned openshift-cluster-node-tuning-operator 58m Normal Pulled pod/tuned-x9jkg Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" already present on machine openshift-machine-config-operator 58m Normal Created pod/machine-config-server-4bmnx Created container machine-config-server openshift-cluster-node-tuning-operator 58m Normal Created pod/tuned-pbkvf Created container tuned openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your 
changes to the latest version and try again" openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" kube-system 58m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-ingress namespace openshift-cluster-node-tuning-operator 58m Normal Started pod/tuned-zxj2p Started container tuned openshift-ingress 58m Normal SuccessfulCreate replicaset/router-default-699d8c97f Created pod: router-default-699d8c97f-9xbcx openshift-ingress 58m Normal SuccessfulCreate replicaset/router-default-699d8c97f Created pod: router-default-699d8c97f-6nwwk openshift-etcd-operator 58m Normal PodCreated deployment/etcd-operator Created Pod/installer-2-ip-10-0-239-132.ec2.internal -n openshift-etcd because it was missing openshift-cluster-node-tuning-operator 58m Normal Pulled pod/tuned-zxj2p Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" in 4.29062738s (4.290634351s including waiting) openshift-ingress 58m Normal ScalingReplicaSet deployment/router-default Scaled up replica set router-default-699d8c97f to 2 openshift-ingress-operator 58m Normal CreatedWildcardCACert secret/router-ca Created a default wildcard CA certificate openshift-ingress-operator 58m Normal Admitted ingresscontroller/default ingresscontroller passed validation openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please 
apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" openshift-cluster-node-tuning-operator 58m Normal Created pod/tuned-zxj2p Created container tuned openshift-cluster-storage-operator 58m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Available changed from False to True ("DefaultStorageClassControllerAvailable: StorageClass provided by supplied CSI Driver instead of the cluster-storage-operator\nAWSEBSCSIDriverOperatorCRAvailable: All is well") openshift-config-managed 58m Normal PublishedRouterCA configmap/default-ingress-cert Published "default-ingress-cert" in "openshift-config-managed" openshift-config-managed 58m Normal PublishedRouterCertificates secret/router-certs Published router certificates openshift-ingress-operator 58m Normal CreatedDefaultCertificate ingresscontroller/default Created default wildcard certificate "router-certs-default" openshift-etcd 58m Normal Pulling pod/installer-2-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" openshift-etcd 58m Normal AddedInterface pod/installer-2-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.16/23] from ovn-kubernetes openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" openshift-kube-controller-manager-operator 58m Normal RevisionTriggered deployment/kube-controller-manager-operator new revision 2 triggered by "configmap \"kube-controller-manager-pod-1\" not found" openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, 
secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" openshift-kube-apiserver-operator 58m Normal ObserveStorageUpdated deployment/kube-apiserver-operator Updated storage urls to https://localhost:2379 openshift-kube-apiserver-operator 58m Warning ConfigMissing deployment/kube-apiserver-operator no observedConfig openshift-kube-apiserver-operator 58m Normal ObservedConfigChanged deployment/kube-apiserver-operator Writing updated observed config: map[string]any{... openshift-kube-scheduler-operator 58m Normal ConfigMapUpdated deployment/openshift-kube-scheduler-operator Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler:... 
openshift-etcd 58m Normal Created pod/installer-2-ip-10-0-239-132.ec2.internal Created container installer openshift-etcd 58m Normal Started pod/installer-2-ip-10-0-239-132.ec2.internal Started container installer openshift-etcd 58m Normal Pulled pod/installer-2-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" in 1.901694393s (1.901708258s including waiting) openshift-kube-controller-manager-operator 58m Warning RequiredInstallerResourcesMissing deployment/kube-controller-manager-operator configmaps: client-ca, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1 openshift-kube-controller-manager-operator 58m Normal RevisionCreate deployment/kube-controller-manager-operator Revision 1 created because configmap "kube-controller-manager-pod-1" not found openshift-machine-api 58m Normal AddedInterface pod/cluster-autoscaler-operator-7fcffdb7c8-g4w4m Add eth0 [10.130.0.6/23] from ovn-kubernetes openshift-kube-scheduler-operator 58m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing openshift-machine-api 58m Normal Pulled pod/cluster-autoscaler-operator-7fcffdb7c8-g4w4m Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-dns 58m Normal Pulled pod/dns-default-vlp6d Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" in 5.725362241s (5.72537519s including waiting) openshift-kube-scheduler-operator 58m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/revision-status-5 -n openshift-kube-scheduler because it was missing openshift-dns 58m Normal Pulled pod/node-resolver-t57dw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" in 5.397818858s (5.397838945s including waiting) openshift-cluster-csi-drivers 58m Normal SuccessfulCreate replicaset/aws-ebs-csi-driver-controller-5ff7cf9694 Created pod: aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 openshift-dns 58m Normal Created pod/dns-default-vlp6d Created container dns openshift-dns 58m Normal Started pod/dns-default-vlp6d Started container dns openshift-cluster-storage-operator 58m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods" to "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to update pods" openshift-dns 58m Normal Pulled pod/dns-default-vlp6d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-dns 58m Normal Created pod/dns-default-vlp6d Created container kube-rbac-proxy openshift-operator-lifecycle-manager 58m Normal Created 
pod/catalog-operator-567d5cdcc9-gwwnx Created container catalog-operator openshift-kube-scheduler-operator 58m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing openshift-operator-lifecycle-manager 58m Normal AddedInterface pod/catalog-operator-567d5cdcc9-gwwnx Add eth0 [10.130.0.29/23] from ovn-kubernetes openshift-operator-lifecycle-manager 58m Normal Pulled pod/catalog-operator-567d5cdcc9-gwwnx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" already present on machine openshift-operator-lifecycle-manager 58m Normal Started pod/catalog-operator-567d5cdcc9-gwwnx Started container catalog-operator openshift-image-registry 58m Normal AddedInterface pod/cluster-image-registry-operator-868788f8c6-frhj8 Add eth0 [10.130.0.26/23] from ovn-kubernetes openshift-marketplace 58m Normal AddedInterface pod/marketplace-operator-554c77d6df-2q9k5 Add eth0 [10.130.0.18/23] from ovn-kubernetes openshift-marketplace 58m Normal Pulling pod/marketplace-operator-554c77d6df-2q9k5 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e8bda93aae5c360f971e4532706ab6a95eb260e026a6704f837016cab6525fb" openshift-machine-api 58m Normal Pulling pod/cluster-baremetal-operator-cb6794dd9-8bqk2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a4f74849d28d578b213bb837750fbc967abe1cf433ad7611dde27be1f15baf36" openshift-machine-api 58m Normal Pulling pod/cluster-autoscaler-operator-7fcffdb7c8-g4w4m Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b696ffc14cdc67e31403d1a6308c7448d7970ed7f872ec18fea9c2017029814" openshift-dns 58m Normal Created pod/node-resolver-t57dw Created container dns-node-resolver openshift-dns 58m Normal Started pod/node-resolver-t57dw Started container dns-node-resolver openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nAuditPolicyDegraded: namespaces \"openshift-oauth-apiserver\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not 
ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" openshift-machine-api 58m Normal AddedInterface pod/cluster-baremetal-operator-cb6794dd9-8bqk2 Add eth0 [10.130.0.7/23] from ovn-kubernetes openshift-machine-api 58m Normal Created pod/cluster-autoscaler-operator-7fcffdb7c8-g4w4m Created container kube-rbac-proxy openshift-machine-api 58m Normal Started pod/cluster-autoscaler-operator-7fcffdb7c8-g4w4m Started container kube-rbac-proxy openshift-image-registry 58m Normal Pulling pod/cluster-image-registry-operator-868788f8c6-frhj8 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d049299956446154ed1d1c21e5d4561bb452b41f6c3bf17a48f3550a2c998cbe" openshift-cluster-csi-drivers 58m Normal SuccessfulDelete replicaset/aws-ebs-csi-driver-controller-75b78f4dd4 Deleted pod: aws-ebs-csi-driver-controller-75b78f4dd4-hcwbd openshift-monitoring 58m Normal Pulling pod/cluster-monitoring-operator-78777bc588-rhggh Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9a1e35d8ae26fad862261135aaaa0658befbaccf9ffba55291dc4e8a95c20546" openshift-monitoring 58m Normal AddedInterface pod/cluster-monitoring-operator-78777bc588-rhggh Add eth0 [10.130.0.11/23] from ovn-kubernetes openshift-cluster-csi-drivers 58m Normal ScalingReplicaSet deployment/aws-ebs-csi-driver-controller Scaled down replica set aws-ebs-csi-driver-controller-75b78f4dd4 to 0 from 1 openshift-cluster-csi-drivers 58m Normal ScalingReplicaSet deployment/aws-ebs-csi-driver-controller Scaled up replica set aws-ebs-csi-driver-controller-5ff7cf9694 to 2 from 1 openshift-cluster-storage-operator 58m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to update pods" to "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods" openshift-dns 58m Normal Started pod/dns-default-vlp6d Started container kube-rbac-proxy openshift-authentication-operator 58m Warning ObservedConfigWriteError deployment/authentication-operator Failed to write observed config: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again openshift-kube-scheduler-operator 
58m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "InstallerControllerDegraded: missing required resources: configmaps: client-ca\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" openshift-kube-controller-manager-operator 58m Warning RequiredInstallerResourcesMissing deployment/kube-controller-manager-operator configmaps: client-ca openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/config -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 58m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 2 triggered by "configmap \"kube-apiserver-pod-1\" not found" openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 58m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing openshift-kube-apiserver-operator 58m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 2 triggered by "configmap \"kube-apiserver-pod\" not found" openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 58m Normal ConfigMapUpdated deployment/kube-apiserver-operator Updated ConfigMap/revision-status-2 -n openshift-kube-apiserver:... 
openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-kube-apiserver-operator 58m Warning RequiredInstallerResourcesMissing deployment/kube-apiserver-operator configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1 openshift-kube-apiserver-operator 58m Warning ConfigMapCreateFailed deployment/kube-apiserver-operator Failed to create ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: configmaps "kube-apiserver-client-ca" already exists openshift-kube-controller-manager-operator 58m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 58m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: configmaps: client-ca\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "InstallerControllerDegraded: missing required resources: configmaps: client-ca\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/config-2 -n 
openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 58m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: conflicting latestAvailableRevision 5" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-kube-scheduler-operator 58m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: conflicting latestAvailableRevision 5" openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing openshift-authentication-operator 58m Normal SecretCreated deployment/authentication-operator Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing openshift-kube-scheduler-operator 58m Normal RevisionCreate deployment/openshift-kube-scheduler-operator Revision 4 created because configmap/serviceaccount-ca has changed openshift-kube-scheduler-operator 58m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDomainValidationControllerDegraded: Operation cannot be fulfilled on 
authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" openshift-kube-scheduler-operator 58m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing openshift-authentication-operator 58m Normal ConfigMapCreated deployment/authentication-operator Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing openshift-kube-scheduler-operator 58m Normal RevisionTriggered deployment/openshift-kube-scheduler-operator new revision 5 triggered by "configmap/serviceaccount-ca has changed" openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing openshift-machine-api 58m Normal Started pod/cluster-baremetal-operator-cb6794dd9-8bqk2 Started container cluster-baremetal-operator openshift-image-registry 58m Normal Created pod/cluster-image-registry-operator-868788f8c6-frhj8 Created container cluster-image-registry-operator openshift-image-registry 58m Normal Pulled pod/cluster-image-registry-operator-868788f8c6-frhj8 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d049299956446154ed1d1c21e5d4561bb452b41f6c3bf17a48f3550a2c998cbe" in 5.423757765s (5.423778649s including waiting) openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: configmaps: client-ca\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-machine-api 58m Normal Created pod/cluster-autoscaler-operator-7fcffdb7c8-g4w4m Created container 
cluster-autoscaler-operator openshift-machine-api 58m Normal Pulled pod/cluster-autoscaler-operator-7fcffdb7c8-g4w4m Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b696ffc14cdc67e31403d1a6308c7448d7970ed7f872ec18fea9c2017029814" in 5.459192801s (5.459202327s including waiting) openshift-machine-api 58m Normal Created pod/cluster-baremetal-operator-cb6794dd9-8bqk2 Created container cluster-baremetal-operator openshift-machine-api 58m Normal Pulled pod/cluster-baremetal-operator-cb6794dd9-8bqk2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing openshift-monitoring 58m Normal Pulled pod/cluster-monitoring-operator-78777bc588-rhggh Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9a1e35d8ae26fad862261135aaaa0658befbaccf9ffba55291dc4e8a95c20546" in 5.496411588s (5.496420036s including waiting) openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing openshift-monitoring 58m Normal Created pod/cluster-monitoring-operator-78777bc588-rhggh Created container cluster-monitoring-operator openshift-marketplace 58m Normal Pulled pod/marketplace-operator-554c77d6df-2q9k5 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e8bda93aae5c360f971e4532706ab6a95eb260e026a6704f837016cab6525fb" in 5.506104078s (5.506139678s including waiting) openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing openshift-image-registry 58m Normal Started pod/cluster-image-registry-operator-868788f8c6-frhj8 Started container cluster-image-registry-operator openshift-machine-api 58m Normal Pulled pod/cluster-baremetal-operator-cb6794dd9-8bqk2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a4f74849d28d578b213bb837750fbc967abe1cf433ad7611dde27be1f15baf36" in 5.544214251s (5.544223852s including waiting) openshift-image-registry 58m Normal LeaderElection lease/openshift-master-controllers cluster-image-registry-operator-868788f8c6-frhj8_c348a043-cc45-4767-aab6-f2b0b934cf18 became leader openshift-monitoring 58m Normal Started pod/cluster-monitoring-operator-78777bc588-rhggh Started container cluster-monitoring-operator openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to 
the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" openshift-monitoring 58m Normal NoValidCertificateFound deployment/cluster-monitoring-operator No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates openshift-monitoring 58m Normal CSRCreated deployment/cluster-monitoring-operator A csr "system:openshift:openshift-monitoring-pprmd" is created for OpenShiftMonitoringClientCertRequester openshift-monitoring 58m Normal ServiceAccountCreated deployment/cluster-monitoring-operator Created ServiceAccount/prometheus-operator -n openshift-monitoring because it was missing openshift-monitoring 58m Normal ServiceAccountCreated deployment/cluster-monitoring-operator Created ServiceAccount/prometheus-operator-admission-webhook -n openshift-monitoring because it was missing openshift-image-registry 58m Normal LeaderElection configmap/openshift-master-controllers cluster-image-registry-operator-868788f8c6-frhj8_c348a043-cc45-4767-aab6-f2b0b934cf18 became leader openshift-authentication-operator 58m Normal ObserveAPIAudiences deployment/authentication-operator service account issuer changed from to https://kubernetes.default.svc openshift-machine-api 58m Normal Created pod/cluster-baremetal-operator-cb6794dd9-8bqk2 Created container baremetal-kube-rbac-proxy openshift-authentication-operator 58m Normal ObserveTLSSecurityProfile deployment/authentication-operator cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] openshift-kube-apiserver-operator 58m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-serving-certkey-2 -n 
openshift-kube-apiserver because it was missing openshift-image-registry 58m Warning FastControllerResync deployment/cluster-image-registry-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 58m Normal ObserveTLSSecurityProfile deployment/authentication-operator minTLSVersion changed to VersionTLS12 kube-system 58m Normal CSRApproval pod/bootstrap-kube-controller-manager-ip-10-0-8-110 The CSR "system:openshift:openshift-monitoring-pprmd" has been approved openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-monitoring 58m Normal NoPods poddisruptionbudget/prometheus-operator-admission-webhook No matching pods found openshift-kube-apiserver-operator 58m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing openshift-machine-api 58m Normal Started pod/cluster-autoscaler-operator-7fcffdb7c8-g4w4m Started container cluster-autoscaler-operator openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler 58m Normal Killing pod/installer-4-ip-10-0-239-132.ec2.internal Stopping container installer openshift-monitoring 58m Normal ScalingReplicaSet deployment/prometheus-operator-admission-webhook Scaled up replica set prometheus-operator-admission-webhook-5c549f4449 to 2 openshift-authentication-operator 58m Normal ObservedConfigChanged deployment/authentication-operator Writing updated section ("oauthAPIServer") of observed config: " map[string]any(\n- \tnil,\n+ \t{\n+ \t\t\"apiServerArguments\": map[string]any{\n+ \t\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n+ \t\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+ \t\t\t\"tls-cipher-suites\": []any{\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...),\n+ \t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_S\"...),\n+ \t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM\"...),\n+ \t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_S\"...), ...,\n+ \t\t\t},\n+ \t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+ \t\t},\n+ \t},\n )\n" openshift-kube-controller-manager-operator 58m Normal MasterNodeObserved deployment/kube-controller-manager-operator Observed new master node ip-10-0-239-132.ec2.internal openshift-kube-controller-manager-operator 58m Normal MasterNodeObserved deployment/kube-controller-manager-operator Observed new master node ip-10-0-197-197.ec2.internal openshift-monitoring 58m Normal SuccessfulCreate replicaset/prometheus-operator-admission-webhook-5c549f4449 Created pod: prometheus-operator-admission-webhook-5c549f4449-v9x8h openshift-kube-scheduler-operator 58m Normal OperatorStatusChanged 
deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 5" openshift-kube-controller-manager-operator 58m Normal MasterNodeObserved deployment/kube-controller-manager-operator Observed new master node ip-10-0-140-6.ec2.internal openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-monitoring 58m Normal SuccessfulCreate replicaset/prometheus-operator-admission-webhook-5c549f4449 Created pod: prometheus-operator-admission-webhook-5c549f4449-d5c7w openshift-monitoring 58m Normal ClientCertificateCreated deployment/cluster-monitoring-operator A new client certificate for OpenShiftMonitoringClientCertRequester is available openshift-machine-api 58m Normal Started pod/cluster-baremetal-operator-cb6794dd9-8bqk2 Started container baremetal-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-apiserver-operator 58m Warning RequiredInstallerResourcesMissing deployment/kube-apiserver-operator secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1 openshift-cluster-csi-drivers 58m Normal AddedInterface pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Add eth0 [10.128.0.15/23] from ovn-kubernetes openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Started container csi-driver openshift-kube-apiserver-operator 58m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Started container driver-kube-rbac-proxy openshift-marketplace 58m Warning ProbeError pod/marketplace-operator-554c77d6df-2q9k5 Liveness probe 
error: Get "http://10.130.0.18:8080/healthz": dial tcp 10.130.0.18:8080: connect: connection refused... openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" already present on machine openshift-marketplace 58m Warning Unhealthy pod/marketplace-operator-554c77d6df-2q9k5 Liveness probe failed: Get "http://10.130.0.18:8080/healthz": dial tcp 10.130.0.18:8080: connect: connection refused openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Created container csi-driver openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77941761aca0cba770d56fcf4d213512b4dd959aa49d3f50c9da02a7aee8d62e" already present on machine openshift-kube-controller-manager-operator 58m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 0 to 2 because node ip-10-0-239-132.ec2.internal static pod not found openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Created container driver-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Created container csi-provisioner openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2"),Available message changed from "StaticPodsAvailable: 0 nodes are active; " to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Started container csi-provisioner openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Created container csi-resizer openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66daa08f96501fa939342eafe2de7be5307656a3ff3ec9bde82664905c695bb6" already present on machine openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Created container csi-attacher openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Started container attacher-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Created container attacher-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Pulled 
pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:726ed98ed8df6da72ea0aaecf62714470ad60d9e5665b65286271e92e4f46c1d" already present on machine openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Created container resizer-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Started container csi-attacher openshift-kube-scheduler-operator 58m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-cluster-csi-drivers 58m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:281a8f2be0f5afefc956d758928a761a97ac9a5b3e1f4f5785717906d791a5e3" already present on machine openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Started container csi-resizer openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Started container resizer-kube-rbac-proxy openshift-kube-scheduler-operator 58m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-5-ip-10-0-239-132.ec2.internal -n openshift-kube-scheduler because it was missing openshift-cluster-csi-drivers 58m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Started container provisioner-kube-rbac-proxy openshift-cluster-csi-drivers 58m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Created container provisioner-kube-rbac-proxy openshift-controller-manager 58m Normal ScalingReplicaSet deployment/controller-manager Scaled up replica set controller-manager-77cd478b57 to 1 from 0 openshift-kube-controller-manager-operator 58m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-2-ip-10-0-239-132.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-controller-manager 58m Normal SuccessfulCreate replicaset/controller-manager-77cd478b57 Created pod: controller-manager-77cd478b57-4s2qm openshift-controller-manager 58m Normal ScalingReplicaSet deployment/controller-manager Scaled down replica set controller-manager-78f477fd5c to 0 from 1 openshift-cluster-storage-operator 58m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing changed from True to False ("AWSEBSCSIDriverOperatorCRProgressing: All is well") openshift-controller-manager 58m Normal SuccessfulDelete replicaset/controller-manager-78f477fd5c Deleted pod: controller-manager-78f477fd5c-r8mcx 
openshift-route-controller-manager 58m Normal SuccessfulCreate replicaset/route-controller-manager-795466d555 Created pod: route-controller-manager-795466d555-hwftm openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" openshift-route-controller-manager 58m Normal ScalingReplicaSet deployment/route-controller-manager Scaled up replica set route-controller-manager-795466d555 to 1 from 0 openshift-controller-manager-operator 58m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 
5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" openshift-route-controller-manager 58m Normal SuccessfulDelete replicaset/route-controller-manager-7d7696bfd4 Deleted pod: route-controller-manager-7d7696bfd4-zpkmp openshift-route-controller-manager 58m Normal ScalingReplicaSet deployment/route-controller-manager Scaled down replica set route-controller-manager-7d7696bfd4 to 0 from 1 openshift-kube-scheduler 58m Normal AddedInterface pod/installer-5-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.17/23] from ovn-kubernetes openshift-kube-scheduler 58m Normal Pulled pod/installer-5-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-controller-manager-operator 58m Normal ConfigMapCreated deployment/openshift-controller-manager-operator Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing openshift-kube-scheduler 58m Normal Started pod/installer-5-ip-10-0-239-132.ec2.internal Started container installer openshift-kube-scheduler 58m Normal Created pod/installer-5-ip-10-0-239-132.ec2.internal Created container installer openshift-controller-manager-operator 58m Normal ConfigMapCreated deployment/openshift-controller-manager-operator Created ConfigMap/client-ca -n openshift-controller-manager because it was missing openshift-kube-apiserver-operator 58m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing openshift-kube-controller-manager 58m Normal Pulling pod/installer-2-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" openshift-kube-controller-manager 58m Normal AddedInterface pod/installer-2-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.18/23] from ovn-kubernetes openshift-kube-scheduler-operator 58m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-kube-scheduler-operator 58m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" to 
"NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-kube-controller-manager 58m Normal Pulled pod/installer-2-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" in 1.849319283s (1.849334558s including waiting) openshift-kube-scheduler-operator 58m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" openshift-kube-scheduler-operator 58m Normal Created pod/openshift-kube-scheduler-operator-c98d57874-wj7tl Created container kube-scheduler-operator-container openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
openshift-authentication-operator 58m Normal ObservedConfigChanged deployment/authentication-operator Writing updated section ("oauthServer") of observed config: " map[string]any{\n \t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]any{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.qeaisrhods-c13.abmw.s1.devshift.org:6443\"), \"templates\": map[string]any{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n \t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]any{\n \t\t\"cipherSuites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n \t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t\"namedCertificates\": []any{\n+ \t\t\tmap[string]any{\n+ \t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"names\": []any{string(\"*.apps.qeaisrhods-c13.abmw.s1.de\"...)},\n+ \t\t\t},\n+ \t\t},\n \t},\n \t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n }\n" openshift-kube-apiserver-operator 58m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" 
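
The ObserveTLSSecurityProfile and ObservedConfigChanged ("oauthServer") events above show the authentication operator projecting the cluster TLS security profile (cipherSuites, minTLSVersion) and the router named certificates into its observed config. As a rough illustration only, assuming kubeconfig access, those values can be read back from the cluster APIServer config and from the authentication operator CR; the field path under observedConfig mirrors the section names printed in the event text and should be treated as an assumption.

# Rough illustration, assuming kubeconfig access. Reads the cluster-wide TLS
# security profile and the oauthServer section of the authentication operator's
# observed config (the section named in the ObservedConfigChanged event above).
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

apiserver = api.get_cluster_custom_object("config.openshift.io", "v1", "apiservers", "cluster")
print("tlsSecurityProfile:", apiserver.get("spec", {}).get("tlsSecurityProfile"))  # empty usually means the default (Intermediate) profile

auth = api.get_cluster_custom_object("operator.openshift.io", "v1", "authentications", "cluster")
serving = auth.get("spec", {}).get("observedConfig", {}).get("oauthServer", {}).get("servingInfo", {})
print("minTLSVersion:", serving.get("minTLSVersion"))
print("cipherSuites:", serving.get("cipherSuites"))
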
openshift-kube-scheduler-operator 58m Normal Pulled pod/openshift-kube-scheduler-operator-c98d57874-wj7tl Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-authentication-operator 58m Warning ObservedConfigWriteError deployment/authentication-operator Failed to write observed config: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again openshift-authentication-operator 58m Normal ObserveRouterSecret deployment/authentication-operator namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.qeaisrhods-c13.abmw.s1.devshift.org", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.qeaisrhods-c13.abmw.s1.devshift.org", "names":[]interface {}{"*.apps.qeaisrhods-c13.abmw.s1.devshift.org"}}} openshift-kube-scheduler-operator 58m Normal Started pod/openshift-kube-scheduler-operator-c98d57874-wj7tl Started container kube-scheduler-operator-container openshift-kube-controller-manager-operator 58m Normal ConfigMapUpdated deployment/kube-controller-manager-operator Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager:... openshift-kube-scheduler-operator 58m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 58m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 58m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 58m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "PruneController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 58m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "NodeController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 58m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 58m Normal LeaderElection lease/openshift-cluster-kube-scheduler-operator-lock openshift-kube-scheduler-operator-c98d57874-wj7tl_bd55b66f-0a00-4f47-92fe-4d86bf76c0e8 became leader openshift-kube-scheduler-operator 58m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "GuardController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 58m Normal LeaderElection configmap/openshift-cluster-kube-scheduler-operator-lock openshift-kube-scheduler-operator-c98d57874-wj7tl_bd55b66f-0a00-4f47-92fe-4d86bf76c0e8 became leader openshift-route-controller-manager 58m Normal Pulling pod/route-controller-manager-7d7696bfd4-zpkmp Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" openshift-controller-manager 58m Normal Pulling pod/controller-manager-64556d4c99-kxhn7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" openshift-route-controller-manager 58m Normal Pulling pod/route-controller-manager-7d7696bfd4-2tvnf Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" openshift-controller-manager 58m Normal AddedInterface pod/controller-manager-64556d4c99-kxhn7 Add eth0 [10.128.0.8/23] from ovn-kubernetes openshift-controller-manager 58m Normal Pulling pod/controller-manager-64556d4c99-46tn2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" openshift-controller-manager 58m Normal AddedInterface pod/controller-manager-64556d4c99-46tn2 Add eth0 [10.130.0.36/23] from ovn-kubernetes openshift-route-controller-manager 58m Normal AddedInterface pod/route-controller-manager-7d7696bfd4-zpkmp Add eth0 [10.130.0.37/23] from ovn-kubernetes openshift-route-controller-manager 58m Normal AddedInterface pod/route-controller-manager-7d7696bfd4-2tvnf Add eth0 [10.128.0.9/23] from ovn-kubernetes openshift-kube-controller-manager-operator 58m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/revision-status-3 -n openshift-kube-controller-manager because it was missing openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
openshift-kube-controller-manager-operator 58m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 58m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing openshift-kube-apiserver-operator 58m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" openshift-kube-controller-manager-operator 58m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing openshift-route-controller-manager 58m Normal Killing pod/route-controller-manager-7d7696bfd4-2tvnf Stopping container route-controller-manager openshift-route-controller-manager 58m Normal Started pod/route-controller-manager-7d7696bfd4-2tvnf Started container route-controller-manager openshift-kube-scheduler-operator 58m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" openshift-route-controller-manager 58m Normal LeaderElection lease/openshift-route-controllers route-controller-manager-7d7696bfd4-2tvnf became leader openshift-route-controller-manager 58m Normal LeaderElection lease/openshift-route-controllers route-controller-manager-7d7696bfd4-zpkmp became leader openshift-controller-manager 58m Normal Pulled 
pod/controller-manager-64556d4c99-46tn2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" in 2.875491911s (2.875505277s including waiting) openshift-route-controller-manager 58m Normal Created pod/route-controller-manager-7d7696bfd4-2tvnf Created container route-controller-manager openshift-controller-manager 58m Normal Started pod/controller-manager-64556d4c99-46tn2 Started container controller-manager openshift-route-controller-manager 58m Normal Started pod/route-controller-manager-7d7696bfd4-zpkmp Started container route-controller-manager openshift-route-controller-manager 58m Normal Created pod/route-controller-manager-7d7696bfd4-zpkmp Created container route-controller-manager openshift-controller-manager 58m Normal Started pod/controller-manager-64556d4c99-kxhn7 Started container controller-manager openshift-controller-manager 58m Normal LeaderElection lease/openshift-master-controllers controller-manager-64556d4c99-46tn2 became leader openshift-controller-manager 58m Normal LeaderElection configmap/openshift-master-controllers controller-manager-64556d4c99-46tn2 became leader openshift-controller-manager 58m Normal Created pod/controller-manager-64556d4c99-kxhn7 Created container controller-manager openshift-controller-manager 58m Normal Pulled pod/controller-manager-64556d4c99-kxhn7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" in 2.788983273s (2.788993603s including waiting) openshift-route-controller-manager 58m Normal Pulled pod/route-controller-manager-7d7696bfd4-zpkmp Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" in 2.835411627s (2.835424339s including waiting) openshift-controller-manager 58m Normal Created pod/controller-manager-64556d4c99-46tn2 Created container controller-manager openshift-kube-controller-manager-operator 58m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing openshift-route-controller-manager 58m Normal Pulled pod/route-controller-manager-7d7696bfd4-2tvnf Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" in 2.744773241s (2.744787929s including waiting) openshift-controller-manager 58m Normal LeaderElection configmap/openshift-master-controllers controller-manager-64556d4c99-kxhn7 became leader openshift-controller-manager 58m Normal LeaderElection lease/openshift-master-controllers controller-manager-64556d4c99-kxhn7 became leader openshift-controller-manager 58m Normal Killing pod/controller-manager-64556d4c99-kxhn7 Stopping container controller-manager openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest 
version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" openshift-controller-manager 58m Normal Killing pod/controller-manager-64556d4c99-46tn2 Stopping container controller-manager openshift-kube-controller-manager-operator 58m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-route-controller-manager 58m Normal Killing pod/route-controller-manager-7d7696bfd4-zpkmp Stopping container route-controller-manager openshift-kube-controller-manager-operator 58m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing openshift-route-controller-manager 58m Normal AddedInterface pod/route-controller-manager-7d7696bfd4-z2bjq Add eth0 [10.129.0.9/23] from ovn-kubernetes openshift-route-controller-manager 58m Normal Pulling 
pod/route-controller-manager-7d7696bfd4-z2bjq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" openshift-controller-manager 58m Normal Started pod/controller-manager-5ff6588dbb-fwcgz Started container controller-manager openshift-controller-manager 58m Normal Created pod/controller-manager-5ff6588dbb-fwcgz Created container controller-manager openshift-controller-manager 58m Normal AddedInterface pod/controller-manager-64556d4c99-8fw47 Add eth0 [10.129.0.8/23] from ovn-kubernetes openshift-kube-controller-manager 58m Normal Created pod/installer-2-ip-10-0-239-132.ec2.internal Created container installer openshift-controller-manager 58m Normal Pulling pod/controller-manager-64556d4c99-8fw47 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" openshift-controller-manager 58m Normal Pulled pod/controller-manager-5ff6588dbb-fwcgz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" already present on machine openshift-kube-controller-manager 58m Normal Started pod/installer-2-ip-10-0-239-132.ec2.internal Started container installer openshift-controller-manager 58m Normal AddedInterface pod/controller-manager-5ff6588dbb-fwcgz Add eth0 [10.128.0.16/23] from ovn-kubernetes openshift-controller-manager 58m Normal LeaderElection configmap/openshift-master-controllers controller-manager-5ff6588dbb-fwcgz became leader openshift-controller-manager 58m Normal LeaderElection lease/openshift-master-controllers controller-manager-5ff6588dbb-fwcgz became leader openshift-marketplace 58m Normal AddedInterface pod/certified-operators-77trp Add eth0 [10.128.0.18/23] from ovn-kubernetes openshift-marketplace 58m Normal Pulling pod/community-operators-7jr7c Pulling image "registry.redhat.io/redhat/community-operator-index:v4.12" openshift-marketplace 58m Normal AddedInterface pod/community-operators-7jr7c Add eth0 [10.128.0.19/23] from ovn-kubernetes openshift-marketplace 58m Normal Pulling pod/certified-operators-77trp Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.12" openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-marketplace 58m Normal AddedInterface pod/redhat-operators-jzt5b Add eth0 [10.128.0.17/23] from ovn-kubernetes openshift-marketplace 58m Normal Pulling 
pod/redhat-operators-jzt5b Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.12" openshift-marketplace 58m Normal AddedInterface pod/redhat-marketplace-crqrm Add eth0 [10.128.0.20/23] from ovn-kubernetes openshift-kube-controller-manager-operator 58m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused" openshift-route-controller-manager 58m Normal LeaderElection lease/openshift-route-controllers route-controller-manager-795466d555-hwftm became leader openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version 
and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused" default 58m Normal ConfigDriftMonitorStarted node/ip-10-0-197-197.ec2.internal Config Drift Monitor started, watching against rendered-master-ff215a8818ae4a038b75bd4a838f2d00 openshift-route-controller-manager 58m Normal Started pod/route-controller-manager-795466d555-hwftm Started container route-controller-manager default 58m Normal NodeDone node/ip-10-0-197-197.ec2.internal Setting node ip-10-0-197-197.ec2.internal, currentConfig rendered-master-ff215a8818ae4a038b75bd4a838f2d00 to Done openshift-route-controller-manager 58m Normal Pulled pod/route-controller-manager-795466d555-hwftm Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" already present on machine openshift-route-controller-manager 58m Warning Unhealthy pod/route-controller-manager-795466d555-hwftm Readiness probe failed: Get "https://10.128.0.21:8443/healthz": dial tcp 10.128.0.21:8443: connect: connection refused openshift-controller-manager 58m Normal AddedInterface pod/controller-manager-77cd478b57-4s2qm Add eth0 [10.130.0.40/23] from ovn-kubernetes openshift-route-controller-manager 58m Normal Created pod/route-controller-manager-795466d555-hwftm Created container route-controller-manager openshift-route-controller-manager 58m Normal AddedInterface pod/route-controller-manager-795466d555-hwftm Add eth0 [10.128.0.21/23] from ovn-kubernetes openshift-controller-manager 58m Normal Pulled pod/controller-manager-77cd478b57-4s2qm Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" already present on machine default 58m Normal Uncordon node/ip-10-0-197-197.ec2.internal Update completed for config rendered-master-ff215a8818ae4a038b75bd4a838f2d00 and node has been uncordoned openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; 
please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " openshift-marketplace 58m Normal Pulling pod/redhat-marketplace-crqrm Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" openshift-kube-controller-manager-operator 58m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing openshift-controller-manager 58m Normal Created pod/controller-manager-77cd478b57-4s2qm Created container controller-manager openshift-route-controller-manager 58m Warning ProbeError pod/route-controller-manager-795466d555-hwftm Readiness probe error: Get "https://10.128.0.21:8443/healthz": dial tcp 10.128.0.21:8443: connect: connection refused... 
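
Note: the Unhealthy and ProbeError events above show the kubelet failing the readiness check for route-controller-manager-795466d555-hwftm at https://10.128.0.21:8443/healthz while the container was still coming up. A minimal sketch for inspecting the configured probe and pulling only these probe events (pod, namespace, and deployment names are taken from the events above; the flags are standard oc/kubectl options):

  # Show the readiness probe configured on the route-controller-manager deployment
  $ oc get deployment route-controller-manager -n openshift-route-controller-manager \
      -o jsonpath='{.spec.template.spec.containers[0].readinessProbe}'

  # List only the probe-failure events for this pod
  $ oc get events -n openshift-route-controller-manager \
      --field-selector involvedObject.name=route-controller-manager-795466d555-hwftm,reason=Unhealthy
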
openshift-controller-manager 58m Normal Started pod/controller-manager-77cd478b57-4s2qm Started container controller-manager openshift-kube-apiserver-operator 58m Warning ConfigMapCreateFailed deployment/kube-apiserver-operator Failed to create ConfigMap/kube-apiserver-server-ca -n openshift-config-managed: configmaps "kube-apiserver-server-ca" already exists openshift-kube-apiserver-operator 58m Warning RequiredInstallerResourcesMissing deployment/kube-apiserver-operator configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1 openshift-kube-storage-version-migrator-operator 58m Normal Created pod/kube-storage-version-migrator-operator-7f8b95cf5f-x5hzl Created container kube-storage-version-migrator-operator openshift-controller-manager 58m Normal Started pod/controller-manager-64556d4c99-8fw47 Started container controller-manager openshift-route-controller-manager 58m Normal ScalingReplicaSet deployment/route-controller-manager Scaled up replica set route-controller-manager-795466d555 to 2 from 1 openshift-route-controller-manager 58m Normal ScalingReplicaSet deployment/route-controller-manager Scaled down replica set route-controller-manager-6b76fb6ddf to 0 from 1 openshift-kube-storage-version-migrator-operator 58m Normal Started pod/kube-storage-version-migrator-operator-7f8b95cf5f-x5hzl Started container kube-storage-version-migrator-operator openshift-kube-storage-version-migrator-operator 58m Normal Pulled pod/kube-storage-version-migrator-operator-7f8b95cf5f-x5hzl Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc8e1a30ec145b1e91f862880b9866d48abe8056fe69edd94d760739137b6d4a" already present on machine openshift-controller-manager 58m Normal Created pod/controller-manager-64556d4c99-8fw47 Created container controller-manager openshift-route-controller-manager 58m Normal Pulled pod/route-controller-manager-7d7696bfd4-z2bjq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" in 2.86655045s (2.866565638s including waiting) openshift-route-controller-manager 58m Normal Created pod/route-controller-manager-7d7696bfd4-z2bjq Created container route-controller-manager openshift-controller-manager 58m Normal Pulled pod/controller-manager-64556d4c99-8fw47 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" in 2.637737654s (2.637750009s including waiting) openshift-kube-apiserver-operator 58m Warning ObserveStorageFailed deployment/kube-apiserver-operator configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found openshift-kube-apiserver-operator 58m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: 
bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" openshift-route-controller-manager 58m Normal SuccessfulCreate replicaset/route-controller-manager-795466d555 Created pod: route-controller-manager-795466d555-dxq7d openshift-route-controller-manager 58m Normal Started pod/route-controller-manager-7d7696bfd4-z2bjq Started container route-controller-manager openshift-kube-apiserver-operator 58m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing openshift-route-controller-manager 58m Normal SuccessfulDelete replicaset/route-controller-manager-6b76fb6ddf Deleted pod: route-controller-manager-6b76fb6ddf-hqd6b default 58m Normal AnnotationChange machineconfigpool/master Node ip-10-0-140-6.ec2.internal now has machineconfiguration.openshift.io/reason= default 58m Normal Uncordon node/ip-10-0-140-6.ec2.internal Update completed for config rendered-master-ff215a8818ae4a038b75bd4a838f2d00 and node has been uncordoned openshift-route-controller-manager 58m Normal Killing pod/route-controller-manager-7d7696bfd4-z2bjq Stopping container route-controller-manager openshift-controller-manager 58m Normal SuccessfulDelete replicaset/controller-manager-579956b947 Deleted pod: controller-manager-579956b947-ql6fs openshift-route-controller-manager 58m Normal Started pod/route-controller-manager-795466d555-dxq7d Started container route-controller-manager openshift-route-controller-manager 58m Normal Pulled pod/route-controller-manager-795466d555-dxq7d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" already present on machine openshift-controller-manager 58m Normal SuccessfulCreate replicaset/controller-manager-77cd478b57 Created pod: controller-manager-77cd478b57-z4w9g openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well" openshift-route-controller-manager 58m Normal SuccessfulCreate replicaset/route-controller-manager-795466d555 Created pod: route-controller-manager-795466d555-57pst openshift-controller-manager 58m Normal Killing pod/controller-manager-64556d4c99-8fw47 Stopping container controller-manager openshift-route-controller-manager 58m Normal 
Created pod/route-controller-manager-795466d555-dxq7d Created container route-controller-manager openshift-route-controller-manager 58m Normal SuccessfulDelete replicaset/route-controller-manager-678c989865 Deleted pod: route-controller-manager-678c989865-fj78v openshift-route-controller-manager 58m Normal AddedInterface pod/route-controller-manager-795466d555-dxq7d Add eth0 [10.130.0.41/23] from ovn-kubernetes openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " openshift-kube-controller-manager-operator 58m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing default 58m Normal ConfigDriftMonitorStarted node/ip-10-0-140-6.ec2.internal Config Drift Monitor started, watching against rendered-master-ff215a8818ae4a038b75bd4a838f2d00 default 58m Normal NodeDone node/ip-10-0-140-6.ec2.internal Setting node ip-10-0-140-6.ec2.internal, currentConfig rendered-master-ff215a8818ae4a038b75bd4a838f2d00 to Done openshift-authentication-operator 58m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or 
directory\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " openshift-kube-controller-manager-operator 58m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing openshift-route-controller-manager 58m Normal AddedInterface pod/route-controller-manager-795466d555-57pst Add eth0 [10.129.0.20/23] from ovn-kubernetes openshift-route-controller-manager 58m Normal Created pod/route-controller-manager-795466d555-57pst Created container route-controller-manager openshift-controller-manager 58m Normal Started pod/controller-manager-77cd478b57-z4w9g Started container controller-manager openshift-route-controller-manager 58m Normal Started pod/route-controller-manager-795466d555-57pst Started container route-controller-manager openshift-controller-manager 58m Normal Created pod/controller-manager-77cd478b57-z4w9g Created container controller-manager openshift-kube-controller-manager-operator 58m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 58m Normal RevisionTriggered deployment/kube-controller-manager-operator new revision 3 triggered by "configmap/serviceaccount-ca has changed" openshift-kube-controller-manager-operator 58m Normal RevisionCreate deployment/kube-controller-manager-operator Revision 2 created because configmap/serviceaccount-ca has changed openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node 
ip-10-0-197-197.ec2.internal]\nRevisionControllerDegraded: conflicting latestAvailableRevision 3\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-controller-manager 58m Normal Pulled pod/controller-manager-77cd478b57-z4w9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" already present on machine openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nRevisionControllerDegraded: conflicting latestAvailableRevision 3\nNodeControllerDegraded: All master nodes are ready" openshift-controller-manager 58m Normal AddedInterface pod/controller-manager-77cd478b57-z4w9g Add eth0 [10.129.0.19/23] from ovn-kubernetes openshift-route-controller-manager 58m Normal Pulled pod/route-controller-manager-795466d555-57pst Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" already present on machine openshift-kube-controller-manager-operator 58m Normal SATokenSignerControllerStuck deployment/kube-controller-manager-operator unexpected addresses: 10.0.8.110 openshift-controller-manager 58m Normal SuccessfulDelete replicaset/controller-manager-5ff6588dbb Deleted pod: controller-manager-5ff6588dbb-fwcgz openshift-controller-manager 58m Normal SuccessfulCreate replicaset/controller-manager-77cd478b57 Created pod: controller-manager-77cd478b57-m9m5f openshift-controller-manager 58m Normal Killing pod/controller-manager-5ff6588dbb-fwcgz Stopping container controller-manager openshift-kube-apiserver-operator 58m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: 
bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" openshift-kube-apiserver-operator 58m Normal RevisionCreate deployment/kube-apiserver-operator Revision 1 created because configmap "kube-apiserver-pod-1" not found openshift-cluster-storage-operator 58m Normal Pulled pod/cluster-storage-operator-fb5868667-cclnx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2a4719dd49c67aa02ad187264977e0b64ad2b0d6725e99b1d460567663961ef4" already present on machine openshift-cluster-storage-operator 58m Normal Created pod/cluster-storage-operator-fb5868667-cclnx Created container cluster-storage-operator openshift-cluster-storage-operator 58m Normal Started pod/cluster-storage-operator-fb5868667-cclnx Started container cluster-storage-operator openshift-kube-controller-manager-operator 58m Normal Created pod/kube-controller-manager-operator-655bd6977c-z9mb9 Created container kube-controller-manager-operator openshift-kube-controller-manager-operator 58m Normal Pulled pod/kube-controller-manager-operator-655bd6977c-z9mb9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-scheduler-operator 58m Normal RevisionTriggered deployment/openshift-kube-scheduler-operator new revision 6 triggered by "configmap/serviceaccount-ca has changed" openshift-machine-api 58m Normal Update machine/qeaisrhods-c13-28wr5-master-1 Updated Machine qeaisrhods-c13-28wr5-master-1 openshift-route-controller-manager 58m Normal SuccessfulCreate replicaset/route-controller-manager-7ff89c67c Created pod: route-controller-manager-7ff89c67c-2bq47 openshift-kube-controller-manager-operator 58m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "NodeController" resync interval is set to 0s which might lead to client request throttling openshift-controller-manager 58m Normal SuccessfulCreate replicaset/controller-manager-6fcd58c8dc Created pod: controller-manager-6fcd58c8dc-wdb9f openshift-kube-controller-manager-operator 58m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 58m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "InstallerController" 
resync interval is set to 0s which might lead to client request throttling openshift-controller-manager-operator 58m Normal DeploymentUpdated deployment/openshift-controller-manager-operator Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed openshift-kube-scheduler-operator 58m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/revision-status-6 -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 58m Normal LeaderElection lease/kube-controller-manager-operator-lock kube-controller-manager-operator-655bd6977c-z9mb9_8f7f64c0-afc9-44ac-b01f-76ec9edec9b6 became leader openshift-kube-controller-manager-operator 58m Normal LeaderElection configmap/kube-controller-manager-operator-lock kube-controller-manager-operator-655bd6977c-z9mb9_8f7f64c0-afc9-44ac-b01f-76ec9edec9b6 became leader openshift-kube-controller-manager-operator 58m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "GuardController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 58m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "PruneController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 58m Normal Started pod/kube-controller-manager-operator-655bd6977c-z9mb9 Started container kube-controller-manager-operator openshift-machine-api 58m Normal Update machine/qeaisrhods-c13-28wr5-master-2 Updated Machine qeaisrhods-c13-28wr5-master-2 openshift-controller-manager-operator 58m Normal DeploymentUpdated deployment/openshift-controller-manager-operator Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed openshift-machine-api 58m Normal LeaderElection lease/cluster-autoscaler-operator-leader cluster-autoscaler-operator-7fcffdb7c8-g4w4m_e29ae286-79e5-4af1-84be-2c798eef5362 became leader openshift-controller-manager-operator 58m Normal ConfigMapUpdated deployment/openshift-controller-manager-operator Updated ConfigMap/config -n openshift-controller-manager:... openshift-controller-manager-operator 58m Normal ConfigMapUpdated deployment/openshift-controller-manager-operator Updated ConfigMap/config -n openshift-route-controller-manager:... 
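
Note: the RequiredInstallerResourcesMissing and OperatorStatusChanged events in this stretch describe the static-pod installer waiting for the revision-3 ConfigMaps and Secrets (resources suffixed "-3") in openshift-kube-controller-manager. A sketch for checking what has landed and how the clusteroperator currently reports it (resource names come from the events above; output options are standard oc/kubectl):

  # ConfigMaps created for revision 3 so far
  $ oc get configmaps -n openshift-kube-controller-manager | grep -- '-3$'

  # Current Degraded message on the clusteroperator
  $ oc get clusteroperator kube-controller-manager \
      -o jsonpath='{.status.conditions[?(@.type=="Degraded")].message}'
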
openshift-controller-manager 58m Normal SuccessfulDelete replicaset/controller-manager-77cd478b57 Deleted pod: controller-manager-77cd478b57-m9m5f openshift-route-controller-manager 58m Normal SuccessfulDelete replicaset/route-controller-manager-795466d555 Deleted pod: route-controller-manager-795466d555-57pst openshift-controller-manager-operator 58m Normal OperatorVersionChanged deployment/openshift-controller-manager-operator clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.13.0-rc.0" openshift-controller-manager-operator 58m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: observed generation is 5, desired generation is 6.\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.",Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.13.0-rc.0"}] openshift-kube-controller-manager-operator 58m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 58m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-3,config-3,controller-manager-kubeconfig-3,kube-controller-cert-syncer-kubeconfig-3,kube-controller-manager-pod-3,recycler-config-3,service-ca-3,serviceaccount-ca-3]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are 
ready",Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" openshift-etcd 58m Normal StaticPodInstallerCompleted pod/installer-2-ip-10-0-239-132.ec2.internal Successfully installed revision 2 openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-3,config-3,controller-manager-kubeconfig-3,kube-controller-cert-syncer-kubeconfig-3,kube-controller-manager-pod-3,recycler-config-3,service-ca-3,serviceaccount-ca-3]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-3,config-3,controller-manager-kubeconfig-3,kube-controller-cert-syncer-kubeconfig-3,kube-controller-manager-pod-3,recycler-config-3,service-ca-3,serviceaccount-ca-3]\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-kube-scheduler-operator 58m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-pod-6 -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 58m Warning RequiredInstallerResourcesMissing deployment/kube-controller-manager-operator configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-3,config-3,controller-manager-kubeconfig-3,kube-controller-cert-syncer-kubeconfig-3,kube-controller-manager-pod-3,recycler-config-3,service-ca-3,serviceaccount-ca-3 openshift-route-controller-manager 58m Normal Killing pod/route-controller-manager-795466d555-57pst Stopping container route-controller-manager openshift-etcd 58m Normal Pulling pod/etcd-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" openshift-route-controller-manager 58m Normal Pulled pod/route-controller-manager-7ff89c67c-2bq47 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" already present on machine 
openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-3,config-3,controller-manager-kubeconfig-3,kube-controller-cert-syncer-kubeconfig-3,kube-controller-manager-pod-3,recycler-config-3,service-ca-3,serviceaccount-ca-3]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-3,config-3,controller-manager-kubeconfig-3,kube-controller-cert-syncer-kubeconfig-3,kube-controller-manager-pod-3,recycler-config-3,service-ca-3,serviceaccount-ca-3]\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-kube-apiserver-operator 58m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" openshift-route-controller-manager 58m Normal AddedInterface pod/route-controller-manager-7ff89c67c-2bq47 Add eth0 [10.129.0.21/23] from ovn-kubernetes openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-3,config-3,controller-manager-kubeconfig-3,kube-controller-cert-syncer-kubeconfig-3,kube-controller-manager-pod-3,recycler-config-3,service-ca-3,serviceaccount-ca-3]\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get 
\"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-3,config-3,controller-manager-kubeconfig-3,kube-controller-cert-syncer-kubeconfig-3,kube-controller-manager-pod-3,recycler-config-3,service-ca-3,serviceaccount-ca-3]\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-machine-api 58m Normal Create machine/qeaisrhods-c13-28wr5-worker-us-east-1a-tfwzm Created Machine qeaisrhods-c13-28wr5-worker-us-east-1a-tfwzm openshift-kube-apiserver-operator 58m Normal NodeTargetRevisionChanged deployment/kube-apiserver-operator Updating node "ip-10-0-197-197.ec2.internal" from revision 0 to 2 because node ip-10-0-197-197.ec2.internal static pod not found openshift-kube-scheduler-operator 58m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/config-6 -n openshift-kube-scheduler because it was missing openshift-etcd 58m Normal NoPods poddisruptionbudget/etcd-guard-pdb No matching pods found openshift-route-controller-manager 58m Normal Killing pod/route-controller-manager-795466d555-dxq7d Stopping container route-controller-manager openshift-route-controller-manager 58m Normal Started pod/route-controller-manager-7ff89c67c-2bq47 Started container route-controller-manager openshift-route-controller-manager 58m Normal SuccessfulCreate replicaset/route-controller-manager-7ff89c67c Created pod: route-controller-manager-7ff89c67c-8b8g2 openshift-route-controller-manager 58m Normal SuccessfulDelete replicaset/route-controller-manager-795466d555 Deleted pod: route-controller-manager-795466d555-dxq7d openshift-route-controller-manager 58m Normal Created pod/route-controller-manager-7ff89c67c-2bq47 Created container route-controller-manager openshift-etcd 58m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container setup openshift-etcd-operator 58m Normal OperatorVersionChanged deployment/etcd-operator clusteroperator/etcd version "etcd" changed from "" to "4.13.0-rc.0" openshift-etcd-operator 58m Normal OperatorVersionChanged deployment/etcd-operator clusteroperator/etcd version "operator" changed from "" to "4.13.0-rc.0" openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: status.versions changed from [{"raw-internal" "4.13.0-rc.0"}] to [{"raw-internal" "4.13.0-rc.0"} {"etcd" "4.13.0-rc.0"} {"operator" "4.13.0-rc.0"}] openshift-authentication-operator 58m Warning ObserveStorageFailed deployment/authentication-operator configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found openshift-service-ca-operator 58m Normal Started pod/service-ca-operator-7988896c96-5q667 Started container service-ca-operator openshift-service-ca-operator 58m Normal Pulled pod/service-ca-operator-7988896c96-5q667 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7f7cb6554c1dc9b5b3b58f162f592062e5c63bf24c5ed90a62074e117be3f743" already present on machine openshift-etcd 58m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container setup openshift-etcd-operator 58m Normal PodDisruptionBudgetCreated deployment/etcd-operator 
Created PodDisruptionBudget.policy/etcd-guard-pdb -n openshift-etcd because it was missing openshift-service-ca-operator 58m Normal Created pod/service-ca-operator-7988896c96-5q667 Created container service-ca-operator openshift-kube-scheduler-operator 58m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-scheduler because it was missing openshift-etcd 58m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" in 1.649505409s (1.649528917s including waiting) openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-3,config-3,controller-manager-kubeconfig-3,kube-controller-cert-syncer-kubeconfig-3,kube-controller-manager-pod-3,recycler-config-3,service-ca-3,serviceaccount-ca-3]\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-3,config-3,controller-manager-kubeconfig-3,kube-controller-cert-syncer-kubeconfig-3,kube-controller-manager-pod-3,recycler-config-3,service-ca-3,serviceaccount-ca-3]\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-kube-controller-manager-operator 58m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, configmaps: cluster-policy-controller-config-3,config-3,controller-manager-kubeconfig-3,kube-controller-cert-syncer-kubeconfig-3,kube-controller-manager-pod-3,recycler-config-3,service-ca-3,serviceaccount-ca-3]\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup 
thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-etcd 58m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-ensure-env-vars openshift-kube-scheduler-operator 58m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/scheduler-kubeconfig-6 -n openshift-kube-scheduler because it was missing openshift-service-ca-operator 58m Warning FastControllerResync deployment/service-ca-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-service-ca-operator 58m Warning FastControllerResync deployment/service-ca-operator Controller "ServiceCAOperator" resync interval is set to 0s which might lead to client request throttling openshift-service-ca-operator 58m Normal LeaderElection lease/service-ca-operator-lock service-ca-operator-7988896c96-5q667_41197652-6c6f-4475-a8ba-584955027da2 became leader openshift-etcd 58m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-service-ca-operator 58m Normal LeaderElection configmap/service-ca-operator-lock service-ca-operator-7988896c96-5q667_41197652-6c6f-4475-a8ba-584955027da2 became leader openshift-etcd 58m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-ensure-env-vars openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdMembersDegraded: No unhealthy members found" openshift-machine-api 58m Normal Create machine/qeaisrhods-c13-28wr5-worker-us-east-1a-cp7f7 Created Machine qeaisrhods-c13-28wr5-worker-us-east-1a-cp7f7 openshift-etcd 58m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-machine-api 58m Normal Update machine/qeaisrhods-c13-28wr5-master-0 Updated Machine qeaisrhods-c13-28wr5-master-0 default 58m Normal AnnotationChange machineconfigpool/master Node ip-10-0-239-132.ec2.internal now has machineconfiguration.openshift.io/state=Done openshift-apiserver-operator 58m Warning ObserveStorageFailed deployment/openshift-apiserver-operator configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found default 58m Normal Uncordon node/ip-10-0-239-132.ec2.internal Update completed for config 
rendered-master-ff215a8818ae4a038b75bd4a838f2d00 and node has been uncordoned openshift-etcd 58m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-resources-copy openshift-kube-scheduler-operator 58m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-6 -n openshift-kube-scheduler because it was missing default 58m Normal NodeDone node/ip-10-0-239-132.ec2.internal Setting node ip-10-0-239-132.ec2.internal, currentConfig rendered-master-ff215a8818ae4a038b75bd4a838f2d00 to Done openshift-etcd 58m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-resources-copy default 58m Normal AnnotationChange machineconfigpool/master Node ip-10-0-239-132.ec2.internal now has machineconfiguration.openshift.io/reason= default 58m Normal ConfigDriftMonitorStarted node/ip-10-0-239-132.ec2.internal Config Drift Monitor started, watching against rendered-master-ff215a8818ae4a038b75bd4a838f2d00 openshift-route-controller-manager 58m Normal Pulled pod/route-controller-manager-7ff89c67c-8b8g2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" already present on machine openshift-route-controller-manager 58m Normal Created pod/route-controller-manager-7ff89c67c-8b8g2 Created container route-controller-manager openshift-route-controller-manager 58m Normal Started pod/route-controller-manager-7ff89c67c-8b8g2 Started container route-controller-manager openshift-route-controller-manager 58m Normal AddedInterface pod/route-controller-manager-7ff89c67c-8b8g2 Add eth0 [10.130.0.42/23] from ovn-kubernetes openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd 58m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 58m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-route-controller-manager 58m Normal ScalingReplicaSet deployment/route-controller-manager (combined from similar events): Scaled up replica set route-controller-manager-7ff89c67c to 3 from 2 openshift-etcd 58m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-metrics openshift-etcd 58m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-metrics openshift-route-controller-manager 58m Normal Killing 
pod/route-controller-manager-795466d555-hwftm Stopping container route-controller-manager openshift-etcd 58m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd openshift-etcd 58m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd openshift-kube-scheduler-operator 58m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/serving-cert-6 -n openshift-kube-scheduler because it was missing openshift-etcd 58m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 58m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcdctl openshift-etcd 58m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcdctl openshift-route-controller-manager 58m Normal SuccessfulCreate replicaset/route-controller-manager-7ff89c67c Created pod: route-controller-manager-7ff89c67c-4622z openshift-kube-controller-manager-operator 58m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-3-ip-10-0-239-132.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-route-controller-manager 58m Normal SuccessfulDelete replicaset/route-controller-manager-795466d555 Deleted pod: route-controller-manager-795466d555-hwftm openshift-etcd 58m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd-operator 58m Normal Created pod/etcd-operator-775754ddff-xjxrm Created container etcd-operator openshift-kube-apiserver-operator 58m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" openshift-kube-controller-manager 58m Normal Started 
pod/installer-3-ip-10-0-239-132.ec2.internal Started container installer openshift-kube-controller-manager 58m Normal AddedInterface pod/installer-3-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.22/23] from ovn-kubernetes openshift-kube-scheduler-operator 58m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/localhost-recovery-client-token-6 -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager 58m Normal Created pod/installer-3-ip-10-0-239-132.ec2.internal Created container installer openshift-kube-controller-manager 58m Normal Pulled pod/installer-3-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-etcd 58m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-readyz openshift-etcd 58m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-readyz openshift-etcd-operator 58m Normal Pulled pod/etcd-operator-775754ddff-xjxrm Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd-operator 58m Warning PodCreateFailed deployment/etcd-operator Failed to create Pod/etcd-guard-ip-10-0-239-132.ec2.internal -n openshift-etcd: client rate limiter Wait returned an error: context canceled openshift-etcd-operator 58m Warning ScriptControllerErrorUpdatingStatus deployment/etcd-operator client rate limiter Wait returned an error: context canceled openshift-kube-scheduler-operator 58m Normal RevisionCreate deployment/openshift-kube-scheduler-operator Revision 5 created because configmap/serviceaccount-ca has changed openshift-authentication-operator 58m Normal Created pod/authentication-operator-dbb89644b-tbxcm Created container authentication-operator openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 58m Normal Started pod/etcd-operator-775754ddff-xjxrm Started container etcd-operator openshift-etcd-operator 58m Warning FastControllerResync deployment/etcd-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 58m Warning FastControllerResync deployment/etcd-operator Controller "RevisionController" resync interval is set to 0s which might lead to 
client request throttling openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 58m Warning FastControllerResync deployment/etcd-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 58m Normal Pulled pod/authentication-operator-dbb89644b-tbxcm Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8b9deb101306eca89fb04662fd5266a3704ad19d6e54cae5ae79e373c0ec62d" already present on machine openshift-etcd-operator 58m Normal OperatorLogLevelChange deployment/etcd-operator Operator log level changed from "Debug" to "Normal" openshift-etcd-operator 58m Warning FastControllerResync deployment/etcd-operator Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 58m Warning ReportEtcdMembersErrorUpdatingStatus deployment/etcd-operator etcds.operator.openshift.io "cluster" not found openshift-etcd-operator 58m Warning FastControllerResync deployment/etcd-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 58m Normal LeaderElection configmap/openshift-cluster-etcd-operator-lock etcd-operator-775754ddff-xjxrm_58816624-bdfd-404a-88ac-5c2f2dd7de0c became leader openshift-etcd-operator 58m Warning FastControllerResync deployment/etcd-operator Controller "NodeController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 58m Normal Pulled pod/kube-apiserver-operator-79b598d5b4-dqp95 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-etcd-operator 58m Warning FastControllerResync deployment/etcd-operator Controller "PruneController" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 58m Warning FastControllerResync deployment/etcd-operator Controller "GuardController" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 58m Normal LeaderElection lease/openshift-cluster-etcd-operator-lock etcd-operator-775754ddff-xjxrm_58816624-bdfd-404a-88ac-5c2f2dd7de0c became leader openshift-kube-apiserver-operator 58m Warning FastControllerResync deployment/kube-apiserver-operator Controller "GuardController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 58m Normal LeaderElection lease/kube-apiserver-operator-lock kube-apiserver-operator-79b598d5b4-dqp95_16b3a423-8831-4ab9-9ab7-b7bc96f8bbf7 became leader openshift-kube-apiserver-operator 58m Warning FastControllerResync 
deployment/kube-apiserver-operator Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling openshift-authentication-operator 58m Normal LeaderElection configmap/cluster-authentication-operator-lock authentication-operator-dbb89644b-tbxcm_c291144e-34f4-48ea-805f-84b20fb85392 became leader openshift-config-operator 58m Normal Pulled pod/openshift-config-operator-67bdbffb68-sdgx7 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6eca04bc4045ccf6694e6e0c94453e9c1d8dcbb669a58419603b3c2aab18488b" already present on machine openshift-kube-apiserver-operator 58m Warning FastControllerResync deployment/kube-apiserver-operator Controller "KubeletVersionSkewController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 58m Warning FastControllerResync deployment/kube-apiserver-operator Controller "webhookSupportabilityController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler 58m Normal Killing pod/installer-5-ip-10-0-239-132.ec2.internal Stopping container installer openshift-kube-apiserver-operator 58m Normal Created pod/kube-apiserver-operator-79b598d5b4-dqp95 Created container kube-apiserver-operator openshift-kube-apiserver-operator 58m Normal Started pod/kube-apiserver-operator-79b598d5b4-dqp95 Started container kube-apiserver-operator openshift-kube-apiserver-operator 58m Warning FastControllerResync deployment/kube-apiserver-operator Controller "PruneController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 58m Warning FastControllerResync deployment/kube-apiserver-operator Controller "FeatureUpgradeableController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 58m Warning FastControllerResync deployment/kube-apiserver-operator Controller "NodeController" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 58m Normal Pulled pod/openshift-apiserver-operator-67fd94b9d7-nvg29 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:55b8c96568666d4340d71558c31742bd8b5c02ab0cca7913fa41586d5f2de697" already present on machine openshift-authentication-operator 58m Normal LeaderElection lease/cluster-authentication-operator-lock authentication-operator-dbb89644b-tbxcm_c291144e-34f4-48ea-805f-84b20fb85392 became leader openshift-kube-apiserver-operator 58m Warning FastControllerResync deployment/kube-apiserver-operator Controller "EventWatchController" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 58m Normal Started pod/authentication-operator-dbb89644b-tbxcm Started container authentication-operator openshift-kube-scheduler-operator 58m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 6" openshift-controller-manager-operator 58m Normal 
Pulled pod/openshift-controller-manager-operator-6548869cc5-9kqx5 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8066a640500eaaf14c73b769e8792c0b420a927adb8db98ec47d9440a85d32" already present on machine openshift-kube-apiserver-operator 58m Warning FastControllerResync deployment/kube-apiserver-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 58m Warning FastControllerResync deployment/kube-apiserver-operator Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 58m Warning FastControllerResync deployment/authentication-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 58m Warning FastControllerResync deployment/authentication-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 58m Warning FastControllerResync deployment/authentication-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 58m Warning FastControllerResync deployment/authentication-operator Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling openshift-authentication-operator 58m Warning FastControllerResync deployment/authentication-operator Controller "SecretRevisionPruneController" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 58m Warning FastControllerResync deployment/authentication-operator Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling openshift-kube-apiserver-operator 58m Warning FastControllerResync deployment/kube-apiserver-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 58m Warning FastControllerResync deployment/authentication-operator Controller "OAuthAPIServerControllerWorkloadController" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 58m Warning FastControllerResync deployment/authentication-operator Controller "OAuthServerWorkloadController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 58m Warning FastControllerResync deployment/kube-apiserver-operator Controller "ConnectivityCheckController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 58m Warning FastControllerResync deployment/kube-apiserver-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 58m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling openshift-apiserver-operator 58m Normal Created pod/openshift-apiserver-operator-67fd94b9d7-nvg29 Created container openshift-apiserver-operator openshift-apiserver-operator 58m Warning FastControllerResync deployment/openshift-apiserver-operator Controller 
"OpenShiftAPIServerWorkloadController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 58m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-2,config-2,etcd-serving-ca-2,kube-apiserver-audit-policies-2,kube-apiserver-cert-syncer-kubeconfig-2,kube-apiserver-pod-2,kubelet-serving-ca-2,sa-token-signing-certs-2, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" openshift-apiserver-operator 58m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling openshift-apiserver-operator 58m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 58m Normal Started pod/openshift-apiserver-operator-67fd94b9d7-nvg29 Started container openshift-apiserver-operator openshift-apiserver-operator 58m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 58m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required 
resources: [configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-2,config-2,etcd-serving-ca-2,kube-apiserver-audit-policies-2,kube-apiserver-cert-syncer-kubeconfig-2,kube-apiserver-pod-2,kubelet-serving-ca-2,sa-token-signing-certs-2, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" openshift-apiserver-operator 58m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "ConnectivityCheckController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 58m Warning RequiredInstallerResourcesMissing deployment/kube-apiserver-operator configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-2,config-2,etcd-serving-ca-2,kube-apiserver-audit-policies-2,kube-apiserver-cert-syncer-kubeconfig-2,kube-apiserver-pod-2,kubelet-serving-ca-2,sa-token-signing-certs-2, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2 openshift-controller-manager-operator 58m Normal Started pod/openshift-controller-manager-operator-6548869cc5-9kqx5 Started container openshift-controller-manager-operator openshift-config-operator 58m Normal LeaderElection configmap/config-operator-lock openshift-config-operator-67bdbffb68-sdgx7_60b3000b-90fd-4f7b-bcb6-4549f0cc94b9 became leader openshift-config-operator 58m Normal LeaderElection lease/config-operator-lock openshift-config-operator-67bdbffb68-sdgx7_60b3000b-90fd-4f7b-bcb6-4549f0cc94b9 became leader openshift-config-operator 58m Warning FastControllerResync deployment/openshift-config-operator Controller "ConfigOperatorController" resync interval is set to 10s which might 
lead to client request throttling openshift-controller-manager-operator 58m Normal Created pod/openshift-controller-manager-operator-6548869cc5-9kqx5 Created container openshift-controller-manager-operator openshift-config-operator 58m Normal Created pod/openshift-config-operator-67bdbffb68-sdgx7 Created container openshift-config-operator openshift-config-operator 58m Normal Started pod/openshift-config-operator-67bdbffb68-sdgx7 Started container openshift-config-operator openshift-apiserver-operator 58m Normal LeaderElection configmap/openshift-apiserver-operator-lock openshift-apiserver-operator-67fd94b9d7-nvg29_2a884a06-bad1-40ba-80a3-458233730b40 became leader openshift-config-operator 58m Warning FastControllerResync deployment/openshift-config-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 58m Normal LeaderElection lease/openshift-apiserver-operator-lock openshift-apiserver-operator-67fd94b9d7-nvg29_2a884a06-bad1-40ba-80a3-458233730b40 became leader openshift-apiserver-operator 58m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "SecretRevisionPruneController" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 58m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling default 58m Normal OperatorVersionChanged /machine-config clusteroperator/machine-config-operator version changed from [] to [{operator 4.13.0-rc.0}] openshift-kube-apiserver-operator 58m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nInstallerControllerDegraded: missing required resources: [secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nInstallerControllerDegraded: missing required resources: [secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists\nConfigObservationDegraded: configmaps 
openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" openshift-kube-scheduler-operator 58m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-6-ip-10-0-239-132.ec2.internal -n openshift-kube-scheduler because it was missing openshift-kube-scheduler 58m Normal Pulled pod/revision-pruner-6-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-scheduler 58m Normal AddedInterface pod/revision-pruner-6-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.23/23] from ovn-kubernetes openshift-kube-scheduler-operator 58m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/revision-pruner-6-ip-10-0-239-132.ec2.internal -n openshift-kube-scheduler because it was missing openshift-cluster-version 58m Normal Pulling pod/cluster-version-operator-5d74b9d6f5-qzcfb Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" openshift-kube-apiserver-operator 58m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nInstallerControllerDegraded: missing required resources: [secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nInstallerControllerDegraded: missing required resources: [secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists\nConfigObservationDegraded: configmaps 
openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" openshift-kube-scheduler 58m Normal Pulled pod/installer-6-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 58m Normal AddedInterface pod/installer-6-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.24/23] from ovn-kubernetes openshift-kube-apiserver-operator 58m Warning RequiredInstallerResourcesMissing deployment/kube-apiserver-operator secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2 openshift-kube-scheduler 58m Normal Created pod/installer-6-ip-10-0-239-132.ec2.internal Created container installer openshift-kube-scheduler 58m Normal Started pod/installer-6-ip-10-0-239-132.ec2.internal Started container installer openshift-kube-scheduler 58m Normal Created pod/revision-pruner-6-ip-10-0-239-132.ec2.internal Created container pruner openshift-kube-apiserver-operator 58m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nInstallerControllerDegraded: missing required resources: [secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" openshift-kube-scheduler 58m Normal Started pod/revision-pruner-6-ip-10-0-239-132.ec2.internal Started container pruner openshift-kube-apiserver-operator 58m Warning ObserveStorageFailed deployment/kube-apiserver-operator configmaps 
openshift-etcd/etcd-endpoints: no etcd endpoint addresses found openshift-kube-controller-manager-operator 58m Normal RevisionTriggered deployment/kube-controller-manager-operator new revision 4 triggered by "configmap/serviceaccount-ca has changed" openshift-route-controller-manager 58m Warning ProbeError pod/route-controller-manager-795466d555-hwftm Readiness probe error: Get "https://10.128.0.21:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)... openshift-route-controller-manager 58m Warning Unhealthy pod/route-controller-manager-795466d555-hwftm Readiness probe failed: Get "https://10.128.0.21:8443/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) openshift-etcd-operator 58m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" openshift-cluster-version 58m Normal Started pod/cluster-version-operator-5d74b9d6f5-qzcfb Started container cluster-version-operator openshift-cluster-version 58m Normal Created pod/cluster-version-operator-5d74b9d6f5-qzcfb Created container cluster-version-operator openshift-cluster-version 58m Normal Pulled pod/cluster-version-operator-5d74b9d6f5-qzcfb Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" in 1.718903405s (1.718915589s including waiting) openshift-kube-scheduler-operator 58m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/revision-pruner-6-ip-10-0-140-6.ec2.internal -n openshift-kube-scheduler because it was missing openshift-etcd 58m Normal Started pod/etcd-guard-ip-10-0-239-132.ec2.internal Started container guard openshift-etcd 58m Normal AddedInterface pod/etcd-guard-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.25/23] from ovn-kubernetes openshift-etcd 58m Normal Pulled pod/etcd-guard-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 58m Normal Created pod/etcd-guard-ip-10-0-239-132.ec2.internal Created container guard openshift-marketplace 58m Normal Pulled pod/redhat-operators-jzt5b Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.12" in 22.876676792s (22.87668718s including waiting) openshift-controller-manager 58m Normal AddedInterface pod/controller-manager-77cd478b57-m9m5f Add eth0 [10.128.0.22/23] from ovn-kubernetes openshift-kube-controller-manager-operator 58m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/revision-status-4 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 57m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing openshift-controller-manager 57m Normal Pulled 
pod/controller-manager-77cd478b57-m9m5f Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" already present on machine openshift-route-controller-manager 57m Normal Pulled pod/route-controller-manager-7ff89c67c-4622z Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" already present on machine openshift-kube-scheduler 57m Normal Pulling pod/revision-pruner-6-ip-10-0-140-6.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" openshift-marketplace 57m Normal Started pod/redhat-operators-jzt5b Started container registry-server openshift-route-controller-manager 57m Normal Started pod/route-controller-manager-7ff89c67c-4622z Started container route-controller-manager openshift-route-controller-manager 57m Normal Created pod/route-controller-manager-7ff89c67c-4622z Created container route-controller-manager openshift-kube-scheduler 57m Normal AddedInterface pod/revision-pruner-6-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.23/23] from ovn-kubernetes openshift-marketplace 57m Normal Created pod/redhat-operators-jzt5b Created container registry-server openshift-route-controller-manager 57m Normal LeaderElection lease/openshift-route-controllers route-controller-manager-7ff89c67c-4622z became leader openshift-network-operator 57m Normal Pulled pod/network-operator-6c9d58d76b-pl9td Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" already present on machine openshift-route-controller-manager 57m Normal AddedInterface pod/route-controller-manager-7ff89c67c-4622z Add eth0 [10.128.0.24/23] from ovn-kubernetes openshift-network-operator 57m Normal LeaderElection configmap/network-operator-lock ip-10-0-239-132_87721e6f-93c9-498c-bf74-5efc12ba2fb0 became leader openshift-marketplace 57m Normal Pulled pod/community-operators-7jr7c Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.12" in 24.4046488s (24.404658509s including waiting) openshift-marketplace 57m Normal Created pod/community-operators-7jr7c Created container registry-server openshift-marketplace 57m Normal Pulled pod/certified-operators-77trp Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.12" in 24.398724132s (24.398745326s including waiting) openshift-marketplace 57m Normal Started pod/redhat-marketplace-crqrm Started container registry-server openshift-authentication-operator 57m Normal SecretCreated deployment/authentication-operator Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get 
\"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " openshift-marketplace 57m Normal Started pod/certified-operators-77trp Started container registry-server openshift-controller-manager 57m Normal Created pod/controller-manager-77cd478b57-m9m5f Created container controller-manager openshift-marketplace 57m Normal Started pod/community-operators-7jr7c Started container registry-server openshift-kube-scheduler-operator 57m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-controller-manager 57m Normal Started pod/controller-manager-77cd478b57-m9m5f Started container controller-manager openshift-kube-scheduler-operator 57m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-marketplace 57m Normal Created pod/redhat-marketplace-crqrm Created container registry-server openshift-marketplace 57m Normal Pulled pod/redhat-marketplace-crqrm Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" in 24.223900854s (24.223907505s including waiting) openshift-controller-manager 57m Normal LeaderElection lease/openshift-master-controllers controller-manager-77cd478b57-m9m5f became leader openshift-network-operator 57m Normal Started pod/network-operator-6c9d58d76b-pl9td Started container network-operator openshift-marketplace 57m Normal Created pod/certified-operators-77trp Created container registry-server openshift-kube-controller-manager-operator 57m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing 
openshift-network-operator 57m Warning FastControllerResync deployment/network-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-network-operator 57m Normal LeaderElection lease/network-operator-lock ip-10-0-239-132_87721e6f-93c9-498c-bf74-5efc12ba2fb0 became leader openshift-kube-apiserver-operator 57m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-2-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 57m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/revision-pruner-6-ip-10-0-197-197.ec2.internal -n openshift-kube-scheduler because it was missing openshift-network-operator 57m Normal Created pod/network-operator-6c9d58d76b-pl9td Created container network-operator openshift-controller-manager 57m Normal LeaderElection configmap/openshift-master-controllers controller-manager-77cd478b57-m9m5f became leader openshift-kube-apiserver 57m Normal Created pod/installer-2-ip-10-0-197-197.ec2.internal Created container installer openshift-kube-apiserver 57m Normal AddedInterface pod/installer-2-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.43/23] from ovn-kubernetes openshift-kube-apiserver 57m Normal Pulled pod/installer-2-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-scheduler 57m Normal AddedInterface pod/revision-pruner-6-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.44/23] from ovn-kubernetes openshift-kube-scheduler 57m Normal Pulled pod/revision-pruner-6-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 57m Normal Created pod/revision-pruner-6-ip-10-0-197-197.ec2.internal Created container pruner openshift-controller-manager 57m Normal Killing pod/controller-manager-77cd478b57-m9m5f Stopping container controller-manager openshift-machine-api 57m Warning FailedUpdate machine/qeaisrhods-c13-28wr5-worker-us-east-1a-tfwzm qeaisrhods-c13-28wr5-worker-us-east-1a-tfwzm: reconciler failed to Update machine: requeue in: 20s openshift-kube-apiserver 57m Normal Started pod/installer-2-ip-10-0-197-197.ec2.internal Started container installer openshift-kube-scheduler 57m Normal Started pod/revision-pruner-6-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-controller-manager-operator 57m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler 57m Normal Started pod/revision-pruner-6-ip-10-0-140-6.ec2.internal Started container pruner openshift-kube-scheduler 57m Normal Created pod/revision-pruner-6-ip-10-0-140-6.ec2.internal Created container pruner openshift-kube-controller-manager-operator 57m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler 57m Normal Pulled pod/revision-pruner-6-ip-10-0-140-6.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" in 2.406565251s 
(2.406576509s including waiting) openshift-kube-controller-manager-operator 57m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing openshift-insights 57m Normal Pulled pod/insights-operator-6fd65c6b65-vrxhp Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7cb4c45f3e100ceddafee4c6ccd57d79f5a6627686484aba625c1486c2ffc1c8" already present on machine openshift-kube-controller-manager-operator 57m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing openshift-insights 57m Normal Started pod/insights-operator-6fd65c6b65-vrxhp Started container insights-operator openshift-kube-controller-manager 57m Normal Killing pod/installer-2-ip-10-0-239-132.ec2.internal Stopping container installer openshift-insights 57m Normal Created pod/insights-operator-6fd65c6b65-vrxhp Created container insights-operator openshift-etcd 57m Warning Unhealthy pod/etcd-ip-10-0-239-132.ec2.internal Startup probe failed: Get "https://10.0.239.132:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) openshift-kube-controller-manager-operator 57m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 57m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing openshift-controller-manager 57m Normal AddedInterface pod/controller-manager-6fcd58c8dc-wdb9f Add eth0 [10.128.0.25/23] from ovn-kubernetes openshift-kube-controller-manager-operator 57m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing openshift-controller-manager 57m Normal Pulled pod/controller-manager-6fcd58c8dc-wdb9f Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" already present on machine openshift-controller-manager 57m Normal Started pod/controller-manager-6fcd58c8dc-wdb9f Started container controller-manager openshift-controller-manager 57m Normal LeaderElection lease/openshift-master-controllers controller-manager-6fcd58c8dc-wdb9f became leader openshift-controller-manager 57m Normal LeaderElection configmap/openshift-master-controllers controller-manager-6fcd58c8dc-wdb9f became leader openshift-controller-manager 57m Normal Created pod/controller-manager-6fcd58c8dc-wdb9f Created container controller-manager openshift-controller-manager 57m Normal Killing pod/controller-manager-77cd478b57-z4w9g Stopping container controller-manager openshift-controller-manager 57m Normal SuccessfulDelete replicaset/controller-manager-77cd478b57 Deleted pod: controller-manager-77cd478b57-z4w9g openshift-controller-manager 57m Normal SuccessfulCreate replicaset/controller-manager-6fcd58c8dc Created pod: controller-manager-6fcd58c8dc-dnsjp openshift-multus 57m Normal AddedInterface pod/multus-admission-controller-6896747cbb-ljc49 Add eth0 [10.128.0.26/23] from ovn-kubernetes openshift-multus 57m Normal Pulling pod/multus-admission-controller-6896747cbb-ljc49 Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c3cca6e2da92a6cd38e7f20f77bffc675895bd800157fdb50261b7f7ea9fc90" openshift-multus 57m Normal ScalingReplicaSet deployment/multus-admission-controller Scaled up replica set multus-admission-controller-6896747cbb to 1 openshift-controller-manager 57m Normal Pulled pod/controller-manager-6fcd58c8dc-dnsjp Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" already present on machine openshift-multus 57m Normal SuccessfulCreate replicaset/multus-admission-controller-6896747cbb Created pod: multus-admission-controller-6896747cbb-ljc49 openshift-kube-controller-manager-operator 57m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing openshift-controller-manager 57m Normal AddedInterface pod/controller-manager-6fcd58c8dc-dnsjp Add eth0 [10.129.0.26/23] from ovn-kubernetes openshift-controller-manager 57m Normal SuccessfulDelete replicaset/controller-manager-77cd478b57 Deleted pod: controller-manager-77cd478b57-4s2qm openshift-controller-manager 57m Normal SuccessfulCreate replicaset/controller-manager-6fcd58c8dc Created pod: controller-manager-6fcd58c8dc-6vtpl openshift-kube-controller-manager-operator 57m Normal RevisionCreate deployment/kube-controller-manager-operator Revision 3 created because configmap/serviceaccount-ca has changed openshift-controller-manager 57m Normal Killing pod/controller-manager-77cd478b57-4s2qm Stopping container controller-manager openshift-controller-manager 57m Normal ScalingReplicaSet deployment/controller-manager (combined from similar events): Scaled up replica set controller-manager-6fcd58c8dc to 3 from 2 openshift-controller-manager 57m Normal Created pod/controller-manager-6fcd58c8dc-dnsjp Created container controller-manager openshift-machine-api 57m Normal Update machine/qeaisrhods-c13-28wr5-worker-us-east-1a-cp7f7 Updated Machine qeaisrhods-c13-28wr5-worker-us-east-1a-cp7f7 openshift-controller-manager 57m Normal Started pod/controller-manager-6fcd58c8dc-dnsjp Started container controller-manager openshift-kube-controller-manager-operator 57m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing openshift-multus 57m Normal Pulled pod/multus-admission-controller-6896747cbb-ljc49 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-multus 57m Normal Started pod/multus-admission-controller-6896747cbb-ljc49 Started container multus-admission-controller openshift-multus 57m Normal Started pod/multus-admission-controller-6896747cbb-ljc49 Started container kube-rbac-proxy openshift-multus 57m Normal Created pod/multus-admission-controller-6896747cbb-ljc49 Created container kube-rbac-proxy openshift-multus 57m Normal Created pod/multus-admission-controller-6896747cbb-ljc49 Created container multus-admission-controller openshift-multus 57m Normal Pulled pod/multus-admission-controller-6896747cbb-ljc49 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c3cca6e2da92a6cd38e7f20f77bffc675895bd800157fdb50261b7f7ea9fc90" in 1.270494074s (1.270508319s including waiting) openshift-multus 57m Normal AddedInterface pod/multus-admission-controller-6896747cbb-rlm9s Add 
eth0 [10.130.0.46/23] from ovn-kubernetes openshift-controller-manager 57m Normal AddedInterface pod/controller-manager-6fcd58c8dc-6vtpl Add eth0 [10.130.0.45/23] from ovn-kubernetes openshift-controller-manager 57m Normal Pulled pod/controller-manager-6fcd58c8dc-6vtpl Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" already present on machine openshift-controller-manager 57m Normal Created pod/controller-manager-6fcd58c8dc-6vtpl Created container controller-manager openshift-multus 57m Normal Pulled pod/multus-admission-controller-6896747cbb-rlm9s Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c3cca6e2da92a6cd38e7f20f77bffc675895bd800157fdb50261b7f7ea9fc90" already present on machine openshift-multus 57m Normal SuccessfulCreate replicaset/multus-admission-controller-6896747cbb Created pod: multus-admission-controller-6896747cbb-rlm9s openshift-kube-controller-manager-operator 57m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" openshift-multus 57m Normal ScalingReplicaSet deployment/multus-admission-controller Scaled up replica set multus-admission-controller-6896747cbb to 2 from 1 openshift-multus 57m Normal ScalingReplicaSet deployment/multus-admission-controller Scaled down replica set multus-admission-controller-6f95d97cb6 to 1 from 2 openshift-multus 57m Normal SuccessfulDelete replicaset/multus-admission-controller-6f95d97cb6 Deleted pod: multus-admission-controller-6f95d97cb6-7wv72 openshift-etcd-operator 57m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "MachineDeletionHooksControllerDegraded: Operation cannot be fulfilled on machines.machine.openshift.io \"qeaisrhods-c13-28wr5-master-2\": the object has been modified; please apply your changes to the latest version and try again\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-controller-manager 57m Normal Killing pod/installer-3-ip-10-0-239-132.ec2.internal Stopping container installer openshift-multus 57m Normal Killing pod/multus-admission-controller-6f95d97cb6-7wv72 Stopping container multus-admission-controller openshift-multus 57m Normal Killing pod/multus-admission-controller-6f95d97cb6-7wv72 Stopping container kube-rbac-proxy openshift-controller-manager 57m Normal Started pod/controller-manager-6fcd58c8dc-6vtpl Started container controller-manager 
openshift-multus 57m Normal Killing pod/multus-admission-controller-6f95d97cb6-x5s87 Stopping container multus-admission-controller openshift-multus 57m Normal Killing pod/multus-admission-controller-6f95d97cb6-x5s87 Stopping container kube-rbac-proxy openshift-multus 57m Normal Started pod/multus-admission-controller-6896747cbb-rlm9s Started container kube-rbac-proxy openshift-multus 57m Normal SuccessfulDelete replicaset/multus-admission-controller-6f95d97cb6 Deleted pod: multus-admission-controller-6f95d97cb6-x5s87 openshift-multus 57m Normal Created pod/multus-admission-controller-6896747cbb-rlm9s Created container multus-admission-controller openshift-multus 57m Normal ScalingReplicaSet deployment/multus-admission-controller Scaled down replica set multus-admission-controller-6f95d97cb6 to 0 from 1 openshift-multus 57m Normal Created pod/multus-admission-controller-6896747cbb-rlm9s Created container kube-rbac-proxy openshift-multus 57m Normal Pulled pod/multus-admission-controller-6896747cbb-rlm9s Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-multus 57m Normal Started pod/multus-admission-controller-6896747cbb-rlm9s Started container multus-admission-controller openshift-kube-controller-manager-operator 57m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-4-ip-10-0-239-132.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager 57m Normal Created pod/installer-4-ip-10-0-239-132.ec2.internal Created container installer openshift-kube-controller-manager 57m Normal Pulled pod/installer-4-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-etcd 57m Warning ProbeError pod/etcd-ip-10-0-239-132.ec2.internal Startup probe error: Get "https://10.0.239.132:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)... 
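The ProbeError and Unhealthy entries above correspond to the kubelet's startup probe timing out against https://10.0.239.132:9980/readyz. A minimal Python sketch of the same request, assuming direct network access to that node IP and skipping certificate verification purely to keep the illustration self-contained:

    import ssl
    import urllib.request

    # The URL below is taken from the ProbeError event; a timeout here is the
    # same failure mode the startup probe reports.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # illustration only; do not disable verification in real tooling

    try:
        with urllib.request.urlopen(
            "https://10.0.239.132:9980/readyz", timeout=5, context=ctx
        ) as resp:
            print(resp.status, resp.read().decode())
    except Exception as exc:
        print("readyz check failed:", exc)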
openshift-kube-controller-manager 57m Normal Started pod/installer-4-ip-10-0-239-132.ec2.internal Started container installer openshift-apiserver-operator 57m Warning ObserveStorageFailed deployment/openshift-apiserver-operator configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found openshift-kube-scheduler-operator 57m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-kube-scheduler-operator 57m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" openshift-kube-controller-manager 57m Normal AddedInterface pod/installer-4-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.27/23] from ovn-kubernetes openshift-kube-controller-manager-operator 57m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-kube-apiserver-operator 57m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nInstallerControllerDegraded: missing required resources: [secrets: 
aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nInstallerControllerDegraded: missing required resources: [secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists\nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found" openshift-kube-controller-manager-operator 57m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing 
openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/revision-status-3 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing openshift-etcd 57m Warning Unhealthy pod/etcd-guard-ip-10-0-239-132.ec2.internal Readiness probe failed: Get "https://10.0.239.132:9980/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) openshift-kube-apiserver-operator 57m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing openshift-etcd-operator 57m Warning UnhealthyEtcdMember deployment/etcd-operator unhealthy members: NAME-PENDING-10.0.239.132 openshift-authentication-operator 57m Warning ObserveStorageFailed deployment/authentication-operator configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found openshift-etcd-operator 57m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "MachineDeletionHooksControllerDegraded: Operation cannot be fulfilled on machines.machine.openshift.io \"qeaisrhods-c13-28wr5-master-2\": the object has been modified; please apply your changes to the latest version and try again\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 57m Normal MemberAddAsLearner deployment/etcd-operator successfully added new member https://10.0.239.132:2380 openshift-etcd-operator 57m Warning 
UnstartedEtcdMember deployment/etcd-operator unstarted members: NAME-PENDING-10.0.239.132 openshift-kube-scheduler-operator 57m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]") openshift-etcd-operator 57m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-machine-api 57m Normal Update machine/qeaisrhods-c13-28wr5-worker-us-east-1a-tfwzm Updated Machine qeaisrhods-c13-28wr5-worker-us-east-1a-tfwzm openshift-authentication-operator 57m Normal ObserveStorageUpdated deployment/authentication-operator Updated storage urls to https://10.0.239.132:2379 openshift-etcd-operator 57m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 57m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: no etcd members are present\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 57m Normal ConfigMapUpdated deployment/etcd-operator Updated ConfigMap/etcd-endpoints -n openshift-etcd:... 
openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthAPIServerConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " openshift-etcd-operator 57m Normal RevisionTriggered deployment/etcd-operator new revision 3 triggered by "configmap/etcd-endpoints has changed" openshift-authentication-operator 57m Normal ObservedConfigChanged deployment/authentication-operator Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\"etcd-servers\": []any{string(\"https://10.0.239.132:2379\")},\n\u00a0\u00a0\t\t\"tls-cipher-suites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n\u00a0\u00a0\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" openshift-etcd-operator 57m Normal MemberPromote deployment/etcd-operator successfully promoted learner member https://10.0.239.132:2380 openshift-etcd-operator 57m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/revision-status-3 -n openshift-etcd because it was missing openshift-oauth-apiserver 57m Normal SuccessfulCreate replicaset/apiserver-89645c77 Created pod: apiserver-89645c77-fdwmw openshift-oauth-apiserver 57m Normal SuccessfulCreate replicaset/apiserver-89645c77 Created pod: apiserver-89645c77-26sj6 openshift-authentication-operator 57m Normal DeploymentCreated deployment/authentication-operator Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("ConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nInstallerControllerDegraded: missing 
required resources: [secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists") openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nAPIServerDeploymentDegraded: waiting for observed configuration to have mandatory apiServerArguments.etcd-servers\nAPIServerDeploymentDegraded: " to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 0, desired generation is 1."),Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
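The OperatorStatusChanged entries above embed the entire old and new condition text in a single 'changed from "..." to "..."' message, which makes the actual change hard to spot. A rough Python sketch, assuming the quoting and the literal \n separators used in this listing, that prints only the lines that differ between the two halves:

    import difflib
    import re

    # Matches each 'changed from "<old>" to "<new>"' pair, allowing escaped
    # quotes (\") inside the condition text as they appear in the dump above.
    PAIR = re.compile(r'changed from "((?:[^"\\]|\\.)*)" to "((?:[^"\\]|\\.)*)"')

    def status_change_diffs(message: str) -> str:
        chunks = []
        for before, after in PAIR.findall(message):
            old = before.replace('\\"', '"').split("\\n")
            new = after.replace('\\"', '"').split("\\n")
            chunks.append("\n".join(
                difflib.unified_diff(old, new, "before", "after", lineterm="")
            ))
        return "\n\n".join(chunks) or message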
openshift-oauth-apiserver 57m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-89645c77 to 3 openshift-oauth-apiserver 57m Normal SuccessfulCreate replicaset/apiserver-89645c77 Created pod: apiserver-89645c77-szcw6 openshift-apiserver-operator 57m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded changed from False to True ("APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: \nConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found") openshift-kube-controller-manager-operator 57m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-etcd-operator 57m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-pod-3 -n openshift-etcd because it was missing openshift-oauth-apiserver 57m Normal Pulling pod/apiserver-89645c77-szcw6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" openshift-oauth-apiserver 57m Normal AddedInterface pod/apiserver-89645c77-szcw6 Add eth0 [10.129.0.28/23] from ovn-kubernetes openshift-oauth-apiserver 57m Normal AddedInterface pod/apiserver-89645c77-fdwmw Add eth0 [10.128.0.27/23] from ovn-kubernetes openshift-oauth-apiserver 57m Normal Pulling pod/apiserver-89645c77-fdwmw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" openshift-oauth-apiserver 57m Normal AddedInterface pod/apiserver-89645c77-26sj6 Add eth0 [10.130.0.47/23] from ovn-kubernetes openshift-kube-controller-manager-operator 57m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup 
thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nNodeControllerDegraded: All master nodes are ready" openshift-oauth-apiserver 57m Normal Pulling pod/apiserver-89645c77-26sj6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" openshift-kube-scheduler-operator 57m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-kube-apiserver-operator 57m Normal ObservedConfigChanged deployment/kube-apiserver-operator Writing updated observed config:   map[string]any{... openshift-etcd-operator 57m Normal NodeCurrentRevisionChanged deployment/etcd-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 0 to 2 because static pod is ready openshift-etcd-operator 57m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 2",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2\nEtcdMembersAvailable: 1 members are available") openshift-oauth-apiserver 57m Normal Pulled pod/apiserver-89645c77-szcw6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" in 1.741841886s (1.741856298s including waiting) openshift-oauth-apiserver 57m Normal Pulled pod/apiserver-89645c77-26sj6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" in 1.883155865s (1.883169062s including waiting) openshift-oauth-apiserver 57m Normal Started pod/apiserver-89645c77-26sj6 Started container fix-audit-permissions openshift-oauth-apiserver 57m Normal Pulled pod/apiserver-89645c77-26sj6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-oauth-apiserver 57m Normal Started pod/apiserver-89645c77-szcw6 Started container oauth-apiserver openshift-oauth-apiserver 57m Normal Created pod/apiserver-89645c77-szcw6 Created container oauth-apiserver openshift-oauth-apiserver 57m Normal Pulled pod/apiserver-89645c77-szcw6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service 
endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nAPIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-89645c77-fdwmw pod, 2 containers are waiting in pending apiserver-89645c77-26sj6 pod, container is waiting in pending apiserver-89645c77-szcw6 pod)",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") openshift-oauth-apiserver 57m Normal Started pod/apiserver-89645c77-szcw6 Started container fix-audit-permissions openshift-oauth-apiserver 57m Normal Created pod/apiserver-89645c77-szcw6 Created container fix-audit-permissions openshift-oauth-apiserver 57m Normal Pulled pod/apiserver-89645c77-fdwmw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" in 2.160988193s (2.161002896s including waiting) openshift-oauth-apiserver 57m Normal Started pod/apiserver-89645c77-fdwmw Started container fix-audit-permissions openshift-etcd-operator 57m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-serving-ca-3 -n openshift-etcd because it was missing openshift-oauth-apiserver 57m Normal Created pod/apiserver-89645c77-26sj6 Created container fix-audit-permissions openshift-oauth-apiserver 57m Normal Created pod/apiserver-89645c77-fdwmw Created container fix-audit-permissions openshift-oauth-apiserver 57m Normal Started pod/apiserver-89645c77-fdwmw Started container oauth-apiserver openshift-oauth-apiserver 57m Normal Created pod/apiserver-89645c77-26sj6 Created container oauth-apiserver openshift-kube-apiserver-operator 57m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "ConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nInstallerControllerDegraded: missing required resources: [secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, secrets: etcd-client-2,localhost-recovery-client-token-2,localhost-recovery-serving-certkey-2]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists" to "ConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node 
ip-10-0-197-197.ec2.internal]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists" openshift-oauth-apiserver 57m Normal Created pod/apiserver-89645c77-fdwmw Created container oauth-apiserver openshift-oauth-apiserver 57m Normal Started pod/apiserver-89645c77-26sj6 Started container oauth-apiserver openshift-oauth-apiserver 57m Normal Pulled pod/apiserver-89645c77-fdwmw Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-etcd-operator 57m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-peer-client-ca-3 -n openshift-etcd because it was missing openshift-authentication-operator 57m Normal Created deployment/authentication-operator Created /v1.oauth.openshift.io because it was missing openshift-authentication-operator 57m Normal Created deployment/authentication-operator Created /v1.user.openshift.io because it was missing openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
openshift-authentication-operator 57m Normal OperatorVersionChanged deployment/authentication-operator clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.13.0-rc.0" openshift-authentication-operator 57m Warning OpenShiftAPICheckFailed deployment/authentication-operator "user.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
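The OpenShiftAPICheckFailed and APIServicesAvailable messages above are reporting on the Available condition of the v1.oauth.openshift.io and v1.user.openshift.io APIServices. A small sketch of checking that condition directly, assuming the `kubernetes` Python client is installed and a kubeconfig with cluster access is available:

    from kubernetes import client, config

    config.load_kube_config()
    reg = client.ApiregistrationV1Api()

    # Print the Available condition for the OpenShift aggregated APIServices.
    for svc in reg.list_api_service().items:
        if not svc.metadata.name.endswith("openshift.io"):
            continue
        available = next(
            (c.status for c in (svc.status.conditions or []) if c.type == "Available"),
            "Unknown",
        )
        print(f"{svc.metadata.name}: Available={available}")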
openshift-etcd-operator 57m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-serving-ca-3 -n openshift-etcd because it was missing openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nAPIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-89645c77-fdwmw pod, 2 containers are waiting in pending apiserver-89645c77-26sj6 pod, container is waiting in pending apiserver-89645c77-szcw6 pod)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.13.0-rc.0"}] to [{"operator" "4.13.0-rc.0"} {"oauth-apiserver" "4.13.0-rc.0"}] openshift-etcd-operator 57m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-client-ca-3 -n openshift-etcd because it was missing openshift-kube-apiserver-operator 57m Warning ObservedConfigWriteError deployment/kube-apiserver-operator Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again openshift-etcd-operator 57m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-endpoints-3 -n openshift-etcd because it was missing openshift-etcd-operator 57m Normal NodeTargetRevisionChanged deployment/etcd-operator Updating node "ip-10-0-140-6.ec2.internal" from revision 0 to 2 because node ip-10-0-140-6.ec2.internal static pod not found openshift-etcd-operator 57m Normal RevisionTriggered deployment/etcd-operator new revision 3 triggered by "configmap/etcd-pod has changed,configmap/etcd-endpoints has changed" openshift-kube-apiserver-operator 57m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "ConfigObservationDegraded: configmaps openshift-etcd/etcd-endpoints: no etcd endpoint addresses found\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists" to "ConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists" openshift-etcd-operator 57m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-all-certs-3 -n openshift-etcd because it was missing openshift-etcd-operator 57m Normal RevisionCreate deployment/etcd-operator Revision 2 created because configmap/etcd-endpoints has changed openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded changed from False to True ("IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready") openshift-apiserver-operator 57m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded changed from True to False ("APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: ") 
openshift-apiserver-operator 57m Normal ConfigMapCreated deployment/openshift-apiserver-operator Created ConfigMap/config -n openshift-apiserver because it was missing openshift-kube-apiserver-operator 57m Normal ObservedConfigChanged deployment/kube-apiserver-operator Writing updated observed config:   map[string]any{... openshift-kube-apiserver-operator 57m Normal ObserveStorageUpdated deployment/kube-apiserver-operator Updated storage urls to https://10.0.239.132:2379,https://localhost:2379 openshift-etcd-operator 57m Normal ConfigMapUpdated deployment/etcd-operator Updated ConfigMap/revision-status-3 -n openshift-etcd:... openshift-apiserver-operator 57m Normal ObserveStorageUpdated deployment/openshift-apiserver-operator Updated storage urls to https://10.0.239.132:2379 openshift-apiserver-operator 57m Normal ObservedConfigChanged deployment/openshift-apiserver-operator Writing updated observed config:   map[string]any{... openshift-kube-apiserver-operator 57m Normal ObserveWebhookTokenAuthenticator deployment/kube-apiserver-operator authentication-token webhook configuration status changed from false to true openshift-kube-apiserver-operator 57m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "ConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-server-ca\" already exists" to "ConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-apiserver-operator 57m Normal ConfigMapCreated deployment/openshift-apiserver-operator Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing openshift-etcd 57m Normal AddedInterface pod/installer-2-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.28/23] from ovn-kubernetes openshift-etcd-operator 57m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "RevisionControllerDegraded: conflicting latestAvailableRevision 3\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 57m Normal PodCreated deployment/etcd-operator Created Pod/installer-2-ip-10-0-140-6.ec2.internal -n openshift-etcd because it was missing openshift-etcd-operator 57m Normal ConfigMapUpdated deployment/etcd-operator Updated ConfigMap/etcd-pod-3 -n 
openshift-etcd:... openshift-etcd 57m Normal Pulling pod/installer-2-ip-10-0-140-6.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" openshift-etcd-operator 57m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "RevisionControllerDegraded: conflicting latestAvailableRevision 3\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-apiserver-operator 57m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to have mandatory StorageConfig.URLs\nAPIServerDeploymentDegraded: " to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 3.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" openshift-etcd-operator 57m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 2" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2\nEtcdMembersAvailable: 1 members are available" to "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 3\nEtcdMembersAvailable: 1 members are available" openshift-apiserver-operator 57m Normal DeploymentCreated deployment/openshift-apiserver-operator Created Deployment.apps/apiserver -n openshift-apiserver because it was missing openshift-apiserver 57m Normal SuccessfulCreate replicaset/apiserver-565b67b9f7 Created pod: apiserver-565b67b9f7-wvhp4 openshift-apiserver 57m Normal SuccessfulCreate replicaset/apiserver-565b67b9f7 Created pod: apiserver-565b67b9f7-w2dv2 openshift-apiserver-operator 57m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 3.") openshift-kube-apiserver 57m Normal StaticPodInstallerCompleted pod/installer-2-ip-10-0-197-197.ec2.internal Successfully installed revision 2 openshift-kube-controller-manager-operator 57m Normal 
OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]") openshift-apiserver 57m Normal SuccessfulCreate replicaset/apiserver-565b67b9f7 Created pod: apiserver-565b67b9f7-lvnhl openshift-apiserver 57m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-565b67b9f7 to 3 openshift-kube-apiserver 57m Normal Pulling pod/kube-apiserver-ip-10-0-197-197.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" openshift-kube-apiserver-operator 57m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "ConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nGuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-etcd 57m Normal Pulled pod/installer-2-ip-10-0-140-6.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" in 1.871067695s (1.871079699s including waiting) openshift-apiserver 57m Normal AddedInterface pod/apiserver-565b67b9f7-lvnhl Add eth0 [10.129.0.29/23] from ovn-kubernetes openshift-apiserver 57m Normal Pulling pod/apiserver-565b67b9f7-lvnhl Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" openshift-apiserver 57m Normal Pulling pod/apiserver-565b67b9f7-wvhp4 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" openshift-apiserver 57m Normal Pulling pod/apiserver-565b67b9f7-w2dv2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" openshift-apiserver 57m Normal AddedInterface pod/apiserver-565b67b9f7-w2dv2 Add eth0 [10.130.0.48/23] from ovn-kubernetes openshift-apiserver 57m Normal AddedInterface pod/apiserver-565b67b9f7-wvhp4 Add eth0 [10.128.0.29/23] from ovn-kubernetes openshift-etcd 57m Normal Created pod/installer-2-ip-10-0-140-6.ec2.internal Created container installer openshift-etcd 57m Normal Started pod/installer-2-ip-10-0-140-6.ec2.internal Started container installer openshift-kube-apiserver-operator 57m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node 
ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing PodIP in operand kube-apiserver-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing PodIP in operand kube-apiserver-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal]\nRevisionControllerDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" openshift-kube-apiserver-operator 57m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 3 triggered by "optional secret/webhook-authenticator has been created" openshift-kube-apiserver-operator 57m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing PodIP in operand kube-apiserver-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal]" openshift-apiserver 57m Normal Pulled pod/apiserver-565b67b9f7-lvnhl Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" in 2.780473873s (2.780487021s including waiting) openshift-apiserver-operator 57m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." 
openshift-apiserver 57m Normal Pulling pod/apiserver-565b67b9f7-wvhp4 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" openshift-apiserver 57m Normal Created pod/apiserver-565b67b9f7-lvnhl Created container fix-audit-permissions openshift-apiserver-operator 57m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-565b67b9f7-w2dv2 pod, 3 containers are waiting in pending apiserver-565b67b9f7-wvhp4 pod, 3 containers are waiting in pending apiserver-565b67b9f7-lvnhl pod)",Progressing changed from True to False ("All is well") openshift-apiserver 57m Normal Started pod/apiserver-565b67b9f7-lvnhl Started container fix-audit-permissions openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.129.0.28:8443/apis/oauth.openshift.io/v1: Get \"https://10.129.0.28:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
openshift-apiserver 57m Normal Pulled pod/apiserver-565b67b9f7-lvnhl Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-apiserver 57m Normal Created pod/apiserver-565b67b9f7-lvnhl Created container openshift-apiserver openshift-apiserver 57m Normal Pulling pod/apiserver-565b67b9f7-lvnhl Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.129.0.28:8443/apis/oauth.openshift.io/v1: Get \"https://10.129.0.28:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.129.0.28:8443/apis/oauth.openshift.io/v1: Get \"https://10.129.0.28:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
openshift-apiserver 57m Normal Pulled pod/apiserver-565b67b9f7-w2dv2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" in 3.144098044s (3.144117133s including waiting) openshift-apiserver 57m Normal Started pod/apiserver-565b67b9f7-wvhp4 Started container openshift-apiserver openshift-apiserver 57m Normal Created pod/apiserver-565b67b9f7-w2dv2 Created container fix-audit-permissions openshift-apiserver 57m Normal Pulled pod/apiserver-565b67b9f7-wvhp4 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" in 2.430792982s (2.430823783s including waiting) openshift-apiserver 57m Normal Created pod/apiserver-565b67b9f7-wvhp4 Created container fix-audit-permissions openshift-apiserver 57m Normal Pulled pod/apiserver-565b67b9f7-w2dv2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-apiserver 57m Normal Started pod/apiserver-565b67b9f7-wvhp4 Started container fix-audit-permissions openshift-apiserver 57m Normal Pulled pod/apiserver-565b67b9f7-wvhp4 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-apiserver 57m Normal Created pod/apiserver-565b67b9f7-wvhp4 Created container openshift-apiserver openshift-apiserver 57m Normal Started pod/apiserver-565b67b9f7-w2dv2 Started container fix-audit-permissions openshift-apiserver 57m Normal Started pod/apiserver-565b67b9f7-lvnhl Started container openshift-apiserver openshift-kube-apiserver-operator 57m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing PodIP in operand kube-apiserver-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal]\nRevisionControllerDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing PodIP in operand kube-apiserver-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal]" openshift-apiserver 57m Normal Pulled pod/apiserver-565b67b9f7-w2dv2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-apiserver 57m Normal Started pod/apiserver-565b67b9f7-w2dv2 Started container openshift-apiserver-check-endpoints openshift-kube-apiserver-operator 57m Normal OperatorVersionChanged deployment/kube-apiserver-operator clusteroperator/kube-apiserver version "operator" changed from "" to "4.13.0-rc.0" openshift-kube-apiserver-operator 57m Normal OperatorVersionChanged deployment/kube-apiserver-operator clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.26.2" openshift-kube-apiserver-operator 57m Normal RevisionCreate deployment/kube-apiserver-operator Revision 2 created because optional secret/webhook-authenticator has been created 
openshift-kube-apiserver-operator 57m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.13.0-rc.0"}] to [{"raw-internal" "4.13.0-rc.0"} {"kube-apiserver" "1.26.2"} {"operator" "4.13.0-rc.0"}] openshift-apiserver 57m Normal Created pod/apiserver-565b67b9f7-w2dv2 Created container openshift-apiserver openshift-apiserver 57m Normal Created pod/apiserver-565b67b9f7-w2dv2 Created container openshift-apiserver-check-endpoints openshift-apiserver 57m Normal Started pod/apiserver-565b67b9f7-w2dv2 Started container openshift-apiserver openshift-kube-apiserver-operator 57m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" openshift-apiserver 57m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-apiserver 57m Normal Started pod/apiserver-565b67b9f7-wvhp4 Started container openshift-apiserver-check-endpoints openshift-apiserver 57m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-apiserver 57m Normal Pulled pod/apiserver-565b67b9f7-lvnhl Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" in 1.831672002s (1.831685375s including waiting) openshift-apiserver 57m Normal Created pod/apiserver-565b67b9f7-wvhp4 Created container openshift-apiserver-check-endpoints openshift-apiserver 57m Normal Created pod/apiserver-565b67b9f7-lvnhl Created container openshift-apiserver-check-endpoints openshift-apiserver 57m Normal Pulled pod/apiserver-565b67b9f7-wvhp4 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" in 1.968662696s (1.968670334s including waiting) openshift-apiserver 57m Normal Started pod/apiserver-565b67b9f7-lvnhl Started container openshift-apiserver-check-endpoints openshift-apiserver 57m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-apiserver 57m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-apiserver 57m Warning FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-apiserver 57m Warning FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling kube-system 57m Normal CreatedSCCRanges 
pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-node namespace kube-system 57m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift namespace openshift-kube-apiserver-operator 57m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 4 triggered by "required configmap/config has changed" openshift-apiserver-operator 57m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-565b67b9f7-w2dv2 pod, 3 containers are waiting in pending apiserver-565b67b9f7-wvhp4 pod, 3 containers are waiting in pending apiserver-565b67b9f7-lvnhl pod)" to "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-565b67b9f7-w2dv2 pod, container is not ready in apiserver-565b67b9f7-wvhp4 pod, container is not ready in apiserver-565b67b9f7-lvnhl pod)" openshift-cluster-samples-operator 57m Warning FailedMount pod/cluster-samples-operator-bf9b9498c-mkgcp MountVolume.SetUp failed for volume "samples-operator-tls" : secret "samples-operator-tls" not found openshift-etcd 57m Normal Pulled pod/installer-3-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 57m Normal AddedInterface pod/installer-3-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.30/23] from ovn-kubernetes openshift-kube-apiserver 57m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container setup openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.129.0.28:8443/apis/oauth.openshift.io/v1: Get \"https://10.129.0.28:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.129.0.28:8443/apis/oauth.openshift.io/v1: Get \"https://10.129.0.28:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-kube-scheduler-operator 57m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing PodIP in operand openshift-kube-scheduler-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-cluster-samples-operator 57m Normal ScalingReplicaSet deployment/cluster-samples-operator Scaled up replica set cluster-samples-operator-bf9b9498c to 1 openshift-cluster-samples-operator 57m Normal SuccessfulCreate replicaset/cluster-samples-operator-bf9b9498c Created pod: cluster-samples-operator-bf9b9498c-mkgcp openshift-kube-controller-manager-operator 57m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" openshift-kube-scheduler 57m Normal StaticPodInstallerCompleted pod/installer-6-ip-10-0-239-132.ec2.internal Successfully installed revision 6 openshift-kube-scheduler-operator 57m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from 
"GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-etcd-operator 57m Normal PodCreated deployment/etcd-operator Created Pod/installer-3-ip-10-0-140-6.ec2.internal -n openshift-etcd because it was missing openshift-kube-apiserver 57m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" in 8.182944414s (8.18298292s including waiting) openshift-kube-scheduler-operator 57m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-kube-apiserver 57m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container setup openshift-kube-scheduler 57m Normal Pulling pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/revision-status-4 -n openshift-kube-apiserver because it was missing openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.129.0.28:8443/apis/oauth.openshift.io/v1: Get \"https://10.129.0.28:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.129.0.28:8443/apis/oauth.openshift.io/v1: Get \"https://10.129.0.28:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-kube-apiserver 57m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-insecure-readyz openshift-kube-apiserver 57m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 57m Normal AddedInterface pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.49/23] from ovn-kubernetes openshift-kube-apiserver 57m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver openshift-kube-apiserver 57m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-insecure-readyz openshift-cluster-samples-operator 57m Normal AddedInterface pod/cluster-samples-operator-bf9b9498c-mkgcp Add eth0 [10.128.0.31/23] from ovn-kubernetes openshift-kube-apiserver 57m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver openshift-etcd 57m Normal Created pod/installer-3-ip-10-0-140-6.ec2.internal Created container installer openshift-kube-apiserver 57m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 57m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 57m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-syncer openshift-kube-apiserver 57m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-check-endpoints openshift-kube-apiserver 57m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-syncer openshift-etcd 57m Normal Started pod/installer-3-ip-10-0-140-6.ec2.internal Started container installer openshift-kube-apiserver-operator 57m Normal PodCreated deployment/kube-apiserver-operator Created 
Pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 57m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 57m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 57m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver-operator 57m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing PodIP in operand kube-apiserver-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" openshift-kube-apiserver 57m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 57m Normal Created pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Created container guard openshift-apiserver-operator 57m Normal Created deployment/openshift-apiserver-operator Created /v1.project.openshift.io because it was missing openshift-apiserver-operator 57m Normal Created deployment/openshift-apiserver-operator Created /v1.build.openshift.io because it was missing openshift-apiserver-operator 57m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-565b67b9f7-w2dv2 pod, container is not ready in apiserver-565b67b9f7-wvhp4 pod, container is not ready in apiserver-565b67b9f7-lvnhl pod)" to "APIServerDeploymentDegraded: 2 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-565b67b9f7-wvhp4 pod, container is not ready in apiserver-565b67b9f7-lvnhl pod)",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: PreconditionNotReady" openshift-apiserver-operator 57m Normal Created deployment/openshift-apiserver-operator Created /v1.quota.openshift.io because it was missing openshift-apiserver-operator 57m Normal Created deployment/openshift-apiserver-operator Created /v1.authorization.openshift.io because it was missing openshift-apiserver-operator 57m Normal Created deployment/openshift-apiserver-operator Created /v1.apps.openshift.io because it was missing openshift-apiserver-operator 57m Normal Created deployment/openshift-apiserver-operator Created /v1.route.openshift.io because it was missing openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because 
it was missing openshift-kube-apiserver 57m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-check-endpoints openshift-cluster-samples-operator 57m Normal Pulling pod/cluster-samples-operator-bf9b9498c-mkgcp Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3066c35df5c02d6013ee2944ff5d100cdf41fb0d25076ce846d6e094b36d45c" openshift-kube-apiserver 57m Normal Pulled pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 57m Normal Started pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Started container guard openshift-apiserver-operator 57m Normal Created deployment/openshift-apiserver-operator Created /v1.image.openshift.io because it was missing openshift-cluster-samples-operator 57m Normal Pulled pod/cluster-samples-operator-bf9b9498c-mkgcp Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3066c35df5c02d6013ee2944ff5d100cdf41fb0d25076ce846d6e094b36d45c" in 1.931307159s (1.931320734s including waiting) openshift-apiserver-operator 57m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" openshift-apiserver-operator 57m Normal Created deployment/openshift-apiserver-operator Created /v1.template.openshift.io because it was missing openshift-apiserver-operator 57m Normal Created deployment/openshift-apiserver-operator Created /v1.security.openshift.io because it was missing openshift-cluster-samples-operator 57m Normal Created pod/cluster-samples-operator-bf9b9498c-mkgcp Created container cluster-samples-operator openshift-cluster-samples-operator 57m Normal Started pod/cluster-samples-operator-bf9b9498c-mkgcp Started container cluster-samples-operator openshift-cluster-samples-operator 57m Normal 
Pulled pod/cluster-samples-operator-bf9b9498c-mkgcp Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3066c35df5c02d6013ee2944ff5d100cdf41fb0d25076ce846d6e094b36d45c" already present on machine openshift-kube-scheduler-operator 57m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.13.0-rc.0"}] to [{"raw-internal" "4.13.0-rc.0"} {"kube-scheduler" "1.26.2"} {"operator" "4.13.0-rc.0"}] openshift-kube-scheduler-operator 57m Normal OperatorVersionChanged deployment/openshift-kube-scheduler-operator clusteroperator/kube-scheduler version "operator" changed from "" to "4.13.0-rc.0" openshift-kube-scheduler-operator 57m Normal OperatorVersionChanged deployment/openshift-kube-scheduler-operator clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.26.2" openshift-cluster-samples-operator 57m Normal Created pod/cluster-samples-operator-bf9b9498c-mkgcp Created container cluster-samples-operator-watch openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing openshift-cluster-samples-operator 57m Normal Started pod/cluster-samples-operator-bf9b9498c-mkgcp Started container cluster-samples-operator-watch openshift-cluster-samples-operator 57m Normal FileChangeWatchdogStarted deployment/cluster-samples-operator Started watching files for process cluster-samples-operator[7] openshift-kube-scheduler-operator 57m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing PodIP in operand openshift-kube-scheduler-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-etcd-operator 57m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 3\nEtcdMembersAvailable: 1 members are available" to "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 3\nEtcdMembersAvailable: 2 members are available" openshift-authentication-operator 57m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.129.0.28:8443/apis/oauth.openshift.io/v1: Get \"https://10.129.0.28:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/user.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing openshift-apiserver-operator 57m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 2 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-565b67b9f7-wvhp4 pod, container is not ready in apiserver-565b67b9f7-lvnhl pod)" to "All is well" openshift-apiserver-operator 57m Normal OperatorVersionChanged deployment/openshift-apiserver-operator clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.13.0-rc.0" openshift-apiserver-operator 57m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.13.0-rc.0"}] to [{"operator" "4.13.0-rc.0"} {"openshift-apiserver" "4.13.0-rc.0"}] openshift-kube-scheduler 57m Normal AddedInterface pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.30/23] from ovn-kubernetes openshift-kube-scheduler 57m Normal Pulled pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 57m Normal Created pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal Created container guard openshift-kube-scheduler 57m Normal Started pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal Started container guard openshift-kube-scheduler 
57m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container wait-for-host-port openshift-kube-scheduler 57m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" in 6.958660856s (6.958673221s including waiting) openshift-kube-scheduler 57m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container wait-for-host-port openshift-kube-scheduler 57m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler-cert-syncer openshift-kube-scheduler 57m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler openshift-kube-scheduler 57m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 57m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler-cert-syncer openshift-kube-scheduler 57m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler openshift-kube-scheduler 57m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler 57m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-controller-manager-operator 57m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-kube-scheduler 57m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler-recovery-controller openshift-kube-scheduler 57m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler-recovery-controller openshift-kube-scheduler 57m Normal LeaderElection configmap/kube-scheduler 
ip-10-0-239-132_4abbf9ce-c538-4050-8c5b-274abcb94d90 became leader openshift-kube-scheduler 57m Normal LeaderElection lease/kube-scheduler ip-10-0-239-132_4abbf9ce-c538-4050-8c5b-274abcb94d90 became leader openshift-apiserver-operator 57m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-apiserver-operator 57m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing openshift-etcd-operator 57m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "GuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-apiserver 56m Warning Unhealthy pod/kube-apiserver-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:17697/healthz": dial tcp 10.0.197.197:17697: connect: connection refused openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: 
failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-apiserver 56m Warning Unhealthy pod/kube-apiserver-ip-10-0-197-197.ec2.internal Liveness probe failed: Get "https://10.0.197.197:17697/healthz": dial tcp 10.0.197.197:17697: connect: connection refused openshift-kube-apiserver 56m Warning ProbeError pod/kube-apiserver-ip-10-0-197-197.ec2.internal Liveness probe error: Get "https://10.0.197.197:17697/healthz": dial tcp 10.0.197.197:17697: connect: connection refused... openshift-kube-apiserver-operator 56m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 56m Warning ProbeError pod/kube-apiserver-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:17697/healthz": dial tcp 10.0.197.197:17697: connect: connection refused... 
openshift-kube-apiserver-operator 56m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/quota.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-apiserver 56m Warning ProbeError 
pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:6443/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)... openshift-kube-apiserver 56m Warning Unhealthy pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:6443/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) openshift-etcd-operator 56m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-apiserver-operator 56m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-3-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 56m Normal Pulled pod/installer-3-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 56m Normal AddedInterface pod/installer-3-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.50/23] from ovn-kubernetes openshift-kube-apiserver-operator 56m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 56m Normal Created pod/installer-3-ip-10-0-197-197.ec2.internal Created container installer openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/quota.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/quota.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" openshift-kube-apiserver 56m Normal Started pod/installer-3-ip-10-0-197-197.ec2.internal Started container installer openshift-authentication-operator 56m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/user.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection 
refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-kube-apiserver-operator 56m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/quota.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/quota.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" openshift-kube-apiserver-operator 56m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/quota.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or 
missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/quota.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" openshift-kube-apiserver 56m Warning ProbeError pod/kube-apiserver-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:6443/readyz": read tcp 10.0.197.197:43680->10.0.197.197:6443: read: connection reset by peer... openshift-kube-apiserver 56m Warning Unhealthy pod/kube-apiserver-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:6443/readyz": read tcp 10.0.197.197:43680->10.0.197.197:6443: read: connection reset by peer openshift-kube-apiserver 56m Warning Unhealthy pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:6443/readyz": read tcp 10.0.197.197:41548->10.0.197.197:6443: read: connection reset by peer openshift-kube-apiserver 56m Warning ProbeError pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:6443/readyz": read tcp 10.0.197.197:41548->10.0.197.197:6443: read: connection reset by peer... 
kube-system 56m Normal CreatedSCCRanges pod/bootstrap-kube-controller-manager-ip-10-0-8-110 created SCC ranges for openshift-ingress-canary namespace openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/quota.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: 
failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" openshift-kube-apiserver-operator 56m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 56m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get 
\"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" openshift-etcd 56m Normal Killing pod/installer-2-ip-10-0-140-6.ec2.internal Stopping container installer openshift-kube-apiserver-operator 56m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is terminated: Error: alhost\",\nStaticPodsDegraded: \"Attributes\": null,\nStaticPodsDegraded: \"BalancerAttributes\": null,\nStaticPodsDegraded: \"Type\": 0,\nStaticPodsDegraded: \"Metadata\": null\nStaticPodsDegraded: }. 
Err: connection error: desc = \"transport: Error while dialing dial tcp [::1]:2379: connect: connection refused\"\nStaticPodsDegraded: W0321 12:17:05.174138 18 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: \"Addr\": \"localhost:2379\",\nStaticPodsDegraded: \"ServerName\": \"localhost\",\nStaticPodsDegraded: W0321 12:17:11.362773 18 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: W0321 12:17:11.860715 18 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: W0321 12:17:12.625218 18 logging.go:59] [core] [Channel #3 SubChannel #6] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: E0321 12:17:15.655199 18 run.go:74] \"command failed\" err=\"context deadline exceeded\"\nStaticPodsDegraded: I0321 12:17:15.658257 1 main.go:235] Termination finished with exit code 1\nStaticPodsDegraded: I0321 12:17:15.658275 1 main.go:188] Deleting termination lock file \"/var/log/kube-apiserver/.terminating\"\nStaticPodsDegraded: " openshift-kube-apiserver-operator 56m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing openshift-authentication-operator 56m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-kube-apiserver-operator 56m Normal RevisionCreate deployment/kube-apiserver-operator Revision 3 created because required configmap/config has changed openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/route.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get 
\"https://10.128.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/route.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get 
\"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/route.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-apiserver-operator 56m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: 
pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is terminated: Error: alhost\",\nStaticPodsDegraded: \"Attributes\": null,\nStaticPodsDegraded: \"BalancerAttributes\": null,\nStaticPodsDegraded: \"Type\": 0,\nStaticPodsDegraded: \"Metadata\": null\nStaticPodsDegraded: }. Err: connection error: desc = \"transport: Error while dialing dial tcp [::1]:2379: connect: connection refused\"\nStaticPodsDegraded: W0321 12:17:05.174138 18 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: \"Addr\": \"localhost:2379\",\nStaticPodsDegraded: \"ServerName\": \"localhost\",\nStaticPodsDegraded: W0321 12:17:11.362773 18 logging.go:59] [core] [Channel #4 SubChannel #5] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: W0321 12:17:11.860715 18 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: W0321 12:17:12.625218 18 logging.go:59] [core] [Channel #3 SubChannel #6] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: E0321 12:17:15.655199 18 run.go:74] \"command failed\" err=\"context deadline exceeded\"\nStaticPodsDegraded: I0321 12:17:15.658257 1 main.go:235] Termination finished with exit code 1\nStaticPodsDegraded: I0321 12:17:15.658275 1 main.go:188] Deleting termination lock file \"/var/log/kube-apiserver/.terminating\"\nStaticPodsDegraded: " to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/route.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/route.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-controller-manager 56m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-controller-manager 56m Normal Pulling pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" openshift-kube-controller-manager-operator 56m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing PodIP in operand kube-controller-manager-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-kube-controller-manager 56m Normal StaticPodInstallerCompleted pod/installer-4-ip-10-0-239-132.ec2.internal Successfully installed revision 4 openshift-kube-controller-manager-operator 56m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.13.0-rc.0"}] to [{"raw-internal" "4.13.0-rc.0"} {"operator" "4.13.0-rc.0"} {"kube-controller-manager" "1.26.2"}] openshift-kube-controller-manager-operator 56m Normal OperatorVersionChanged deployment/kube-controller-manager-operator clusteroperator/kube-controller-manager version "operator" changed from "" to "4.13.0-rc.0" openshift-kube-controller-manager-operator 56m Normal OperatorVersionChanged deployment/kube-controller-manager-operator clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.26.2" openshift-kube-controller-manager 56m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager openshift-kube-controller-manager 56m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container 
kube-controller-manager openshift-kube-apiserver-operator 56m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" openshift-kube-controller-manager 56m Normal LeaderElection configmap/cluster-policy-controller-lock ip-10-0-239-132_5bc9be88-6e5f-46c2-b962-fad068eef503 became leader openshift-kube-controller-manager 56m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager-cert-syncer openshift-kube-controller-manager 56m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-239-132.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-kube-controller-manager 56m Normal LeaderElection lease/cluster-policy-controller-lock ip-10-0-239-132_5bc9be88-6e5f-46c2-b962-fad068eef503 became leader openshift-kube-controller-manager 56m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 56m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 56m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager-recovery-controller openshift-kube-controller-manager 56m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager-recovery-controller openshift-kube-controller-manager 56m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager-cert-syncer openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing 
response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/route.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/route.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-controller-manager 56m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 56m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container cluster-policy-controller openshift-kube-controller-manager 56m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" in 1.572167883s (1.572182039s including waiting) openshift-kube-controller-manager 56m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 56m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container cluster-policy-controller openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get 
\"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/route.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not 
available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from 
https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-controller-manager-operator 56m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing PodIP in operand kube-controller-manager-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-kube-controller-manager 56m Normal Pulled pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 56m Normal Started pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal Started container guard openshift-kube-controller-manager 56m Normal Created pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal Created container guard openshift-kube-controller-manager 56m Normal AddedInterface pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.31/23] from ovn-kubernetes openshift-authentication-operator 56m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while 
awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/user.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-kube-apiserver 56m Warning Unhealthy pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:6443/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) openshift-kube-apiserver 56m Warning ProbeError pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:6443/readyz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)... openshift-kube-apiserver 56m Warning ProbeError pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:6443/readyz": read tcp 10.0.197.197:59348->10.0.197.197:6443: read: connection reset by peer... 
openshift-kube-apiserver 56m Warning Unhealthy pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:6443/readyz": read tcp 10.0.197.197:59348->10.0.197.197:6443: read: connection reset by peer openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-apiserver-operator 56m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: 
pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is terminated: Error: alhost\",\nStaticPodsDegraded: \"Attributes\": null,\nStaticPodsDegraded: \"BalancerAttributes\": null,\nStaticPodsDegraded: \"Type\": 0,\nStaticPodsDegraded: \"Metadata\": null\nStaticPodsDegraded: }. Err: connection error: desc = \"transport: Error while dialing dial tcp [::1]:2379: connect: connection refused\"\nStaticPodsDegraded: W0321 12:17:27.230640 18 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: \"Addr\": \"localhost:2379\",\nStaticPodsDegraded: \"ServerName\": \"localhost\",\nStaticPodsDegraded: W0321 12:17:32.665930 18 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: W0321 12:17:34.018366 18 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: W0321 12:17:34.273829 18 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: E0321 12:17:36.549053 18 run.go:74] \"command failed\" err=\"context deadline exceeded\"\nStaticPodsDegraded: I0321 12:17:36.554994 1 main.go:235] Termination finished with exit code 1\nStaticPodsDegraded: I0321 12:17:36.555023 1 main.go:188] Deleting termination lock file \"/var/log/kube-apiserver/.terminating\"\nStaticPodsDegraded: " openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get 
\"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/authorization.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/authorization.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/authorization.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-controller-manager-operator 56m Normal 
OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 4",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 4") openshift-kube-controller-manager-operator 56m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 0 to 4 because static pod is ready openshift-kube-apiserver 56m Normal Killing pod/installer-3-ip-10-0-197-197.ec2.internal Stopping container installer openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/authorization.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get 
\"https://10.129.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/authorization.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" openshift-cluster-csi-drivers 56m Normal LeaderElection lease/external-resizer-ebs-csi-aws-com aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp became leader openshift-kube-apiserver-operator 56m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is terminated: Error: alhost\",\nStaticPodsDegraded: \"Attributes\": null,\nStaticPodsDegraded: \"BalancerAttributes\": null,\nStaticPodsDegraded: \"Type\": 0,\nStaticPodsDegraded: \"Metadata\": null\nStaticPodsDegraded: }. Err: connection error: desc = \"transport: Error while dialing dial tcp [::1]:2379: connect: connection refused\"\nStaticPodsDegraded: W0321 12:17:27.230640 18 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: \"Addr\": \"localhost:2379\",\nStaticPodsDegraded: \"ServerName\": \"localhost\",\nStaticPodsDegraded: W0321 12:17:32.665930 18 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: W0321 12:17:34.018366 18 logging.go:59] [core] [Channel #3 SubChannel #5] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: W0321 12:17:34.273829 18 logging.go:59] [core] [Channel #4 SubChannel #6] grpc: addrConn.createTransport failed to connect to {\nStaticPodsDegraded: E0321 12:17:36.549053 18 run.go:74] \"command failed\" err=\"context deadline exceeded\"\nStaticPodsDegraded: I0321 12:17:36.554994 1 main.go:235] Termination finished with exit code 1\nStaticPodsDegraded: I0321 12:17:36.555023 1 main.go:188] Deleting termination lock file \"/var/log/kube-apiserver/.terminating\"\nStaticPodsDegraded: " to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Error: W0321 12:17:38.417765 1 cmd.go:213] Using insecure, self-signed certificates\nStaticPodsDegraded: I0321 12:17:38.418020 1 crypto.go:601] Generating new CA for check-endpoints-signer@1679401058 cert, and key in /tmp/serving-cert-3510368061/serving-signer.crt, /tmp/serving-cert-3510368061/serving-signer.key\nStaticPodsDegraded: I0321 12:17:38.603508 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0321 12:17:38.605227 1 builder.go:230] unable to get owner reference (falling back to namespace): Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-197-197.ec2.internal\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0321 12:17:38.605370 1 builder.go:262] check-endpoints version 4.13.0-202303180002.p0.g5cec361.assembly.stream-5cec361-5cec361179f3658986890a87d0b51f40a1da89ad\nStaticPodsDegraded: I0321 12:17:38.605840 1 dynamic_serving_content.go:113] \"Loaded a new cert/key pair\" 
name=\"serving-cert::/tmp/serving-cert-3510368061/tls.crt::/tmp/serving-cert-3510368061/tls.key\"\nStaticPodsDegraded: F0321 12:17:38.824292 1 cmd.go:138] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: " openshift-authentication-operator 56m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/user.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/user.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
openshift-kube-apiserver 56m Warning Unhealthy pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:6443/readyz": dial tcp 10.0.197.197:6443: connect: connection refused openshift-kube-controller-manager-operator 56m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-140-6.ec2.internal" from revision 0 to 4 because node ip-10-0-140-6.ec2.internal static pod not found openshift-authentication-operator 56m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/user.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.129.0.28:8443/apis/user.openshift.io/v1: Get \"https://10.129.0.28:8443/apis/user.openshift.io/v1\": dial tcp 10.129.0.28:8443: i/o timeout\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
openshift-kube-apiserver-operator 56m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-4-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 56m Normal AddedInterface pod/installer-4-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.51/23] from ovn-kubernetes openshift-kube-apiserver 56m Normal Started pod/installer-4-ip-10-0-197-197.ec2.internal Started container installer openshift-kube-apiserver 56m Normal Created pod/installer-4-ip-10-0-197-197.ec2.internal Created container installer openshift-kube-apiserver 56m Normal Pulled pod/installer-4-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-cluster-csi-drivers 56m Normal LeaderElection lease/external-snapshotter-leader-ebs-csi-aws-com aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp became leader openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/apps.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/authorization.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: 
failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-controller-manager-operator 56m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-4-ip-10-0-140-6.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response 
from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-controller-manager 56m Normal Pulling pod/installer-4-ip-10-0-140-6.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" 
openshift-kube-controller-manager 56m Normal AddedInterface pod/installer-4-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.32/23] from ovn-kubernetes openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response 
from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-controller-manager 56m Normal Pulled pod/installer-4-ip-10-0-140-6.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" in 1.753402744s (1.753408325s including waiting) openshift-kube-controller-manager 56m Normal Started pod/installer-4-ip-10-0-140-6.ec2.internal Started container installer openshift-kube-controller-manager 56m Normal Created pod/installer-4-ip-10-0-140-6.ec2.internal Created container installer openshift-kube-apiserver-operator 56m Normal 
OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Error: W0321 12:17:38.417765 1 cmd.go:213] Using insecure, self-signed certificates\nStaticPodsDegraded: I0321 12:17:38.418020 1 crypto.go:601] Generating new CA for check-endpoints-signer@1679401058 cert, and key in /tmp/serving-cert-3510368061/serving-signer.crt, /tmp/serving-cert-3510368061/serving-signer.key\nStaticPodsDegraded: I0321 12:17:38.603508 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0321 12:17:38.605227 1 builder.go:230] unable to get owner reference (falling back to namespace): Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-197-197.ec2.internal\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0321 12:17:38.605370 1 builder.go:262] check-endpoints version 4.13.0-202303180002.p0.g5cec361.assembly.stream-5cec361-5cec361179f3658986890a87d0b51f40a1da89ad\nStaticPodsDegraded: I0321 12:17:38.605840 1 dynamic_serving_content.go:113] \"Loaded a new cert/key pair\" name=\"serving-cert::/tmp/serving-cert-3510368061/tls.crt::/tmp/serving-cert-3510368061/tls.key\"\nStaticPodsDegraded: F0321 12:17:38.824292 1 cmd.go:138] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: " to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)" openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not 
available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/project.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get 
\"https://10.129.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-scheduler-operator 56m Normal NodeCurrentRevisionChanged deployment/openshift-kube-scheduler-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 0 to 6 because static pod is ready openshift-kube-scheduler-operator 56m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 6",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 6") openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not 
available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get 
\"https://10.129.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-apiserver-operator 56m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-authentication-operator 56m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.129.0.28:8443/apis/user.openshift.io/v1: Get \"https://10.129.0.28:8443/apis/user.openshift.io/v1\": dial tcp 10.129.0.28:8443: i/o timeout\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get 
\"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/user.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-authentication-operator 56m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.129.0.28:8443/apis/user.openshift.io/v1: Get \"https://10.129.0.28:8443/apis/user.openshift.io/v1\": dial tcp 10.129.0.28:8443: i/o timeout\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.129.0.28:8443/apis/user.openshift.io/v1: Get \"https://10.129.0.28:8443/apis/user.openshift.io/v1\": dial tcp 10.129.0.28:8443: i/o timeout\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-kube-controller-manager-operator 56m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-kube-scheduler-operator 56m Normal NodeTargetRevisionChanged deployment/openshift-kube-scheduler-operator Updating node "ip-10-0-140-6.ec2.internal" from revision 0 to 6 because node ip-10-0-140-6.ec2.internal static pod not found openshift-etcd 56m Normal StaticPodInstallerCompleted pod/installer-3-ip-10-0-140-6.ec2.internal Successfully installed revision 3 openshift-etcd-operator 56m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "GuardControllerDegraded: [Missing PodIP in operand etcd-ip-10-0-140-6.ec2.internal on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd 56m Normal Pulling pod/etcd-ip-10-0-140-6.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" openshift-kube-scheduler-operator 56m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-6-ip-10-0-140-6.ec2.internal -n openshift-kube-scheduler because it was missing openshift-kube-scheduler 56m Normal Started 
pod/installer-6-ip-10-0-140-6.ec2.internal Started container installer openshift-kube-scheduler 56m Normal Pulled pod/installer-6-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-etcd 56m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" in 1.608938958s (1.608951202s including waiting) openshift-kube-scheduler 56m Normal AddedInterface pod/installer-6-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.33/23] from ovn-kubernetes openshift-etcd 56m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container setup openshift-etcd 56m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 56m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-ensure-env-vars openshift-etcd 56m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container setup openshift-kube-scheduler 56m Normal Created pod/installer-6-ip-10-0-140-6.ec2.internal Created container installer openshift-etcd 56m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-ensure-env-vars openshift-etcd 56m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded 
while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not 
available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-apiserver-operator 56m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/build.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-etcd 56m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 56m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 56m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcdctl openshift-etcd 56m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-resources-copy openshift-etcd 56m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcdctl openshift-etcd 56m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-resources-copy openshift-etcd 56m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd openshift-etcd 56m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd openshift-etcd 56m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd-operator 56m Normal MemberAddAsLearner deployment/etcd-operator successfully added new member https://10.0.140.6:2380 openshift-etcd 56m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-metrics openshift-etcd 56m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd-operator 56m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nGuardControllerDegraded: [Missing PodIP in operand etcd-ip-10-0-140-6.ec2.internal on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]") openshift-etcd 56m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-readyz openshift-etcd 56m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-readyz openshift-etcd 56m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-metrics openshift-etcd-operator 56m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nGuardControllerDegraded: [Missing PodIP in operand etcd-ip-10-0-140-6.ec2.internal on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: [Missing PodIP in operand etcd-ip-10-0-140-6.ec2.internal on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-etcd-operator 56m Normal RevisionTriggered deployment/etcd-operator new revision 4 triggered by "configmap/etcd-endpoints has changed" 
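The run of openshift-etcd events above (the setup/etcdctl/etcd containers starting on ip-10-0-140-6, MemberAddAsLearner, and the operator's Degraded and RevisionTriggered updates) is the same data one could pull programmatically; a minimal sketch, assuming the official `kubernetes` Python client and a kubeconfig with read access to this cluster:

```python
# Sketch: list events in the openshift-etcd namespace, oldest first.
# Assumes the official "kubernetes" Python client (pip install kubernetes)
# and a kubeconfig that can read events in this cluster.
from kubernetes import client, config

config.load_kube_config()          # use load_incluster_config() when running in-cluster
core = client.CoreV1Api()

events = core.list_namespaced_event("openshift-etcd")
for ev in sorted(events.items, key=lambda e: e.metadata.creation_timestamp):
    obj = f"{ev.involved_object.kind.lower()}/{ev.involved_object.name}"
    print(ev.type, ev.reason, obj, ev.message)
```

The equivalent `oc get events -n openshift-etcd --sort-by=.metadata.creationTimestamp` produces rows in the same NAMESPACE/TYPE/REASON/OBJECT/MESSAGE shape as this dump.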
openshift-etcd-operator 56m Warning UnhealthyEtcdMember deployment/etcd-operator unhealthy members: NAME-PENDING-10.0.140.6 openshift-kube-controller-manager-operator 56m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-authentication-operator 56m Normal ObserveStorageUpdated deployment/authentication-operator Updated storage urls to https://10.0.140.6:2379,https://10.0.239.132:2379 openshift-etcd-operator 56m Normal ConfigMapUpdated deployment/etcd-operator Updated ConfigMap/etcd-endpoints -n openshift-etcd:... openshift-etcd-operator 56m Normal MemberPromote deployment/etcd-operator successfully promoted learner member https://10.0.140.6:2380 openshift-kube-apiserver-operator 56m Normal ObservedConfigChanged deployment/kube-apiserver-operator Writing updated observed config:   map[string]any{... openshift-kube-apiserver-operator 56m Normal ObserveStorageUpdated deployment/kube-apiserver-operator Updated storage urls to https://10.0.140.6:2379,https://10.0.239.132:2379,https://localhost:2379 openshift-etcd-operator 56m Warning UnstartedEtcdMember deployment/etcd-operator unstarted members: NAME-PENDING-10.0.140.6 openshift-authentication-operator 56m Normal ObservedConfigChanged deployment/authentication-operator Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{\n+\u00a0\t\t\tstring(\"https://10.0.140.6:2379\"),\n\u00a0\u00a0\t\t\tstring(\"https://10.0.239.132:2379\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"tls-cipher-suites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n\u00a0\u00a0\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" openshift-oauth-apiserver 55m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-9b9694fdc to 1 from 0 openshift-oauth-apiserver 55m Normal SuccessfulCreate replicaset/apiserver-9b9694fdc Created pod: apiserver-9b9694fdc-kb6ks openshift-apiserver-operator 55m Normal ObserveStorageUpdated deployment/openshift-apiserver-operator Updated storage urls to https://10.0.140.6:2379,https://10.0.239.132:2379 openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from 
"APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: 
not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/user.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/user.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-oauth-apiserver 55m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-89645c77 to 2 from 3 openshift-oauth-apiserver 55m Normal SuccessfulDelete replicaset/apiserver-89645c77 Deleted pod: apiserver-89645c77-26sj6 openshift-etcd-operator 55m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand etcd-ip-10-0-140-6.ec2.internal on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" openshift-apiserver-operator 55m Normal ObservedConfigChanged deployment/openshift-apiserver-operator Writing updated observed config:   map[string]any{... 
openshift-oauth-apiserver 55m Normal Killing pod/apiserver-89645c77-26sj6 Stopping container oauth-apiserver openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2.") openshift-etcd 55m Normal Created pod/etcd-guard-ip-10-0-140-6.ec2.internal Created container guard openshift-etcd 55m Normal Pulled pod/etcd-guard-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 55m Normal AddedInterface pod/etcd-guard-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.34/23] from ovn-kubernetes openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/revision-status-4 -n openshift-etcd because it was missing openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": dial tcp 10.128.0.29:8443: i/o timeout\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: 
failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" openshift-etcd 55m Normal Started pod/etcd-guard-ip-10-0-140-6.ec2.internal Started container guard openshift-cluster-storage-operator 55m Warning FastControllerResync deployment/cluster-storage-operator Controller "DefaultStorageClassController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 55m Warning FastControllerResync deployment/cluster-storage-operator Controller "SnapshotCRDController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 55m Warning FastControllerResync deployment/cluster-storage-operator Controller "VSphereProblemDetectorStarter" resync interval is set to 0s which might lead to client request throttling openshift-oauth-apiserver 55m Normal Pulled pod/apiserver-9b9694fdc-kb6ks Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-cluster-storage-operator 55m Normal LeaderElection configmap/cluster-storage-operator-lock cluster-storage-operator-fb5868667-cclnx_c36d1741-b611-4e39-95cc-ca0aef08883f became leader openshift-cluster-storage-operator 55m Normal LeaderElection lease/cluster-storage-operator-lock cluster-storage-operator-fb5868667-cclnx_c36d1741-b611-4e39-95cc-ca0aef08883f became leader openshift-oauth-apiserver 55m Normal AddedInterface pod/apiserver-9b9694fdc-kb6ks Add eth0 [10.130.0.52/23] from ovn-kubernetes openshift-cluster-storage-operator 55m Warning FastControllerResync deployment/cluster-storage-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 55m Warning FastControllerResync deployment/cluster-storage-operator Controller "CSIDriverStarter" resync interval is set to 0s which might lead to client request throttling openshift-oauth-apiserver 55m Normal Pulled pod/apiserver-9b9694fdc-kb6ks Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-pod-4 -n openshift-etcd because it was missing openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2." 
to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/user.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
openshift-oauth-apiserver 55m Normal Started pod/apiserver-9b9694fdc-kb6ks Started container oauth-apiserver openshift-oauth-apiserver 55m Normal Created pod/apiserver-9b9694fdc-kb6ks Created container oauth-apiserver openshift-oauth-apiserver 55m Normal Started pod/apiserver-9b9694fdc-kb6ks Started container fix-audit-permissions openshift-oauth-apiserver 55m Normal Created pod/apiserver-9b9694fdc-kb6ks Created container fix-audit-permissions openshift-apiserver 55m Normal Killing pod/apiserver-565b67b9f7-w2dv2 Stopping container openshift-apiserver openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4.") openshift-apiserver 55m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-6977bc9f6b to 1 from 0 openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." openshift-apiserver 55m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-565b67b9f7 to 2 from 3 openshift-apiserver 55m Normal SuccessfulCreate replicaset/apiserver-6977bc9f6b Created pod: apiserver-6977bc9f6b-6c47k openshift-apiserver 55m Normal SuccessfulDelete replicaset/apiserver-565b67b9f7 Deleted pod: apiserver-565b67b9f7-w2dv2 openshift-apiserver 55m Normal Killing pod/apiserver-565b67b9f7-w2dv2 Stopping container openshift-apiserver-check-endpoints openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-serving-ca-4 -n openshift-etcd because it was missing openshift-kube-apiserver-operator 55m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/revision-status-5 -n openshift-kube-apiserver because it was missing openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-peer-client-ca-4 -n openshift-etcd because it was missing openshift-kube-apiserver-operator 55m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Error: W0321 12:17:49.463275 1 cmd.go:213] Using insecure, self-signed certificates\nStaticPodsDegraded: I0321 
12:17:49.463453 1 crypto.go:601] Generating new CA for check-endpoints-signer@1679401069 cert, and key in /tmp/serving-cert-1496520025/serving-signer.crt, /tmp/serving-cert-1496520025/serving-signer.key\nStaticPodsDegraded: I0321 12:17:49.782269 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0321 12:17:59.784963 1 builder.go:230] unable to get owner reference (falling back to namespace): Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-197-197.ec2.internal\": net/http: TLS handshake timeout\nStaticPodsDegraded: I0321 12:17:59.785101 1 builder.go:262] check-endpoints version 4.13.0-202303180002.p0.g5cec361.assembly.stream-5cec361-5cec361179f3658986890a87d0b51f40a1da89ad\nStaticPodsDegraded: I0321 12:17:59.785646 1 dynamic_serving_content.go:113] \"Loaded a new cert/key pair\" name=\"serving-cert::/tmp/serving-cert-1496520025/tls.crt::/tmp/serving-cert-1496520025/tls.key\"\nStaticPodsDegraded: F0321 12:18:09.097210 1 cmd.go:138] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\": dial tcp [::1]:6443: connect: connection refused - error from a previous attempt: read tcp [::1]:58698->[::1]:6443: read: connection reset by peer\nStaticPodsDegraded: " openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/apps.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": dial tcp 10.128.0.29:8443: i/o timeout\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/authorization.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get 
\"https://10.128.0.29:8443/apis/security.openshift.io/v1\": dial tcp 10.128.0.29:8443: i/o timeout\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" openshift-oauth-apiserver 55m Normal SuccessfulCreate replicaset/apiserver-9b9694fdc Created pod: apiserver-9b9694fdc-sl5wc openshift-oauth-apiserver 55m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-9b9694fdc to 2 from 1 openshift-kube-apiserver-operator 55m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Error: W0321 12:17:49.463275 1 cmd.go:213] Using insecure, self-signed certificates\nStaticPodsDegraded: I0321 12:17:49.463453 1 crypto.go:601] Generating new CA for check-endpoints-signer@1679401069 cert, and key in /tmp/serving-cert-1496520025/serving-signer.crt, /tmp/serving-cert-1496520025/serving-signer.key\nStaticPodsDegraded: I0321 12:17:49.782269 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0321 12:17:59.784963 1 builder.go:230] unable to get owner reference (falling back to namespace): Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-197-197.ec2.internal\": net/http: TLS handshake timeout\nStaticPodsDegraded: I0321 12:17:59.785101 1 builder.go:262] check-endpoints version 4.13.0-202303180002.p0.g5cec361.assembly.stream-5cec361-5cec361179f3658986890a87d0b51f40a1da89ad\nStaticPodsDegraded: I0321 12:17:59.785646 1 dynamic_serving_content.go:113] \"Loaded a new cert/key pair\" name=\"serving-cert::/tmp/serving-cert-1496520025/tls.crt::/tmp/serving-cert-1496520025/tls.key\"\nStaticPodsDegraded: F0321 12:18:09.097210 1 cmd.go:138] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\": dial tcp [::1]:6443: connect: connection refused - error from a previous attempt: read tcp [::1]:58698->[::1]:6443: read: connection reset by peer\nStaticPodsDegraded: " to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Error: W0321 12:17:49.463275 1 cmd.go:213] Using insecure, 
self-signed certificates\nStaticPodsDegraded: I0321 12:17:49.463453 1 crypto.go:601] Generating new CA for check-endpoints-signer@1679401069 cert, and key in /tmp/serving-cert-1496520025/serving-signer.crt, /tmp/serving-cert-1496520025/serving-signer.key\nStaticPodsDegraded: I0321 12:17:49.782269 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0321 12:17:59.784963 1 builder.go:230] unable to get owner reference (falling back to namespace): Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-197-197.ec2.internal\": net/http: TLS handshake timeout\nStaticPodsDegraded: I0321 12:17:59.785101 1 builder.go:262] check-endpoints version 4.13.0-202303180002.p0.g5cec361.assembly.stream-5cec361-5cec361179f3658986890a87d0b51f40a1da89ad\nStaticPodsDegraded: I0321 12:17:59.785646 1 dynamic_serving_content.go:113] \"Loaded a new cert/key pair\" name=\"serving-cert::/tmp/serving-cert-1496520025/tls.crt::/tmp/serving-cert-1496520025/tls.key\"\nStaticPodsDegraded: F0321 12:18:09.097210 1 cmd.go:138] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\": dial tcp [::1]:6443: connect: connection refused - error from a previous attempt: read tcp [::1]:58698->[::1]:6443: read: connection reset by peer\nStaticPodsDegraded: " openshift-oauth-apiserver 55m Normal SuccessfulDelete replicaset/apiserver-89645c77 Deleted pod: apiserver-89645c77-fdwmw openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" openshift-oauth-apiserver 55m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-89645c77 to 1 from 2 openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 3, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." 
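The StaticPodsDegraded text above embeds the kube-apiserver container's CrashLoopBackOff state and the check-endpoints container log; the same condition can be spotted from pod status directly, as in this sketch (same assumptions as the earlier snippets):

```python
# Sketch: flag containers in openshift-kube-apiserver that are waiting in
# CrashLoopBackOff, the state reported by StaticPodsDegraded above.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for pod in core.list_namespaced_pod("openshift-kube-apiserver").items:
    for cs in (pod.status.container_statuses or []):
        waiting = cs.state.waiting
        if waiting and waiting.reason == "CrashLoopBackOff":
            print(pod.metadata.name, cs.name, waiting.message)
```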
openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-serving-ca-4 -n openshift-etcd because it was missing openshift-kube-apiserver-operator 55m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-pod-5 -n openshift-kube-apiserver because it was missing openshift-oauth-apiserver 55m Normal Killing pod/apiserver-89645c77-fdwmw Stopping container oauth-apiserver openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation" openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-client-ca-4 -n openshift-etcd because it was missing openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-endpoints-4 -n openshift-etcd because it was missing openshift-kube-apiserver-operator 55m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/config-5 -n openshift-kube-apiserver because it was missing openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/authorization.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": dial tcp 10.128.0.29:8443: i/o timeout\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/authorization.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get 
\"https://10.129.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" openshift-oauth-apiserver 55m Normal Created pod/apiserver-9b9694fdc-sl5wc Created container fix-audit-permissions openshift-oauth-apiserver 55m Normal Created pod/apiserver-9b9694fdc-sl5wc Created container oauth-apiserver openshift-oauth-apiserver 55m Normal Pulled pod/apiserver-9b9694fdc-sl5wc Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-oauth-apiserver 55m Normal Started pod/apiserver-9b9694fdc-sl5wc Started container fix-audit-permissions openshift-oauth-apiserver 55m Normal Pulled pod/apiserver-9b9694fdc-sl5wc Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-etcd-operator 55m Normal RevisionTriggered deployment/etcd-operator new revision 5 triggered by "configmap/etcd-pod has changed" openshift-oauth-apiserver 55m Normal Started pod/apiserver-9b9694fdc-sl5wc Started container oauth-apiserver openshift-etcd-operator 55m Normal RevisionCreate deployment/etcd-operator Revision 3 created because configmap/etcd-endpoints has changed openshift-kube-apiserver-operator 55m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-5 -n openshift-kube-apiserver because it was missing openshift-etcd-operator 55m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-all-certs-4 -n openshift-etcd because it was missing openshift-oauth-apiserver 55m Normal AddedInterface pod/apiserver-9b9694fdc-sl5wc Add eth0 [10.128.0.35/23] from ovn-kubernetes openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/revision-status-5 -n openshift-etcd because it was missing openshift-etcd-operator 55m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 3\nEtcdMembersAvailable: 2 members are available" to "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 4\nEtcdMembersAvailable: 2 members are available" openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
openshift-kube-apiserver-operator 55m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/bound-sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/authorization.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/template.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/template.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from 
https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/authorization.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-pod-5 -n openshift-etcd because it was missing openshift-apiserver 55m Warning ProbeError pod/apiserver-565b67b9f7-w2dv2 Readiness probe error: HTTP probe failed with statuscode: 500... 
openshift-etcd-operator 55m Normal PodCreated deployment/etcd-operator Created Pod/installer-4-ip-10-0-140-6.ec2.internal -n openshift-etcd because it was missing openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.47:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.47:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
openshift-kube-apiserver-operator 55m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/etcd-serving-ca-5 -n openshift-kube-apiserver because it was missing openshift-apiserver 55m Warning Unhealthy pod/apiserver-565b67b9f7-w2dv2 Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-etcd 55m Normal Started pod/installer-4-ip-10-0-140-6.ec2.internal Started container installer openshift-etcd 55m Normal Created pod/installer-4-ip-10-0-140-6.ec2.internal Created container installer openshift-kube-apiserver-operator 55m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-server-ca-5 -n openshift-kube-apiserver because it was missing openshift-etcd 55m Normal Pulled pod/installer-4-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-oauth-apiserver 55m Normal Killing pod/apiserver-89645c77-szcw6 Stopping container oauth-apiserver openshift-oauth-apiserver 55m Normal SuccessfulCreate replicaset/apiserver-9b9694fdc Created pod: apiserver-9b9694fdc-g7gxw openshift-oauth-apiserver 55m Normal SuccessfulDelete replicaset/apiserver-89645c77 Deleted pod: apiserver-89645c77-szcw6 openshift-oauth-apiserver 55m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-89645c77 to 0 from 1 openshift-oauth-apiserver 55m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-9b9694fdc to 3 from 2 openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-serving-ca-5 -n openshift-etcd because it was missing openshift-etcd 55m Normal AddedInterface pod/installer-4-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.36/23] from ovn-kubernetes openshift-kube-apiserver-operator 55m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Error: W0321 12:17:49.463275 1 cmd.go:213] Using insecure, self-signed certificates\nStaticPodsDegraded: I0321 12:17:49.463453 1 crypto.go:601] Generating new CA for check-endpoints-signer@1679401069 cert, and key in /tmp/serving-cert-1496520025/serving-signer.crt, /tmp/serving-cert-1496520025/serving-signer.key\nStaticPodsDegraded: I0321 12:17:49.782269 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0321 12:17:59.784963 1 builder.go:230] unable to get owner reference (falling back to namespace): Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-197-197.ec2.internal\": net/http: TLS handshake timeout\nStaticPodsDegraded: I0321 12:17:59.785101 1 builder.go:262] check-endpoints version 4.13.0-202303180002.p0.g5cec361.assembly.stream-5cec361-5cec361179f3658986890a87d0b51f40a1da89ad\nStaticPodsDegraded: I0321 12:17:59.785646 1 dynamic_serving_content.go:113] \"Loaded a 
new cert/key pair\" name=\"serving-cert::/tmp/serving-cert-1496520025/tls.crt::/tmp/serving-cert-1496520025/tls.key\"\nStaticPodsDegraded: F0321 12:18:09.097210 1 cmd.go:138] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\": dial tcp [::1]:6443: connect: connection refused - error from a previous attempt: read tcp [::1]:58698->[::1]:6443: read: connection reset by peer\nStaticPodsDegraded: " to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)" openshift-oauth-apiserver 55m Warning ProbeError pod/apiserver-89645c77-szcw6 Readiness probe error: Get "https://10.129.0.28:8443/readyz": dial tcp 10.129.0.28:8443: connect: connection refused... default 55m Normal NodeAllocatableEnforced node/ip-10-0-160-152.ec2.internal Updated Node Allocatable limit across pods default 55m Normal NodeHasSufficientPID node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal status is now: NodeHasSufficientPID default 55m Normal NodeHasNoDiskPressure node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal status is now: NodeHasNoDiskPressure default 55m Normal Starting node/ip-10-0-160-152.ec2.internal Starting kubelet. 
default 55m Warning Status degraded clusteroperator/machine-api minimum worker replica count (2) not yet met: current running replicas 0, waiting for [qeaisrhods-c13-28wr5-worker-us-east-1a-cp7f7 qeaisrhods-c13-28wr5-worker-us-east-1a-tfwzm] openshift-kube-controller-manager 55m Normal StaticPodInstallerCompleted pod/installer-4-ip-10-0-140-6.ec2.internal Successfully installed revision 4 openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-peer-client-ca-5 -n openshift-etcd because it was missing openshift-oauth-apiserver 55m Warning Unhealthy pod/apiserver-89645c77-szcw6 Readiness probe failed: Get "https://10.129.0.28:8443/readyz": dial tcp 10.129.0.28:8443: connect: connection refused default 55m Normal NodeHasSufficientMemory node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal status is now: NodeHasSufficientMemory openshift-ovn-kubernetes 55m Normal SuccessfulCreate daemonset/ovnkube-node Created pod: ovnkube-node-8sb9g openshift-kube-controller-manager 55m Normal Pulling pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" openshift-multus 55m Normal SuccessfulCreate daemonset/multus Created pod: multus-d7w6w openshift-kube-apiserver-operator 55m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kubelet-serving-ca-5 -n openshift-kube-apiserver because it was missing openshift-multus 55m Normal SuccessfulCreate daemonset/multus-additional-cni-plugins Created pod: multus-additional-cni-plugins-j5mgq openshift-multus 55m Normal SuccessfulCreate daemonset/network-metrics-daemon Created pod: network-metrics-daemon-74bvc openshift-kube-scheduler-operator 55m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-machine-config-operator 55m Normal SuccessfulCreate daemonset/machine-config-daemon Created pod: machine-config-daemon-w98lz openshift-network-diagnostics 55m Normal SuccessfulCreate daemonset/network-check-target Created pod: network-check-target-w7m4g openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection 
refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 1 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-cluster-csi-drivers 55m Normal SuccessfulCreate daemonset/aws-ebs-csi-driver-node Created pod: aws-ebs-csi-driver-node-2p86w openshift-dns 55m Normal SuccessfulCreate daemonset/node-resolver Created pod: node-resolver-f7qjl openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-9b9694fdc-g7gxw pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not 
available: failing or missing response from https://10.130.0.48:8443/apis/authorization.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/authorization.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get 
\"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-controller-manager-operator 55m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing PodIP in operand kube-controller-manager-ip-10-0-140-6.ec2.internal on node ip-10-0-140-6.ec2.internal]" default 55m Warning ErrorReconcilingNode node/ip-10-0-160-152.ec2.internal nodeAdd: error adding node "ip-10-0-160-152.ec2.internal": could not find "k8s.ovn.org/node-subnets" annotation openshift-cluster-node-tuning-operator 55m Normal SuccessfulCreate daemonset/tuned Created pod: tuned-t8kzn openshift-machine-config-operator 55m Normal Pulling pod/machine-config-daemon-w98lz Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" openshift-apiserver 55m Warning Unhealthy pod/apiserver-565b67b9f7-w2dv2 Readiness probe failed: Get "https://10.130.0.48:8443/readyz": dial tcp 10.130.0.48:8443: connect: connection refused openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-serving-ca-5 -n openshift-etcd because it was missing openshift-multus 55m Normal Pulling pod/multus-additional-cni-plugins-j5mgq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" openshift-apiserver 55m Warning ProbeError pod/apiserver-565b67b9f7-w2dv2 Readiness probe error: Get "https://10.130.0.48:8443/readyz": dial tcp 10.130.0.48:8443: connect: connection refused... openshift-machine-config-operator 55m Normal Created pod/machine-config-daemon-w98lz Created container machine-config-daemon openshift-oauth-apiserver 55m Normal Pulled pod/apiserver-9b9694fdc-g7gxw Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-network-diagnostics 55m Warning ErrorUpdatingResource pod/network-check-target-w7m4g addLogicalPort failed for openshift-network-diagnostics/network-check-target-w7m4g: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-160-152.ec2.internal" openshift-cluster-node-tuning-operator 55m Normal Pulling pod/tuned-t8kzn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" openshift-oauth-apiserver 55m Normal Created pod/apiserver-9b9694fdc-g7gxw Created container oauth-apiserver openshift-dns 55m Normal Pulling pod/node-resolver-f7qjl Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" openshift-multus 55m Normal Pulling pod/multus-d7w6w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" openshift-machine-config-operator 55m Normal Started pod/machine-config-daemon-w98lz Started container machine-config-daemon openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.130.0.48:8443/apis/image.openshift.io/v1: Get \"https://10.130.0.48:8443/apis/image.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-multus 55m Warning ErrorUpdatingResource pod/network-metrics-daemon-74bvc addLogicalPort failed for openshift-multus/network-metrics-daemon-74bvc: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-160-152.ec2.internal" openshift-oauth-apiserver 55m Normal Started pod/apiserver-9b9694fdc-g7gxw Started container fix-audit-permissions openshift-cluster-csi-drivers 55m Normal Pulling pod/aws-ebs-csi-driver-node-2p86w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" openshift-ovn-kubernetes 55m Normal Pulling pod/ovnkube-node-8sb9g Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-oauth-apiserver 55m Normal Pulled pod/apiserver-9b9694fdc-g7gxw Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-oauth-apiserver 55m Normal Started pod/apiserver-9b9694fdc-g7gxw Started container oauth-apiserver openshift-kube-apiserver 55m Normal StaticPodInstallerCompleted pod/installer-4-ip-10-0-197-197.ec2.internal Successfully installed revision 4 openshift-kube-apiserver-operator 55m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/sa-token-signing-certs-5 -n openshift-kube-apiserver because it was missing openshift-machine-config-operator 55m Normal Pulled pod/machine-config-daemon-w98lz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-oauth-apiserver 55m Normal Created pod/apiserver-9b9694fdc-g7gxw Created 
container fix-audit-permissions openshift-oauth-apiserver 55m Normal AddedInterface pod/apiserver-9b9694fdc-g7gxw Add eth0 [10.129.0.32/23] from ovn-kubernetes openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-client-ca-5 -n openshift-etcd because it was missing openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get 
\"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" default 55m Normal RegisteredNode node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal event: Registered Node ip-10-0-160-152.ec2.internal in Controller openshift-etcd-operator 55m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-endpoints-5 -n openshift-etcd 
because it was missing openshift-kube-apiserver-operator 55m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-audit-policies-5 -n openshift-kube-apiserver because it was missing default 55m Warning Status degraded clusteroperator/machine-api minimum worker replica count (2) not yet met: current running replicas 1, waiting for [qeaisrhods-c13-28wr5-worker-us-east-1a-cp7f7] openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-9b9694fdc-g7gxw pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-9b9694fdc-g7gxw pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" openshift-etcd-operator 55m Normal RevisionCreate deployment/etcd-operator Revision 4 created because configmap/etcd-pod has changed openshift-etcd-operator 55m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-all-certs-5 -n openshift-etcd because it was missing openshift-kube-scheduler-operator 55m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-apiserver 55m Normal Pulled pod/apiserver-6977bc9f6b-6c47k Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-apiserver 55m Normal AddedInterface pod/apiserver-6977bc9f6b-6c47k Add eth0 [10.130.0.53/23] from ovn-kubernetes openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/security.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-apiserver 55m Normal Started pod/apiserver-6977bc9f6b-6c47k Started container fix-audit-permissions openshift-apiserver 55m Normal Created pod/apiserver-6977bc9f6b-6c47k Created container fix-audit-permissions openshift-kube-apiserver-operator 55m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/etcd-client-5 -n openshift-kube-apiserver because it was missing openshift-dns 55m Normal SuccessfulCreate daemonset/node-resolver Created pod: node-resolver-vfr6q default 55m Normal NodeHasNoDiskPressure node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal status is now: NodeHasNoDiskPressure openshift-cluster-node-tuning-operator 55m Normal SuccessfulCreate daemonset/tuned Created pod: tuned-5mn5s openshift-machine-config-operator 55m Normal SuccessfulCreate daemonset/machine-config-daemon Created pod: machine-config-daemon-drlvb openshift-cluster-csi-drivers 55m Normal SuccessfulCreate daemonset/aws-ebs-csi-driver-node Created pod: aws-ebs-csi-driver-node-8w5jv openshift-apiserver 55m Normal Created pod/apiserver-6977bc9f6b-6c47k Created container openshift-apiserver openshift-ovn-kubernetes 55m Normal SuccessfulCreate daemonset/ovnkube-node Created pod: ovnkube-node-x4z8l openshift-apiserver 55m Normal Pulled pod/apiserver-6977bc9f6b-6c47k Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-apiserver 55m Normal 
Started pod/apiserver-6977bc9f6b-6c47k Started container openshift-apiserver openshift-apiserver 55m Normal Pulled pod/apiserver-6977bc9f6b-6c47k Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-apiserver 55m Normal Started pod/apiserver-6977bc9f6b-6c47k Started container openshift-apiserver-check-endpoints openshift-apiserver 55m Normal Created pod/apiserver-6977bc9f6b-6c47k Created container openshift-apiserver-check-endpoints default 55m Normal NodeHasSufficientPID node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal status is now: NodeHasSufficientPID openshift-multus 55m Normal SuccessfulCreate daemonset/network-metrics-daemon Created pod: network-metrics-daemon-f6tv8 openshift-etcd 55m Normal Killing pod/installer-4-ip-10-0-140-6.ec2.internal Stopping container installer openshift-kube-scheduler 55m Normal StaticPodInstallerCompleted pod/installer-6-ip-10-0-140-6.ec2.internal Successfully installed revision 6 default 55m Normal Starting node/ip-10-0-232-8.ec2.internal Starting kubelet. default 55m Normal NodeHasSufficientMemory node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal status is now: NodeHasSufficientMemory openshift-multus 55m Normal SuccessfulCreate daemonset/multus Created pod: multus-ztsxl openshift-network-diagnostics 55m Normal SuccessfulCreate daemonset/network-check-target Created pod: network-check-target-2799t default 55m Normal NodeAllocatableEnforced node/ip-10-0-232-8.ec2.internal Updated Node Allocatable limit across pods default 55m Warning ErrorReconcilingNode node/ip-10-0-232-8.ec2.internal nodeAdd: error adding node "ip-10-0-232-8.ec2.internal": could not find "k8s.ovn.org/node-subnets" annotation openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 1 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 2 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-multus 55m Normal SuccessfulCreate daemonset/multus-additional-cni-plugins Created pod: multus-additional-cni-plugins-l7zm7 openshift-etcd-operator 55m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 4\nEtcdMembersAvailable: 2 members are available" to "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 5\nEtcdMembersAvailable: 2 members are available" openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.27:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.27:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 2 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.35:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 2 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-kube-scheduler 55m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container kube-scheduler openshift-kube-scheduler 55m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container wait-for-host-port openshift-kube-controller-manager 55m Normal Pulling pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" openshift-kube-controller-manager 55m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" in 6.926420799s (6.926431055s including waiting) openshift-kube-scheduler 55m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container wait-for-host-port openshift-kube-scheduler 55m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.35:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not 
found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 2 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.35:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/user.openshift.io/v1\": context deadline exceeded\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 2 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." openshift-apiserver 55m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from 
https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-apiserver 55m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-multus 55m Warning ErrorUpdatingResource pod/network-metrics-daemon-f6tv8 addLogicalPort failed for openshift-multus/network-metrics-daemon-f6tv8: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-232-8.ec2.internal" openshift-kube-scheduler 55m Normal Pulling pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" openshift-kube-scheduler 55m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container kube-scheduler openshift-kube-scheduler 55m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 55m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" in 249.095848ms (249.108834ms including waiting) openshift-network-diagnostics 55m Warning ErrorUpdatingResource pod/network-check-target-2799t addLogicalPort failed for openshift-network-diagnostics/network-check-target-2799t: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-232-8.ec2.internal" openshift-kube-scheduler 55m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container kube-scheduler-cert-syncer openshift-kube-scheduler 55m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container kube-scheduler-cert-syncer openshift-kube-scheduler 55m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine default 55m Normal RegisteredNode node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal event: Registered Node ip-10-0-232-8.ec2.internal in Controller openshift-kube-scheduler 55m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container kube-scheduler-recovery-controller openshift-kube-scheduler 55m Normal Started 
pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container kube-scheduler-recovery-controller openshift-multus 55m Normal Pulling pod/multus-ztsxl Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" openshift-multus 55m Normal Pulling pod/multus-additional-cni-plugins-l7zm7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" openshift-machine-config-operator 55m Normal Pulled pod/machine-config-daemon-drlvb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-ovn-kubernetes 55m Normal Pulling pod/ovnkube-node-x4z8l Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-9b9694fdc-g7gxw pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" openshift-cluster-node-tuning-operator 55m Normal Pulling pod/tuned-5mn5s Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" openshift-cluster-csi-drivers 55m Normal Pulling pod/aws-ebs-csi-driver-node-8w5jv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" openshift-dns 55m Normal Pulling pod/node-resolver-vfr6q Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" openshift-kube-apiserver-operator 55m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-serving-certkey-5 -n openshift-kube-apiserver because it was missing openshift-kube-controller-manager 55m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager-recovery-controller openshift-machine-config-operator 55m Normal Created pod/machine-config-daemon-drlvb Created container machine-config-daemon openshift-kube-controller-manager 55m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container cluster-policy-controller openshift-kube-apiserver-operator 55m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node 
ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: iserver-ip-10-0-197-197.ec2.internal.174e6e53617025ba\", GenerateName:\"\", Namespace:\"openshift-kube-apiserver\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-apiserver\", Name:\"kube-apiserver-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp [::1]:6443: connect: connection refused'(may retry after sleeping)\nStaticPodsDegraded: W0321 12:18:17.819603 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:18:17.819654 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: 
pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " openshift-machine-config-operator 55m Normal Started pod/machine-config-daemon-drlvb Started container machine-config-daemon default 55m Normal Status upgrade clusteroperator/machine-api Progressing towards operator: 4.13.0-rc.0 openshift-kube-controller-manager 55m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager-recovery-controller openshift-kube-controller-manager 55m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 55m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager-cert-syncer openshift-kube-controller-manager 55m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager-cert-syncer openshift-kube-controller-manager 55m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 55m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container cluster-policy-controller openshift-etcd-operator 55m Normal PodCreated deployment/etcd-operator Created Pod/installer-5-ip-10-0-140-6.ec2.internal -n openshift-etcd because it was missing openshift-kube-scheduler-operator 55m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ip-10-0-140-6.ec2.internal on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" openshift-machine-config-operator 55m Normal Pulling pod/machine-config-daemon-drlvb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" openshift-kube-controller-manager 55m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" in 1.805184307s (1.805195116s including waiting) openshift-kube-controller-manager 55m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-140-6.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-etcd 55m Normal Created 
pod/installer-5-ip-10-0-140-6.ec2.internal Created container installer openshift-etcd 55m Normal AddedInterface pod/installer-5-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.37/23] from ovn-kubernetes openshift-etcd 55m Normal Pulled pod/installer-5-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-kube-apiserver-operator 55m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-client-token-5 -n openshift-kube-apiserver because it was missing openshift-etcd 55m Normal Started pod/installer-5-ip-10-0-140-6.ec2.internal Started container installer openshift-apiserver 55m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-565b67b9f7 to 1 from 2 openshift-apiserver 55m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-6977bc9f6b to 2 from 1 openshift-apiserver 55m Normal SuccessfulDelete replicaset/apiserver-565b67b9f7 Deleted pod: apiserver-565b67b9f7-wvhp4 openshift-apiserver 55m Normal SuccessfulCreate replicaset/apiserver-6977bc9f6b Created pod: apiserver-6977bc9f6b-wgtnw openshift-apiserver 55m Normal Killing pod/apiserver-565b67b9f7-wvhp4 Stopping container openshift-apiserver-check-endpoints openshift-kube-apiserver-operator 55m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 5 triggered by "required configmap/config has changed" openshift-kube-apiserver-operator 55m Normal RevisionCreate deployment/kube-apiserver-operator Revision 4 created because required configmap/config has changed openshift-kube-apiserver-operator 55m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/webhook-authenticator-5 -n openshift-kube-apiserver because it was missing openshift-kube-controller-manager 55m Normal Created pod/kube-controller-manager-guard-ip-10-0-140-6.ec2.internal Created container guard openshift-kube-controller-manager 55m Normal Started pod/kube-controller-manager-guard-ip-10-0-140-6.ec2.internal Started container guard openshift-kube-controller-manager 55m Normal AddedInterface pod/kube-controller-manager-guard-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.38/23] from ovn-kubernetes openshift-kube-controller-manager 55m Normal Pulled pod/kube-controller-manager-guard-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/authorization.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-controller-manager-operator 55m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: [Missing operand on node ip-10-0-197-197.ec2.internal, Missing PodIP in operand kube-controller-manager-ip-10-0-140-6.ec2.internal on node ip-10-0-140-6.ec2.internal]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline 
exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/build.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get 
\"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-apiserver 55m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container setup openshift-kube-apiserver 55m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container setup openshift-kube-scheduler 55m Normal AddedInterface pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.39/23] from ovn-kubernetes openshift-kube-apiserver-operator 55m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: iserver-ip-10-0-197-197.ec2.internal.174e6e53617025ba\", GenerateName:\"\", Namespace:\"openshift-kube-apiserver\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-apiserver\", Name:\"kube-apiserver-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp [::1]:6443: connect: connection refused'(may retry after sleeping)\nStaticPodsDegraded: W0321 12:18:17.819603 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:18:17.819654 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing PodIP in operand kube-apiserver-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: iserver-ip-10-0-197-197.ec2.internal.174e6e53617025ba\", GenerateName:\"\", Namespace:\"openshift-kube-apiserver\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-apiserver\", Name:\"kube-apiserver-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp [::1]:6443: connect: connection refused'(may retry after sleeping)\nStaticPodsDegraded: W0321 12:18:17.819603 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:18:17.819654 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " openshift-kube-scheduler 55m Normal Created pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal Created container guard openshift-kube-apiserver-operator 55m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: iserver-ip-10-0-197-197.ec2.internal.174e6e53617025ba\", GenerateName:\"\", Namespace:\"openshift-kube-apiserver\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-apiserver\", Name:\"kube-apiserver-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp [::1]:6443: connect: connection refused'(may retry after sleeping)\nStaticPodsDegraded: W0321 12:18:17.819603 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:18:17.819654 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: conflicting latestAvailableRevision 5\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: iserver-ip-10-0-197-197.ec2.internal.174e6e53617025ba\", GenerateName:\"\", Namespace:\"openshift-kube-apiserver\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", 
Namespace:\"openshift-kube-apiserver\", Name:\"kube-apiserver-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp [::1]:6443: connect: connection refused'(may retry after sleeping)\nStaticPodsDegraded: W0321 12:18:17.819603 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:18:17.819654 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " openshift-kube-apiserver-operator 55m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: conflicting latestAvailableRevision 5\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: iserver-ip-10-0-197-197.ec2.internal.174e6e53617025ba\", GenerateName:\"\", Namespace:\"openshift-kube-apiserver\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-apiserver\", Name:\"kube-apiserver-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp [::1]:6443: connect: connection refused'(may retry after sleeping)\nStaticPodsDegraded: W0321 12:18:17.819603 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:18:17.819654 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: iserver-ip-10-0-197-197.ec2.internal.174e6e53617025ba\", GenerateName:\"\", Namespace:\"openshift-kube-apiserver\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-apiserver\", Name:\"kube-apiserver-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, 
Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp [::1]:6443: connect: connection refused'(may retry after sleeping)\nStaticPodsDegraded: W0321 12:18:17.819603 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:18:17.819654 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " openshift-kube-scheduler 55m Normal Started pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal Started container guard openshift-kube-scheduler 55m Normal Pulled pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-apiserver 55m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 55m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver openshift-kube-apiserver 55m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver openshift-kube-apiserver 55m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 55m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-syncer openshift-kube-apiserver 55m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-syncer openshift-kube-storage-version-migrator-operator 55m Normal LeaderElection configmap/openshift-kube-storage-version-migrator-operator-lock kube-storage-version-migrator-operator-7f8b95cf5f-x5hzl_9c0c6b25-f78a-4fbb-8953-1c18d2c32278 became leader openshift-kube-storage-version-migrator-operator 55m Warning 
FastControllerResync deployment/kube-storage-version-migrator-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver 55m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 55m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-regeneration-controller openshift-kube-storage-version-migrator-operator 55m Warning FastControllerResync deployment/kube-storage-version-migrator-operator Controller "StaticConditionsController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver 55m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 55m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 55m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 55m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-check-endpoints openshift-kube-apiserver 55m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-insecure-readyz openshift-kube-scheduler-operator 55m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand openshift-kube-scheduler-ip-10-0-140-6.ec2.internal on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-197-197.ec2.internal]" to "GuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" openshift-kube-apiserver-operator 55m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 5" openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" openshift-kube-apiserver 55m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-check-endpoints openshift-kube-apiserver 55m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-insecure-readyz openshift-kube-apiserver 55m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-network-diagnostics 55m Warning FailedMount pod/network-check-target-w7m4g MountVolume.SetUp failed for volume "kube-api-access-r8rz8" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] openshift-multus 55m Warning FailedMount pod/network-metrics-daemon-74bvc MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from 
https://10.128.0.35:8443/apis/oauth.openshift.io/v1: Get \"https://10.128.0.35:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/user.openshift.io/v1\": context deadline exceeded\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 2 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/user.openshift.io/v1\": context deadline exceeded\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 2 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." 
openshift-kube-apiserver 55m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver 55m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/security.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/security.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/template.openshift.io/v1: Get 
\"https://10.129.0.29:8443/apis/template.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" openshift-machine-config-operator 55m Normal Pulled pod/machine-config-daemon-w98lz Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" in 16.904726457s (16.904742533s including waiting) default 55m Warning ErrorReconcilingNode node/ip-10-0-232-8.ec2.internal [k8s.ovn.org/node-chassis-id annotation not found for node ip-10-0-232-8.ec2.internal, macAddress annotation not found for node "ip-10-0-232-8.ec2.internal" , k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-232-8.ec2.internal"] default 55m Warning ErrorReconcilingNode node/ip-10-0-160-152.ec2.internal [k8s.ovn.org/node-chassis-id annotation not found for node ip-10-0-160-152.ec2.internal, macAddress annotation not found for node "ip-10-0-160-152.ec2.internal" , k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-160-152.ec2.internal"] openshift-kube-apiserver-operator 55m Normal OperatorStatusChanged 
deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal, Missing PodIP in operand kube-apiserver-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: iserver-ip-10-0-197-197.ec2.internal.174e6e53617025ba\", GenerateName:\"\", Namespace:\"openshift-kube-apiserver\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-apiserver\", Name:\"kube-apiserver-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp [::1]:6443: connect: connection refused'(may retry after sleeping)\nStaticPodsDegraded: W0321 12:18:17.819603 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:18:17.819654 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " to 
"GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: iserver-ip-10-0-197-197.ec2.internal.174e6e53617025ba\", GenerateName:\"\", Namespace:\"openshift-kube-apiserver\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-apiserver\", Name:\"kube-apiserver-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp [::1]:6443: connect: connection refused'(may retry after sleeping)\nStaticPodsDegraded: W0321 12:18:17.819603 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:18:17.819654 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " openshift-kube-apiserver-operator 55m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node 
ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: iserver-ip-10-0-197-197.ec2.internal.174e6e53617025ba\", GenerateName:\"\", Namespace:\"openshift-kube-apiserver\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-apiserver\", Name:\"kube-apiserver-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 16, 55, 451854266, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp [::1]:6443: connect: connection refused'(may retry after sleeping)\nStaticPodsDegraded: W0321 12:18:17.819603 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:18:17.819654 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is waiting: CrashLoopBackOff: back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver(dbd2d936179f69319e9e4db0fbbce1d4)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-apiserver 55m Warning Unhealthy pod/apiserver-565b67b9f7-wvhp4 Readiness probe failed: HTTP probe failed with statuscode: 500 
openshift-controller-manager-operator 55m Normal LeaderElection configmap/openshift-controller-manager-operator-lock openshift-controller-manager-operator-6548869cc5-9kqx5_15199a37-c941-4cdd-b302-5b13e2f0f27f became leader
openshift-multus 55m Warning FailedMount pod/network-metrics-daemon-f6tv8 MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered
openshift-controller-manager-operator 55m Normal LeaderElection lease/openshift-controller-manager-operator-lock openshift-controller-manager-operator-6548869cc5-9kqx5_15199a37-c941-4cdd-b302-5b13e2f0f27f became leader
openshift-apiserver 55m Warning ProbeError pod/apiserver-565b67b9f7-wvhp4 Readiness probe error: HTTP probe failed with statuscode: 500...
openshift-machine-config-operator 55m Normal Started pod/machine-config-daemon-w98lz Started container oauth-proxy
openshift-machine-config-operator 55m Normal Created pod/machine-config-daemon-w98lz Created container oauth-proxy
openshift-dns 55m Normal Pulled pod/node-resolver-f7qjl Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" in 21.409152561s (21.409175111s including waiting)
openshift-multus 55m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" in 21.412883812s (21.412891846s including waiting)
openshift-kube-controller-manager-operator 55m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-140-6.ec2.internal" from revision 0 to 4 because static pod is ready
openshift-ovn-kubernetes 55m Normal Pulled pod/ovnkube-node-8sb9g Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 21.571917934s (21.571924571s including waiting)
openshift-ovn-kubernetes 55m Normal Pulled pod/ovnkube-node-8sb9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine
openshift-network-diagnostics 55m Warning FailedMount pod/network-check-target-2799t MountVolume.SetUp failed for volume "kube-api-access-xt96j" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
openshift-cluster-node-tuning-operator 55m Normal Pulled pod/tuned-t8kzn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" in 21.416147971s (21.416153715s including waiting)
openshift-kube-controller-manager-operator 55m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 4" to "NodeInstallerProgressing: 1 nodes are at revision 0; 2 nodes are at revision 4",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 4" to "StaticPodsAvailable: 2 nodes are active; 1 nodes are at revision 0; 2 nodes are at revision 4"
openshift-cluster-node-tuning-operator 55m Normal Created pod/tuned-t8kzn Created container tuned
openshift-cluster-node-tuning-operator 55m Normal Started pod/tuned-t8kzn Started container tuned
openshift-cluster-csi-drivers 55m Normal Pulled pod/aws-ebs-csi-driver-node-2p86w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" in 21.404226692s (21.404232564s including waiting)
openshift-multus 55m Normal Pulled pod/multus-d7w6w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" in 21.391269648s (21.391275006s including waiting)
openshift-cluster-csi-drivers 55m Normal Pulled pod/aws-ebs-csi-driver-node-8w5jv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" in 15.65742844s (15.65743825s including waiting)
openshift-ovn-kubernetes 55m Normal Created pod/ovnkube-node-8sb9g Created container ovn-acl-logging
openshift-ovn-kubernetes 55m Normal Started pod/ovnkube-node-8sb9g Started container ovn-acl-logging
kube-system 55m Required control plane pods have been created
openshift-ovn-kubernetes 55m Normal Pulling pod/ovnkube-node-8sb9g Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d"
openshift-ovn-kubernetes 55m Normal Pulled pod/ovnkube-node-8sb9g Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 1.556848934s (1.556861324s including waiting)
openshift-ovn-kubernetes 55m Normal Pulled pod/ovnkube-node-8sb9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine
openshift-ovn-kubernetes 55m Normal Started pod/ovnkube-node-8sb9g Started container kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes 55m Normal Created pod/ovnkube-node-8sb9g Created container kube-rbac-proxy-ovn-metrics
openshift-multus 55m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" in 17.571491476s (17.571507712s including waiting)
openshift-ovn-kubernetes 55m Normal Pulled pod/ovnkube-node-8sb9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-ovn-kubernetes 55m Normal Started pod/ovnkube-node-8sb9g Started container kube-rbac-proxy
openshift-ovn-kubernetes 55m Normal Created pod/ovnkube-node-8sb9g Created container kube-rbac-proxy
default 55m Warning ErrorReconcilingNode node/ip-10-0-160-152.ec2.internal error creating gateway for node ip-10-0-160-152.ec2.internal: failed to init shared interface gateway: failed to create MAC Binding for dummy nexthop ip-10-0-160-152.ec2.internal: error getting datapath GR_ip-10-0-160-152.ec2.internal: object not found
openshift-multus 55m Normal Started pod/multus-additional-cni-plugins-j5mgq Started container egress-router-binary-copy
openshift-multus 55m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container egress-router-binary-copy
openshift-multus 55m Normal Created pod/multus-d7w6w Created container kube-multus
openshift-multus 55m Normal Started pod/multus-d7w6w Started container kube-multus
openshift-cluster-csi-drivers 55m Normal Created pod/aws-ebs-csi-driver-node-2p86w Created container csi-driver
openshift-dns 55m Normal Started pod/node-resolver-f7qjl Started container dns-node-resolver
openshift-dns 55m Normal Created pod/node-resolver-f7qjl Created container dns-node-resolver
openshift-dns 55m Normal Pulled pod/node-resolver-vfr6q Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" in 18.592755779s (18.592761159s including waiting)
openshift-cluster-csi-drivers 55m Normal Pulling pod/aws-ebs-csi-driver-node-2p86w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821"
openshift-machine-config-operator 55m Normal Pulled pod/machine-config-daemon-drlvb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" in 17.483519118s (17.483531706s including waiting)
openshift-multus 55m Normal Pulling pod/multus-additional-cni-plugins-j5mgq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df"
openshift-cluster-csi-drivers 55m Normal Started pod/aws-ebs-csi-driver-node-2p86w Started container csi-driver
openshift-ovn-kubernetes 55m Normal Created pod/ovnkube-node-8sb9g Created container ovnkube-node
openshift-ovn-kubernetes 55m Normal Started pod/ovnkube-node-8sb9g Started container ovnkube-node
openshift-kube-controller-manager-operator 55m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-197-197.ec2.internal" from revision 0 to 4 because node ip-10-0-197-197.ec2.internal static pod not found
openshift-apiserver 55m Warning Unhealthy pod/apiserver-565b67b9f7-wvhp4 Readiness probe failed: Get "https://10.128.0.29:8443/readyz": dial tcp 10.128.0.29:8443: connect: connection refused
openshift-apiserver 55m Warning ProbeError pod/apiserver-565b67b9f7-wvhp4 Readiness probe error: Get "https://10.128.0.29:8443/readyz": dial tcp 10.128.0.29:8443: connect: connection refused...
openshift-ovn-kubernetes 55m Normal Started pod/ovnkube-node-8sb9g Started container ovn-controller
openshift-cluster-csi-drivers 55m Normal Pulling pod/aws-ebs-csi-driver-node-2p86w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487"
openshift-cluster-csi-drivers 55m Normal Started pod/aws-ebs-csi-driver-node-2p86w Started container csi-node-driver-registrar
openshift-cluster-csi-drivers 55m Normal Created pod/aws-ebs-csi-driver-node-2p86w Created container csi-node-driver-registrar
openshift-ovn-kubernetes 55m Normal Pulled pod/ovnkube-node-8sb9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine
openshift-cluster-csi-drivers 55m Normal Pulled pod/aws-ebs-csi-driver-node-2p86w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" in 1.716802472s (1.716815238s including waiting)
openshift-ovn-kubernetes 55m Normal Created pod/ovnkube-node-8sb9g Created container ovn-controller
openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/oauth.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/user.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/user.openshift.io/v1\": context deadline exceeded\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 2 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 2 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)."
openshift-apiserver 55m Normal Started pod/apiserver-6977bc9f6b-wgtnw Started container fix-audit-permissions
openshift-multus 55m Warning NetworkNotReady pod/network-metrics-daemon-74bvc network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
default 55m Normal NodeReady node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal status is now: NodeReady
openshift-network-diagnostics 55m Warning NetworkNotReady pod/network-check-target-w7m4g network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
openshift-apiserver 55m Normal AddedInterface pod/apiserver-6977bc9f6b-wgtnw Add eth0 [10.128.0.40/23] from ovn-kubernetes
openshift-authentication-operator 55m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 2 worker nodes, 3 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found"
openshift-apiserver 55m Normal Pulled pod/apiserver-6977bc9f6b-wgtnw Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine
openshift-ingress-canary 55m Normal SuccessfulCreate daemonset/ingress-canary Created pod: ingress-canary-bn5dn
openshift-apiserver 55m Normal Created pod/apiserver-6977bc9f6b-wgtnw Created container fix-audit-permissions
openshift-dns 55m Normal SuccessfulCreate daemonset/dns-default Created pod: dns-default-jf2vx
openshift-apiserver 55m Normal Pulled pod/apiserver-6977bc9f6b-wgtnw Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine
openshift-apiserver 55m Normal Created pod/apiserver-6977bc9f6b-wgtnw Created container openshift-apiserver
openshift-apiserver 55m Normal Killing pod/apiserver-565b67b9f7-wvhp4 Stopping container openshift-apiserver
openshift-kube-controller-manager-operator 55m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-4-ip-10-0-197-197.ec2.internal -n openshift-kube-controller-manager because it was missing
openshift-cluster-csi-drivers 55m Normal Pulled pod/aws-ebs-csi-driver-node-2p86w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" in 923.207026ms (923.215744ms including waiting)
openshift-cluster-csi-drivers 55m Normal Created pod/aws-ebs-csi-driver-node-2p86w Created container csi-liveness-probe
openshift-apiserver 55m Normal Pulled pod/apiserver-6977bc9f6b-wgtnw Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
openshift-apiserver 55m Normal Started pod/apiserver-6977bc9f6b-wgtnw Started container openshift-apiserver
openshift-cluster-csi-drivers 55m Normal Started pod/aws-ebs-csi-driver-node-2p86w Started container csi-liveness-probe
openshift-ingress 55m Normal AddedInterface pod/router-default-699d8c97f-9xbcx Add eth0 [10.131.0.8/23] from ovn-kubernetes
openshift-kube-controller-manager 55m Normal AddedInterface pod/installer-4-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.54/23] from ovn-kubernetes
openshift-ovn-kubernetes 55m Normal Started pod/ovnkube-node-x4z8l Started container ovn-acl-logging
openshift-ovn-kubernetes 55m Normal Created pod/ovnkube-node-x4z8l Created container ovn-acl-logging
openshift-ovn-kubernetes 55m Normal Pulled pod/ovnkube-node-x4z8l Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine
openshift-dns 55m Normal AddedInterface pod/dns-default-jf2vx Add eth0 [10.131.0.9/23] from ovn-kubernetes
openshift-apiserver 55m Normal Created pod/apiserver-6977bc9f6b-wgtnw Created container openshift-apiserver-check-endpoints
openshift-ovn-kubernetes 55m Normal Pulled pod/ovnkube-node-x4z8l Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 22.230881555s (22.230893157s including waiting)
openshift-kube-controller-manager 55m Normal Pulled pod/installer-4-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine
openshift-monitoring 55m Normal AddedInterface pod/prometheus-operator-admission-webhook-5c549f4449-v9x8h Add eth0 [10.131.0.6/23] from ovn-kubernetes
openshift-apiserver 55m Normal Started pod/apiserver-6977bc9f6b-wgtnw Started container openshift-apiserver-check-endpoints
openshift-machine-config-operator 55m Normal Created pod/machine-config-daemon-drlvb Created container oauth-proxy
openshift-kube-controller-manager 55m Normal Started pod/installer-4-ip-10-0-197-197.ec2.internal Started container installer
openshift-ingress 55m Normal Pulling pod/router-default-699d8c97f-6nwwk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd"
openshift-ovn-kubernetes 55m Normal Pulling pod/ovnkube-node-x4z8l Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d"
openshift-cluster-node-tuning-operator 55m Normal Created pod/tuned-5mn5s Created container tuned
openshift-network-diagnostics 55m Normal AddedInterface pod/network-check-source-677bdb7d9-4sw4t Add eth0 [10.131.0.12/23] from ovn-kubernetes
openshift-ingress 55m Normal Pulling pod/router-default-699d8c97f-9xbcx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd"
openshift-monitoring 55m Normal Pulling pod/prometheus-operator-admission-webhook-5c549f4449-v9x8h Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e2218fd1d860bdb72a28d8fc34e1d5e7c3674bf1d0005583d70800dcd79d2"
openshift-operator-lifecycle-manager 55m Normal Pulling pod/collect-profiles-27990015-4vlzz Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7"
openshift-ingress 55m Normal AddedInterface pod/router-default-699d8c97f-6nwwk Add eth0 [10.131.0.7/23] from ovn-kubernetes
openshift-multus 55m Normal Pulled pod/multus-ztsxl Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" in 21.8063081s (21.806315107s including waiting)
openshift-network-diagnostics 55m Normal Pulling pod/network-check-source-677bdb7d9-4sw4t Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892"
openshift-cluster-node-tuning-operator 55m Normal Started pod/tuned-5mn5s Started container tuned
openshift-operator-lifecycle-manager 55m Normal AddedInterface pod/collect-profiles-27990015-4vlzz Add eth0 [10.131.0.11/23] from ovn-kubernetes
openshift-ingress-canary 55m Normal AddedInterface pod/ingress-canary-bn5dn Add eth0 [10.131.0.10/23] from ovn-kubernetes
openshift-machine-config-operator 55m Normal Started pod/machine-config-daemon-drlvb Started container oauth-proxy
openshift-dns 55m Normal Pulling pod/dns-default-jf2vx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299"
openshift-kube-controller-manager 55m Normal Created pod/installer-4-ip-10-0-197-197.ec2.internal Created container installer
openshift-cluster-node-tuning-operator 55m Normal Pulled pod/tuned-5mn5s Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" in 21.793279793s (21.793285765s including waiting)
openshift-ingress-canary 55m Normal Pulling pod/ingress-canary-bn5dn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056"
openshift-apiserver 55m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling
openshift-apiserver 55m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling
openshift-ovn-kubernetes 55m Normal Pulled pod/ovnkube-node-x4z8l Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 1.938706417s (1.938720595s including waiting)
openshift-ovn-kubernetes 55m Normal Pulled pod/ovnkube-node-x4z8l Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-ovn-kubernetes 55m Normal Started pod/ovnkube-node-x4z8l Started container kube-rbac-proxy
openshift-ovn-kubernetes 55m Normal Created pod/ovnkube-node-x4z8l Created container kube-rbac-proxy
openshift-kube-apiserver-operator 55m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-5-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing
default 55m Warning ErrorReconcilingNode node/ip-10-0-232-8.ec2.internal error creating gateway for node ip-10-0-232-8.ec2.internal: failed to init shared interface gateway: failed to create MAC Binding for dummy nexthop ip-10-0-232-8.ec2.internal: error getting datapath GR_ip-10-0-232-8.ec2.internal: object not found
openshift-multus 55m Normal Created pod/multus-ztsxl Created container kube-multus
openshift-dns 55m Normal Created pod/node-resolver-vfr6q Created container dns-node-resolver
openshift-multus 55m Normal Started pod/multus-ztsxl Started container kube-multus
openshift-network-diagnostics 55m Normal TaintManagerEviction pod/network-check-source-677bdb7d9-4sw4t Cancelling deletion of Pod openshift-network-diagnostics/network-check-source-677bdb7d9-4sw4t
openshift-monitoring 55m Normal TaintManagerEviction pod/prometheus-operator-admission-webhook-5c549f4449-v9x8h Cancelling deletion of Pod openshift-monitoring/prometheus-operator-admission-webhook-5c549f4449-v9x8h
openshift-dns 55m Normal Started pod/node-resolver-vfr6q Started container dns-node-resolver
openshift-kube-apiserver 55m Normal Pulled pod/installer-5-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
openshift-ovn-kubernetes 55m Normal Created pod/ovnkube-node-x4z8l Created container ovnkube-node
openshift-ovn-kubernetes 55m Normal Pulled pod/ovnkube-node-x4z8l Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine
openshift-ovn-kubernetes 55m Normal Started pod/ovnkube-node-x4z8l Started container kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes 55m Normal Created pod/ovnkube-node-x4z8l Created container kube-rbac-proxy-ovn-metrics
openshift-ovn-kubernetes 55m Normal Started pod/ovnkube-node-x4z8l Started container ovnkube-node
openshift-cluster-csi-drivers 55m Normal Created pod/aws-ebs-csi-driver-node-8w5jv Created container csi-driver
openshift-kube-apiserver 55m Normal Started pod/installer-5-ip-10-0-197-197.ec2.internal Started container installer
openshift-multus 55m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container egress-router-binary-copy
openshift-multus 55m Normal Started pod/multus-additional-cni-plugins-l7zm7 Started container egress-router-binary-copy
openshift-multus 55m Normal Pulling pod/multus-additional-cni-plugins-l7zm7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df"
openshift-kube-apiserver 55m Normal Created pod/installer-5-ip-10-0-197-197.ec2.internal Created container installer
openshift-cluster-csi-drivers 55m Normal Started pod/aws-ebs-csi-driver-node-8w5jv Started container csi-driver
openshift-cluster-csi-drivers 55m Normal Pulling pod/aws-ebs-csi-driver-node-8w5jv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821"
openshift-ingress 55m Normal TaintManagerEviction pod/router-default-699d8c97f-9xbcx Cancelling deletion of Pod openshift-ingress/router-default-699d8c97f-9xbcx
openshift-operator-lifecycle-manager 55m Normal TaintManagerEviction pod/collect-profiles-27990015-4vlzz Cancelling deletion of Pod openshift-operator-lifecycle-manager/collect-profiles-27990015-4vlzz
openshift-ingress 55m Normal TaintManagerEviction pod/router-default-699d8c97f-6nwwk Cancelling deletion of Pod openshift-ingress/router-default-699d8c97f-6nwwk
openshift-kube-apiserver 55m Normal AddedInterface pod/installer-5-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.55/23] from ovn-kubernetes
openshift-apiserver 55m Normal SuccessfulDelete replicaset/apiserver-565b67b9f7 Deleted pod: apiserver-565b67b9f7-lvnhl
openshift-cluster-csi-drivers 55m Normal Pulling pod/aws-ebs-csi-driver-node-8w5jv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487"
openshift-cluster-csi-drivers 55m Normal Started pod/aws-ebs-csi-driver-node-8w5jv Started container csi-node-driver-registrar
openshift-cluster-csi-drivers 55m Normal Pulled pod/aws-ebs-csi-driver-node-8w5jv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" in 899.220758ms (899.248421ms including waiting)
openshift-apiserver 55m Normal Killing pod/apiserver-565b67b9f7-lvnhl Stopping container openshift-apiserver
openshift-ovn-kubernetes 55m Normal Pulled pod/ovnkube-node-x4z8l Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine
openshift-apiserver 55m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-565b67b9f7 to 0 from 1
openshift-apiserver 55m Normal Killing pod/apiserver-565b67b9f7-lvnhl Stopping container openshift-apiserver-check-endpoints
openshift-cluster-csi-drivers 55m Normal Created pod/aws-ebs-csi-driver-node-8w5jv Created container csi-node-driver-registrar
openshift-apiserver 55m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-6977bc9f6b to 3 from 2
openshift-apiserver 55m Normal SuccessfulCreate replicaset/apiserver-6977bc9f6b Created pod: apiserver-6977bc9f6b-b9qrr
openshift-etcd-operator 55m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 5\nEtcdMembersAvailable: 2 members are available" to "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 5\nEtcdMembersAvailable: 3 members are available"
openshift-multus 55m Warning NetworkNotReady pod/network-metrics-daemon-f6tv8 network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
openshift-network-diagnostics 55m Warning NetworkNotReady pod/network-check-target-2799t network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
openshift-cluster-csi-drivers 55m Normal Pulled pod/aws-ebs-csi-driver-node-8w5jv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" in 1.540151419s (1.540164784s including waiting)
openshift-ovn-kubernetes 55m Normal Started pod/ovnkube-node-x4z8l Started container ovn-controller
openshift-ovn-kubernetes 55m Normal Created pod/ovnkube-node-x4z8l Created container ovn-controller
openshift-ingress-canary 55m Normal SuccessfulCreate daemonset/ingress-canary Created pod: ingress-canary-2zk7z
openshift-multus 55m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" in 9.884224729s (9.884237155s including waiting)
openshift-cluster-csi-drivers 55m Normal Started pod/aws-ebs-csi-driver-node-8w5jv Started container csi-liveness-probe
openshift-cluster-csi-drivers 55m Normal Created pod/aws-ebs-csi-driver-node-8w5jv Created container csi-liveness-probe
openshift-dns 55m Normal SuccessfulCreate daemonset/dns-default Created pod: dns-default-f7bt7
default 55m Normal NodeReady node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal status is now: NodeReady
openshift-network-diagnostics 55m Normal Created pod/network-check-source-677bdb7d9-4sw4t Created container check-endpoints
openshift-monitoring 55m Normal Started pod/prometheus-operator-admission-webhook-5c549f4449-v9x8h Started container prometheus-operator-admission-webhook
openshift-monitoring 55m Normal Created pod/prometheus-operator-admission-webhook-5c549f4449-v9x8h Created container prometheus-operator-admission-webhook
openshift-monitoring 55m Normal Pulled pod/prometheus-operator-admission-webhook-5c549f4449-v9x8h Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e2218fd1d860bdb72a28d8fc34e1d5e7c3674bf1d0005583d70800dcd79d2" in 7.511409058s (7.511423185s including waiting)
openshift-network-diagnostics 55m Normal AddedInterface pod/network-check-target-w7m4g Add eth0 [10.131.0.5/23] from ovn-kubernetes
openshift-network-diagnostics 55m Normal Pulled pod/network-check-source-677bdb7d9-4sw4t Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" in 7.474679655s (7.474686335s including waiting)
openshift-multus 55m Normal AddedInterface pod/network-metrics-daemon-74bvc Add eth0 [10.131.0.4/23] from ovn-kubernetes
openshift-ingress-canary 55m Normal Pulled pod/ingress-canary-bn5dn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" in 7.49016926s (7.490177587s including waiting)
openshift-ingress-canary 55m Normal Created pod/ingress-canary-bn5dn Created container serve-healthcheck-canary
openshift-ingress 55m Normal Started pod/router-default-699d8c97f-9xbcx Started container router
openshift-ingress 55m Normal Created pod/router-default-699d8c97f-9xbcx Created container router
openshift-ingress-canary 55m Normal Started pod/ingress-canary-bn5dn Started container serve-healthcheck-canary
openshift-multus 55m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container cni-plugins
openshift-operator-lifecycle-manager 55m Normal Started pod/collect-profiles-27990015-4vlzz Started container collect-profiles
openshift-operator-lifecycle-manager 55m Normal Created pod/collect-profiles-27990015-4vlzz Created container collect-profiles
openshift-multus 55m Normal Started pod/multus-additional-cni-plugins-j5mgq Started container cni-plugins
openshift-ingress 55m Normal Pulled pod/router-default-699d8c97f-9xbcx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd" in 7.534983798s (7.534992636s including waiting)
openshift-ingress 55m Normal Created pod/router-default-699d8c97f-6nwwk Created container router
openshift-dns 55m Normal Created pod/dns-default-jf2vx Created container dns
openshift-dns 55m Normal Pulled pod/dns-default-jf2vx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" in 7.49339293s (7.4934026s including waiting)
openshift-operator-lifecycle-manager 55m Normal Pulled pod/collect-profiles-27990015-4vlzz Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" in 7.474969903s (7.474976709s including waiting)
openshift-ingress 55m Normal Pulled pod/router-default-699d8c97f-6nwwk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd" in 7.533327511s (7.533336868s including waiting)
openshift-ingress-canary 55m Normal AddedInterface pod/ingress-canary-2zk7z Add eth0 [10.128.2.6/23] from ovn-kubernetes
openshift-multus 55m Normal Pulling pod/multus-additional-cni-plugins-l7zm7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9"
openshift-multus 55m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" in 4.657782473s (4.65783098s including waiting)
openshift-network-diagnostics 55m Warning FastControllerResync node/ip-10-0-160-152.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling
openshift-dns 55m Normal Pulled pod/dns-default-jf2vx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-network-diagnostics 55m Normal Started pod/network-check-target-w7m4g Started container network-check-target-container
openshift-dns 55m Normal Started pod/dns-default-jf2vx Started container kube-rbac-proxy
openshift-dns 55m Normal Created pod/dns-default-jf2vx Created container kube-rbac-proxy
openshift-dns 55m Normal Started pod/dns-default-jf2vx Started container dns
openshift-network-diagnostics 55m Normal Pulled pod/network-check-target-w7m4g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" already present on machine
openshift-ingress 55m Normal Started pod/router-default-699d8c97f-6nwwk Started container router
openshift-ingress-canary 55m Normal Pulling pod/ingress-canary-2zk7z Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056"
openshift-multus 55m Normal Pulling pod/multus-additional-cni-plugins-j5mgq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9"
openshift-multus 55m Normal Pulling pod/network-metrics-daemon-74bvc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5"
openshift-network-diagnostics 55m Normal Started pod/network-check-source-677bdb7d9-4sw4t Started container check-endpoints
openshift-network-diagnostics 55m Warning FastControllerResync node/ip-10-0-160-152.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling
openshift-network-diagnostics 55m Normal Created pod/network-check-target-w7m4g Created container network-check-target-container
openshift-dns 55m Normal AddedInterface pod/dns-default-f7bt7 Add eth0 [10.128.2.7/23] from ovn-kubernetes
openshift-dns 55m Normal Pulling pod/dns-default-f7bt7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299"
openshift-multus 55m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container cni-plugins
openshift-multus 55m Normal Started pod/multus-additional-cni-plugins-l7zm7 Started container cni-plugins
openshift-multus 55m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container bond-cni-plugin
openshift-multus 55m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container bond-cni-plugin
openshift-network-diagnostics 55m Normal AddedInterface pod/network-check-target-2799t Add eth0 [10.128.2.5/23] from ovn-kubernetes
openshift-network-diagnostics 55m Normal Pulling pod/network-check-target-2799t Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892"
openshift-multus 55m Normal Pulled pod/network-metrics-daemon-74bvc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" in 1.35860696s (1.358624196s including waiting)
openshift-multus 55m Normal Created pod/network-metrics-daemon-74bvc Created container network-metrics-daemon
openshift-multus 55m Normal Started pod/network-metrics-daemon-74bvc Started container network-metrics-daemon
openshift-multus 55m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" in 727.640985ms (727.653748ms including waiting)
openshift-multus 55m Normal AddedInterface pod/network-metrics-daemon-f6tv8 Add eth0 [10.128.2.4/23] from ovn-kubernetes
openshift-multus 55m Normal Pulling pod/network-metrics-daemon-f6tv8 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5"
openshift-multus 55m Normal Started pod/multus-additional-cni-plugins-j5mgq Started container bond-cni-plugin
openshift-multus 55m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" in 720.065091ms (720.072652ms including waiting)
openshift-multus 55m Normal Pulling pod/multus-additional-cni-plugins-j5mgq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d"
openshift-multus 55m Normal Started pod/multus-additional-cni-plugins-j5mgq Started container routeoverride-cni
openshift-apiserver-operator 55m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-565b67b9f7-lvnhl pod)"
openshift-multus 55m Normal Started pod/multus-additional-cni-plugins-l7zm7 Started container bond-cni-plugin
openshift-multus 55m Normal Pulling pod/multus-additional-cni-plugins-l7zm7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d"
openshift-multus 55m Normal Pulling pod/multus-additional-cni-plugins-j5mgq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b"
openshift-multus 55m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" in 768.383045ms (768.394505ms including waiting)
openshift-multus 55m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container routeoverride-cni
openshift-dns 55m Normal Pulled pod/dns-default-f7bt7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" in 2.758015504s (2.758029829s including waiting)
openshift-dns 55m Normal Pulled pod/dns-default-f7bt7 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-multus 55m Normal Created pod/network-metrics-daemon-f6tv8 Created container network-metrics-daemon
openshift-ingress-canary 55m Normal Created pod/ingress-canary-2zk7z Created container serve-healthcheck-canary
openshift-ingress-canary 55m Normal Pulled pod/ingress-canary-2zk7z Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" in 2.756793011s (2.756802948s including waiting)
openshift-multus 55m Normal Pulled pod/network-metrics-daemon-f6tv8 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" in 2.034418615s (2.034430783s including waiting)
openshift-dns 55m Normal Started pod/dns-default-f7bt7 Started container dns
openshift-multus 55m Normal Started pod/network-metrics-daemon-f6tv8 Started container network-metrics-daemon
openshift-dns 55m Normal Created pod/dns-default-f7bt7 Created container dns
openshift-ingress-canary 55m Normal Started pod/ingress-canary-2zk7z Started container serve-healthcheck-canary
openshift-dns 54m Normal Started pod/dns-default-f7bt7 Started container kube-rbac-proxy
openshift-multus 54m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" in 1.649806217s (1.649820016s including waiting)
openshift-network-diagnostics 54m Normal Started pod/network-check-target-2799t Started container network-check-target-container
openshift-network-diagnostics 54m Normal Created pod/network-check-target-2799t Created container network-check-target-container
openshift-dns 54m Normal Created pod/dns-default-f7bt7 Created container kube-rbac-proxy
openshift-network-diagnostics 54m Normal Pulled pod/network-check-target-2799t Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" in 3.095393642s (3.09539905s including waiting)
default 54m Normal NodeDone node/ip-10-0-160-152.ec2.internal Setting node ip-10-0-160-152.ec2.internal, currentConfig rendered-worker-e5630006427036c937f2156f999e7beb to Done
default 54m Normal Uncordon node/ip-10-0-160-152.ec2.internal Update completed for config rendered-worker-e5630006427036c937f2156f999e7beb and node has been uncordoned
openshift-multus 54m Normal Started pod/multus-additional-cni-plugins-j5mgq Started container whereabouts-cni-bincopy
openshift-multus 54m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container whereabouts-cni-bincopy
openshift-multus 54m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" in 1.774904542s (1.774915439s including waiting)
openshift-multus 54m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container routeoverride-cni
openshift-multus 54m Normal Started pod/multus-additional-cni-plugins-l7zm7 Started container routeoverride-cni
openshift-multus 54m Normal Pulling pod/multus-additional-cni-plugins-l7zm7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b"
default 54m Normal ConfigDriftMonitorStarted node/ip-10-0-160-152.ec2.internal Config Drift Monitor started, watching against rendered-worker-e5630006427036c937f2156f999e7beb
openshift-operator-lifecycle-manager 54m Normal SawCompletedJob cronjob/collect-profiles Saw completed job: collect-profiles-27990015, status: Complete
openshift-operator-lifecycle-manager 54m Normal Completed job/collect-profiles-27990015 Job completed
openshift-multus 54m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine
openshift-multus 54m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container whereabouts-cni
openshift-multus 54m Normal Started pod/multus-additional-cni-plugins-j5mgq Started container whereabouts-cni
openshift-multus 54m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" in 1.653878777s (1.65389361s including waiting)
openshift-monitoring 54m Normal Pulling pod/prometheus-operator-admission-webhook-5c549f4449-d5c7w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e2218fd1d860bdb72a28d8fc34e1d5e7c3674bf1d0005583d70800dcd79d2"
openshift-multus 54m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container whereabouts-cni-bincopy
openshift-multus 54m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine
openshift-multus 54m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container kube-multus-additional-cni-plugins
openshift-monitoring 54m Normal AddedInterface pod/prometheus-operator-admission-webhook-5c549f4449-d5c7w Add eth0 [10.128.2.8/23] from ovn-kubernetes
openshift-multus 54m Normal Started pod/multus-additional-cni-plugins-l7zm7 Started container whereabouts-cni-bincopy
openshift-multus 54m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine
openshift-kube-apiserver 54m Normal LeaderElection lease/cert-regeneration-controller-lock ip-10-0-197-197_bcc2be9a-8dc9-42bf-930e-141f3fa8e873 became leader
openshift-multus 54m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container whereabouts-cni
openshift-multus 54m Normal Started pod/multus-additional-cni-plugins-l7zm7 Started container whereabouts-cni
openshift-multus 54m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine
openshift-monitoring 54m Normal Created pod/prometheus-operator-admission-webhook-5c549f4449-d5c7w Created container prometheus-operator-admission-webhook
openshift-monitoring 54m Normal Pulled pod/prometheus-operator-admission-webhook-5c549f4449-d5c7w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e2218fd1d860bdb72a28d8fc34e1d5e7c3674bf1d0005583d70800dcd79d2" in 1.259330725s (1.259344924s including waiting)
openshift-monitoring 54m Normal Started pod/prometheus-operator-admission-webhook-5c549f4449-d5c7w Started container prometheus-operator-admission-webhook
openshift-multus 54m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container kube-multus-additional-cni-plugins
openshift-apiserver 54m Warning Unhealthy pod/apiserver-565b67b9f7-lvnhl Readiness probe failed: HTTP probe failed with statuscode: 500
openshift-apiserver 54m Warning ProbeError pod/apiserver-565b67b9f7-lvnhl Readiness probe error: HTTP probe failed with statuscode: 500...
openshift-ingress 54m Warning Unhealthy pod/router-default-699d8c97f-9xbcx Startup probe failed: HTTP probe failed with statuscode: 500
openshift-ingress 54m Warning ProbeError pod/router-default-699d8c97f-9xbcx Startup probe error: HTTP probe failed with statuscode: 500...
openshift-apiserver 54m Normal Pulled pod/apiserver-6977bc9f6b-b9qrr Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine
openshift-apiserver 54m Normal Pulled pod/apiserver-6977bc9f6b-b9qrr Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine
openshift-apiserver 54m Warning ErrorUpdatingResource pod/apiserver-6977bc9f6b-b9qrr addLogicalPort failed for openshift-apiserver/apiserver-6977bc9f6b-b9qrr: failed to update annotation on pod openshift-apiserver/apiserver-6977bc9f6b-b9qrr: Operation cannot be fulfilled on pods "apiserver-6977bc9f6b-b9qrr": the object has been modified; please apply your changes to the latest version and try again
openshift-apiserver 54m Normal AddedInterface pod/apiserver-6977bc9f6b-b9qrr Add eth0 [10.129.0.34/23] from ovn-kubernetes
default 54m Normal ConfigDriftMonitorStarted node/ip-10-0-232-8.ec2.internal Config Drift Monitor started, watching against rendered-worker-e5630006427036c937f2156f999e7beb
openshift-apiserver 54m Normal Started pod/apiserver-6977bc9f6b-b9qrr Started container fix-audit-permissions
default 54m Normal Uncordon node/ip-10-0-232-8.ec2.internal Update completed for config rendered-worker-e5630006427036c937f2156f999e7beb and node has been uncordoned
openshift-apiserver-operator 54m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-565b67b9f7-lvnhl pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-6977bc9f6b-b9qrr pod)"
default 54m Normal NodeDone node/ip-10-0-232-8.ec2.internal Setting node ip-10-0-232-8.ec2.internal, currentConfig rendered-worker-e5630006427036c937f2156f999e7beb to Done
openshift-apiserver 54m Normal Created pod/apiserver-6977bc9f6b-b9qrr Created container fix-audit-permissions
openshift-apiserver 54m Normal Started pod/apiserver-6977bc9f6b-b9qrr Started container openshift-apiserver-check-endpoints
openshift-apiserver 54m Normal Started pod/apiserver-6977bc9f6b-b9qrr Started container openshift-apiserver
openshift-dns 54m Warning TopologyAwareHintsDisabled service/dns-default Insufficient Node information: allocatable CPU or zone not specified on one or more nodes, addressType: IPv4
openshift-apiserver 54m Normal Pulled pod/apiserver-6977bc9f6b-b9qrr Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
openshift-apiserver 54m Normal Created pod/apiserver-6977bc9f6b-b9qrr Created container openshift-apiserver
openshift-apiserver 54m Normal Created pod/apiserver-6977bc9f6b-b9qrr Created container openshift-apiserver-check-endpoints
openshift-apiserver 54m Warning FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling
openshift-apiserver 54m Warning FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling
openshift-apiserver-operator 54m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-6977bc9f6b-b9qrr pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-6977bc9f6b-b9qrr pod)"
openshift-kube-scheduler-operator 54m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 6" to "NodeInstallerProgressing: 1 nodes are at revision 0; 2 nodes are at revision 6",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 6" to "StaticPodsAvailable: 2 nodes are active; 1 nodes are at revision 0; 2 nodes are at revision 6"
openshift-kube-scheduler-operator 54m Normal NodeCurrentRevisionChanged deployment/openshift-kube-scheduler-operator Updated node "ip-10-0-140-6.ec2.internal" from revision 0 to 6 because static pod is ready
openshift-apiserver-operator 54m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-6977bc9f6b-b9qrr pod)" to "All is well"
openshift-apiserver-operator 54m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/apps.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/apps.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/authorization.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/authorization.openshift.io/v1\": context deadline exceeded\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/build.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/build.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.29:8443/apis/image.openshift.io/v1: Get \"https://10.128.0.29:8443/apis/image.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/project.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/project.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/quota.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/quota.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.129.0.29:8443/apis/route.openshift.io/v1: Get \"https://10.129.0.29:8443/apis/route.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request"
openshift-kube-scheduler-operator 54m Normal NodeTargetRevisionChanged deployment/openshift-kube-scheduler-operator Updating node "ip-10-0-197-197.ec2.internal" from revision 0 to 6 because node ip-10-0-197-197.ec2.internal static pod not found
openshift-monitoring 54m Warning FailedToUpdateEndpointSlices service/prometheus-operator Error updating Endpoint Slices for Service openshift-monitoring/prometheus-operator: failed to create EndpointSlice for Service openshift-monitoring/prometheus-operator: Internal error occurred: admission plugin "OwnerReferencesPermissionEnforcement" failed to complete validation in 13s
openshift-etcd 54m Normal Killing pod/etcd-ip-10-0-140-6.ec2.internal Stopping container etcd-metrics
openshift-etcd 54m Normal Killing pod/etcd-ip-10-0-140-6.ec2.internal Stopping container etcdctl
openshift-kube-scheduler 54m Normal Pulled pod/installer-6-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine
openshift-kube-scheduler 54m Normal AddedInterface pod/installer-6-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.56/23] from ovn-kubernetes
openshift-etcd 54m Normal StaticPodInstallerCompleted pod/installer-5-ip-10-0-140-6.ec2.internal Successfully installed revision 5
openshift-kube-scheduler-operator 54m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-6-ip-10-0-197-197.ec2.internal -n openshift-kube-scheduler because it was missing
openshift-etcd 54m Normal Killing pod/etcd-ip-10-0-140-6.ec2.internal Stopping container etcd-readyz
openshift-kube-scheduler 54m Normal Created pod/installer-6-ip-10-0-197-197.ec2.internal Created container installer
openshift-kube-scheduler 54m Normal Started pod/installer-6-ip-10-0-197-197.ec2.internal Started container installer
openshift-kube-controller-manager 54m Normal StaticPodInstallerCompleted pod/installer-4-ip-10-0-197-197.ec2.internal Successfully installed revision 4
openshift-kube-controller-manager-operator 54m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal"
openshift-kube-controller-manager 54m Normal Pulling pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9"
openshift-kube-controller-manager-operator 54m Normal SATokenSignerControllerStuck deployment/kube-controller-manager-operator unexpected addresses: 10.0.8.110
openshift-kube-controller-manager 54m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine
openshift-kube-controller-manager 54m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container cluster-policy-controller
openshift-kube-controller-manager 54m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine
openshift-kube-controller-manager 54m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container kube-controller-manager-cert-syncer
openshift-kube-controller-manager 54m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container kube-controller-manager-recovery-controller
openshift-kube-controller-manager 54m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" in 1.675742148s (1.675786379s including waiting)
openshift-kube-controller-manager 54m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container cluster-policy-controller
openshift-kube-controller-manager 54m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container kube-controller-manager-cert-syncer
openshift-kube-controller-manager 54m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container kube-controller-manager-recovery-controller
openshift-kube-controller-manager 54m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-197-197.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
kube-system 54m Required control plane pods have been created
openshift-kube-apiserver 54m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver
openshift-kube-apiserver 54m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-insecure-readyz
openshift-kube-apiserver 54m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-cert-regeneration-controller
openshift-kube-apiserver 54m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-cert-syncer
openshift-kube-apiserver 54m Normal StaticPodInstallerCompleted pod/installer-5-ip-10-0-197-197.ec2.internal Successfully installed revision 5
openshift-kube-controller-manager-operator 54m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal -n openshift-kube-controller-manager because it was missing
openshift-kube-apiserver 54m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-check-endpoints
openshift-kube-controller-manager 54m Normal Pulled pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine
openshift-kube-controller-manager 54m Normal Started pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal Started container guard
openshift-kube-controller-manager 54m Normal AddedInterface pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.57/23] from ovn-kubernetes
openshift-kube-controller-manager 54m Normal Created pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal Created container guard
openshift-kube-apiserver 54m Warning ProbeError pod/kube-apiserver-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:17697/healthz": dial tcp 10.0.197.197:17697: connect: connection refused...
openshift-kube-apiserver 54m Warning Unhealthy pod/kube-apiserver-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:17697/healthz": dial tcp 10.0.197.197:17697: connect: connection refused openshift-kube-controller-manager-operator 54m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" openshift-cluster-version 54m Normal LeaderElection configmap/version ip-10-0-239-132_063075d3-acac-4f6b-b733-12f6fcdcf93f became leader openshift-cluster-version 54m Normal LeaderElection lease/version ip-10-0-239-132_063075d3-acac-4f6b-b733-12f6fcdcf93f became leader openshift-kube-controller-manager-operator 54m Normal SATokenSignerControllerOK deployment/kube-controller-manager-operator found expected kube-apiserver endpoints openshift-cluster-version 54m Normal LoadPayload clusterversion/version Loading payload version="4.13.0-rc.0" image="quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" openshift-cluster-version 54m Normal RetrievePayload clusterversion/version Retrieving and verifying payload version="4.13.0-rc.0" image="quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" openshift-etcd-operator 54m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" to "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nGuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" openshift-cluster-version 54m Normal PayloadLoaded clusterversion/version Payload loaded version="4.13.0-rc.0" image="quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" architecture="amd64" openshift-kube-controller-manager-operator 54m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-197-197.ec2.internal" from revision 0 to 4 because static pod is ready openshift-kube-controller-manager-operator 54m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 4"),Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 nodes are at revision 0; 2 nodes are at revision 4" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 4" openshift-kube-controller-manager-operator 54m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing openshift-kube-controller-manager-operator 54m Normal 
ConfigMapUpdated deployment/kube-controller-manager-operator Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed:... openshift-kube-apiserver-operator 54m Normal ConfigMapUpdated deployment/kube-apiserver-operator Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver:... openshift-kube-apiserver-operator 54m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 6 triggered by "required configmap/sa-token-signing-certs has changed" openshift-kube-apiserver-operator 54m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/revision-status-6 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 54m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-pod-6 -n openshift-kube-apiserver because it was missing kube-system 54m Normal LeaderElection configmap/kube-controller-manager ip-10-0-197-197_9bbc0123-a7bc-4052-8379-4b4558159bf3 became leader openshift-kube-apiserver-operator 54m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/config-6 -n openshift-kube-apiserver because it was missing kube-system 54m Normal LeaderElection lease/kube-controller-manager ip-10-0-197-197_9bbc0123-a7bc-4052-8379-4b4558159bf3 became leader openshift-kube-apiserver-operator 54m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-6 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 54m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/bound-sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 54m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/etcd-serving-ca-6 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 54m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-server-ca-6 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 54m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kubelet-serving-ca-6 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 54m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/sa-token-signing-certs-6 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 54m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-audit-policies-6 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 54m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/etcd-client-6 -n openshift-kube-apiserver because it was missing openshift-etcd-operator 54m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nGuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" to "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nGuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal\nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd-metrics\" is terminated: Error: 
\"etcdmain/grpc_proxy.go:558\",\"msg\":\"gRPC proxy listening for metrics\",\"address\":\"https://0.0.0.0:9979\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:04.162Z\",\"caller\":\"etcdmain/grpc_proxy.go:261\",\"msg\":\"started gRPC proxy\",\"address\":\"127.0.0.1:9977\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:04.162Z\",\"caller\":\"etcdmain/main.go:44\",\"msg\":\"notifying init daemon\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:04.162Z\",\"caller\":\"etcdmain/main.go:50\",\"msg\":\"successfully notified init daemon\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:04.162Z\",\"caller\":\"etcdmain/grpc_proxy.go:251\",\"msg\":\"gRPC proxy server metrics URL serving\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.161Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.161Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000240880, IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.161Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.161Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel picks a new address \\\"10.0.140.6:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.161Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000240880, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.169Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.169Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000240880, READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.170Z\",\"call\nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcdctl\" is terminated: Error: " openshift-kube-scheduler 54m Normal StaticPodInstallerCompleted pod/installer-6-ip-10-0-197-197.ec2.internal Successfully installed revision 6 openshift-kube-scheduler 54m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container wait-for-host-port openshift-kube-scheduler-operator 54m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" to "GuardControllerDegraded: Missing PodIP in operand openshift-kube-scheduler-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal" openshift-kube-scheduler 54m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container wait-for-host-port openshift-kube-scheduler 54m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 
54m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container kube-scheduler openshift-kube-scheduler 54m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 54m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 54m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container kube-scheduler openshift-kube-apiserver-operator 54m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-serving-certkey-6 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler 54m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 54m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container kube-scheduler-cert-syncer openshift-kube-scheduler 54m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container kube-scheduler-recovery-controller openshift-kube-scheduler 54m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container kube-scheduler-recovery-controller openshift-kube-scheduler 54m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container kube-scheduler-cert-syncer openshift-kube-controller-manager-operator 54m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/revision-status-5 -n openshift-kube-controller-manager because it was missing openshift-kube-apiserver-operator 54m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-client-token-6 -n openshift-kube-apiserver because it was missing openshift-kube-controller-manager-operator 54m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-manager-pod-5 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 54m Normal RevisionTriggered deployment/openshift-kube-scheduler-operator new revision 7 triggered by "secret/localhost-recovery-client-token has changed" openshift-kube-controller-manager-operator 54m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/config-5 -n openshift-kube-controller-manager because it was missing openshift-kube-apiserver-operator 54m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 7 triggered by "required secret/localhost-recovery-client-token has changed" openshift-kube-apiserver-operator 54m Normal RevisionCreate deployment/kube-apiserver-operator Revision 5 created because required configmap/sa-token-signing-certs has changed openshift-kube-controller-manager-operator 54m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/cluster-policy-controller-config-5 -n openshift-kube-controller-manager because it was missing openshift-kube-apiserver-operator 54m Normal SecretCreated deployment/kube-apiserver-operator Created 
Secret/webhook-authenticator-6 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 54m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/revision-status-7 -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 54m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/controller-manager-kubeconfig-5 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 54m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-cert-syncer-kubeconfig-5 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 54m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-pod-7 -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 54m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-controller-manager because it was missing openshift-kube-apiserver-operator 54m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/revision-status-7 -n openshift-kube-apiserver because it was missing openshift-kube-controller-manager-operator 54m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/service-ca-5 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 54m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/recycler-config-5 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 54m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/config-7 -n openshift-kube-scheduler because it was missing openshift-etcd 53m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container setup openshift-etcd-operator 53m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nGuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal\nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd-metrics\" is terminated: Error: \"etcdmain/grpc_proxy.go:558\",\"msg\":\"gRPC proxy listening for metrics\",\"address\":\"https://0.0.0.0:9979\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:04.162Z\",\"caller\":\"etcdmain/grpc_proxy.go:261\",\"msg\":\"started gRPC proxy\",\"address\":\"127.0.0.1:9977\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:04.162Z\",\"caller\":\"etcdmain/main.go:44\",\"msg\":\"notifying init daemon\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:04.162Z\",\"caller\":\"etcdmain/main.go:50\",\"msg\":\"successfully notified init daemon\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:04.162Z\",\"caller\":\"etcdmain/grpc_proxy.go:251\",\"msg\":\"gRPC proxy server metrics URL serving\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.161Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: 
{\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.161Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000240880, IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.161Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.161Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel picks a new address \\\"10.0.140.6:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.161Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000240880, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.169Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.169Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000240880, READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:18:05.170Z\",\"call\nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcdctl\" is terminated: Error: " to "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nGuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" openshift-kube-controller-manager-operator 53m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/service-account-private-key-5 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 53m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/serving-cert-5 -n openshift-kube-controller-manager because it was missing openshift-etcd 53m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 53m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container setup openshift-kube-apiserver-operator 53m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-pod-7 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 53m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/serviceaccount-ca-7 -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 53m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/localhost-recovery-client-token-5 -n openshift-kube-controller-manager because it was missing openshift-kube-apiserver-operator 53m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/config-7 -n openshift-kube-apiserver because it was missing openshift-etcd 53m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-ensure-env-vars openshift-etcd 53m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-ensure-env-vars openshift-kube-scheduler 53m Normal AddedInterface pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.58/23] from ovn-kubernetes openshift-kube-scheduler 
53m Normal Pulled pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-etcd 53m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-kube-scheduler-operator 53m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 53m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nRevisionControllerDegraded: conflicting latestAvailableRevision 5" openshift-kube-controller-manager-operator 53m Normal RevisionCreate deployment/kube-controller-manager-operator Revision 4 created because secret/localhost-recovery-client-token has changed openshift-kube-scheduler-operator 53m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/scheduler-kubeconfig-7 -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 53m Normal RevisionTriggered deployment/kube-controller-manager-operator new revision 5 triggered by "secret/localhost-recovery-client-token has changed" openshift-kube-scheduler-operator 53m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") openshift-kube-controller-manager-operator 53m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nRevisionControllerDegraded: conflicting latestAvailableRevision 5" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" openshift-etcd 53m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-resources-copy openshift-kube-apiserver-operator 53m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have 
achieved new revision 5" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 6" openshift-kube-scheduler 53m Normal Created pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal Created container guard openshift-kube-scheduler 53m Normal Started pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal Started container guard openshift-etcd 53m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 53m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-resources-copy openshift-apiserver-operator 53m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request openshift-etcd 53m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 53m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcdctl default 53m Normal RegisteredNode node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal event: Registered Node ip-10-0-197-197.ec2.internal in Controller default 53m Normal RegisteredNode node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal event: Registered Node ip-10-0-160-152.ec2.internal in Controller openshift-etcd 53m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcdctl openshift-etcd 53m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 53m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd openshift-etcd 53m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd openshift-ingress 53m Normal EnsuringLoadBalancer service/router-default Ensuring load balancer default 53m Normal RegisteredNode node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal event: Registered Node ip-10-0-239-132.ec2.internal in Controller openshift-kube-scheduler-operator 53m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-7 -n openshift-kube-scheduler because it was missing openshift-etcd 53m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 53m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-metrics default 53m Normal RegisteredNode node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal event: Registered Node ip-10-0-232-8.ec2.internal in Controller openshift-etcd 53m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 53m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-readyz openshift-etcd 53m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created 
container etcd-readyz default 53m Normal RegisteredNode node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal event: Registered Node ip-10-0-140-6.ec2.internal in Controller openshift-etcd 53m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-metrics openshift-kube-apiserver-operator 53m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-7 -n openshift-kube-apiserver because it was missing openshift-ingress-operator 53m Warning BackOff pod/ingress-operator-6486794b49-42ddh Back-off restarting failed container ingress-operator in pod ingress-operator-6486794b49-42ddh_openshift-ingress-operator(46eadeed-a6cd-4d31-9694-23ee794a96d8) openshift-dns 53m Warning TopologyAwareHintsDisabled service/dns-default Insufficient Node information: allocatable CPU or zone not specified on one or more nodes, addressType: IPv4 openshift-kube-apiserver-operator 53m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/bound-sa-token-signing-certs-7 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 53m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/serving-cert-7 -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 53m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 4" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 4; 0 nodes have achieved new revision 5" openshift-kube-scheduler-operator 53m Normal RevisionCreate deployment/openshift-kube-scheduler-operator Revision 6 created because secret/localhost-recovery-client-token has changed openshift-etcd-operator 53m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nGuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" to "GuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" openshift-kube-apiserver-operator 53m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/etcd-serving-ca-7 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 53m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/localhost-recovery-client-token-7 -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 53m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 4 to 5 because node ip-10-0-239-132.ec2.internal with revision 4 is the oldest openshift-apiserver-operator 53m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request openshift-apiserver-operator 53m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request 
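Across the etcd, kube-apiserver, kube-controller-manager and kube-scheduler operators, the span above records the full static-pod revision cycle: RevisionTriggered, the revision-N ConfigMaps and Secrets being created, RevisionCreate, the installer and revision-pruner pods, StaticPodInstallerCompleted, and finally NodeCurrentRevisionChanged. A second sketch, under the same events.txt assumption as above, reconstructs that cycle per namespace by matching those four lifecycle reasons and the revision number each message mentions.

import re
from collections import defaultdict

# Minimal sketch (same hypothetical events.txt as above): group the static-pod
# revision lifecycle events by namespace and report the first revision number
# each message mentions, giving a rough per-operand rollout timeline. Note that
# NodeCurrentRevisionChanged messages ("from revision 0 to 4") will report the
# "from" revision, since only the first number is taken.
LIFECYCLE = ("RevisionTriggered", "RevisionCreate",
             "StaticPodInstallerCompleted", "NodeCurrentRevisionChanged")
RECORD_RE = re.compile(
    r"(?P<ns>openshift-[a-z-]+)\s+\d+m\s+Normal\s+(?P<reason>"
    + "|".join(LIFECYCLE)
    + r")\s+(?P<rest>.*?)(?=openshift-[a-z-]+\s+\d+m|\Z)",
    re.S,
)
REVISION_RE = re.compile(r"revision (\d+)", re.I)

def rollout_timeline(path):
    timeline = defaultdict(list)
    with open(path, encoding="utf-8") as fh:
        text = fh.read()
    for m in RECORD_RE.finditer(text):
        rev = REVISION_RE.search(m.group("rest"))
        timeline[m.group("ns")].append(
            (m.group("reason"), rev.group(1) if rev else "?")
        )
    return timeline

if __name__ == "__main__":
    for ns, steps in sorted(rollout_timeline("events.txt").items()):
        print(ns)
        for reason, rev in steps:
            print(f"  {reason:<30} revision {rev}")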
openshift-ingress 53m Normal EnsuredLoadBalancer service/router-default Ensured load balancer openshift-kube-apiserver-operator 53m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-server-ca-7 -n openshift-kube-apiserver because it was missing openshift-etcd-operator 53m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" to "EtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy\nGuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" openshift-kube-controller-manager-operator 53m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-5-ip-10-0-239-132.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-etcd-operator 53m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 5\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 5\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" openshift-kube-scheduler-operator 53m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 0; 2 nodes are at revision 6" to "NodeInstallerProgressing: 1 nodes are at revision 0; 2 nodes are at revision 6; 0 nodes have achieved new revision 7",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 nodes are at revision 0; 2 nodes are at revision 6" to "StaticPodsAvailable: 2 nodes are active; 1 nodes are at revision 0; 2 nodes are at revision 6; 0 nodes have achieved new revision 7" openshift-kube-controller-manager 53m Normal Started pod/installer-5-ip-10-0-239-132.ec2.internal Started container installer openshift-kube-scheduler 53m Normal Pulled pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-controller-manager 53m Normal AddedInterface pod/installer-5-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.35/23] from ovn-kubernetes openshift-kube-controller-manager 53m Normal Pulled pod/installer-5-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 53m Normal Created pod/installer-5-ip-10-0-239-132.ec2.internal Created container installer openshift-kube-scheduler 53m Normal AddedInterface pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.36/23] from ovn-kubernetes openshift-kube-apiserver-operator 53m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kubelet-serving-ca-7 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 53m Normal PodCreated deployment/openshift-kube-scheduler-operator Created 
Pod/revision-pruner-7-ip-10-0-239-132.ec2.internal -n openshift-kube-scheduler because it was missing openshift-kube-scheduler 53m Normal Created pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Created container pruner openshift-kube-apiserver-operator 53m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/sa-token-signing-certs-7 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler 53m Normal Started pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Started container pruner openshift-kube-apiserver-operator 53m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-6-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 53m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/revision-pruner-7-ip-10-0-140-6.ec2.internal -n openshift-kube-scheduler because it was missing openshift-kube-apiserver 53m Normal Created pod/revision-pruner-6-ip-10-0-197-197.ec2.internal Created container pruner openshift-kube-apiserver 53m Normal AddedInterface pod/revision-pruner-6-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.59/23] from ovn-kubernetes openshift-kube-apiserver 53m Normal Pulled pod/revision-pruner-6-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-scheduler 53m Normal Pulled pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-apiserver-operator 53m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request openshift-kube-apiserver 53m Normal Started pod/revision-pruner-6-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-apiserver-operator 53m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-audit-policies-7 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler 53m Normal AddedInterface pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.41/23] from ovn-kubernetes openshift-kube-scheduler 53m Normal Started pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Started container pruner openshift-kube-scheduler 53m Normal Created pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Created container pruner openshift-kube-apiserver-operator 53m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/etcd-client-7 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 53m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-7-ip-10-0-197-197.ec2.internal -n openshift-kube-scheduler because it was missing openshift-kube-scheduler 53m Normal AddedInterface pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.61/23] from ovn-kubernetes openshift-kube-scheduler 53m Normal Created pod/installer-7-ip-10-0-197-197.ec2.internal Created container installer openshift-ingress-operator 53m Normal Started pod/ingress-operator-6486794b49-42ddh Started container ingress-operator openshift-ingress-operator 53m Normal Pulled pod/ingress-operator-6486794b49-42ddh Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" already present on machine openshift-kube-scheduler 53m Normal Started pod/installer-7-ip-10-0-197-197.ec2.internal Started container installer openshift-kube-scheduler-operator 53m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/revision-pruner-7-ip-10-0-197-197.ec2.internal -n openshift-kube-scheduler because it was missing openshift-ingress-operator 53m Normal Created pod/ingress-operator-6486794b49-42ddh Created container ingress-operator openshift-kube-scheduler 53m Normal Pulled pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 53m Normal Pulled pod/installer-7-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 53m Normal AddedInterface pod/installer-7-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.60/23] from ovn-kubernetes openshift-kube-apiserver-operator 53m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-6-ip-10-0-239-132.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 53m Warning FailedCreatePodSandBox pod/revision-pruner-6-ip-10-0-239-132.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-6-ip-10-0-239-132.ec2.internal_openshift-kube-apiserver_d71d69c5-b6c1-41af-ac58-ec01daebec4e_0(9897dca16ec273b9ea4abf55ea494d1ad3592f8817442e01b22ed0d4924b9625): error adding pod openshift-kube-apiserver_revision-pruner-6-ip-10-0-239-132.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-6-ip-10-0-239-132.ec2.internal/d71d69c5-b6c1-41af-ac58-ec01daebec4e]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-6-ip-10-0-239-132.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-kube-scheduler 53m Normal Created pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Created container pruner openshift-kube-scheduler 53m Normal Started pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-apiserver 53m Normal AddedInterface pod/revision-pruner-6-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.37/23] from ovn-kubernetes openshift-kube-apiserver-operator 53m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-6-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 53m Normal Created pod/revision-pruner-6-ip-10-0-239-132.ec2.internal Created container pruner openshift-apiserver-operator 53m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request openshift-kube-apiserver 53m Normal Started pod/revision-pruner-6-ip-10-0-239-132.ec2.internal Started container pruner openshift-kube-apiserver 53m Normal Pulled 
pod/revision-pruner-6-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 53m Warning FailedCreatePodSandBox pod/installer-6-ip-10-0-197-197.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver_7bdb6741-39c6-4ba8-836d-c516d1155ed2_0(2c1306cc694b33bde118a2a9e7ff5aaec987745431b8da30fd5e56786ef78c49): error adding pod openshift-kube-apiserver_installer-6-ip-10-0-197-197.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-apiserver/installer-6-ip-10-0-197-197.ec2.internal/7bdb6741-39c6-4ba8-836d-c516d1155ed2]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-ip-10-0-197-197.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-kube-apiserver 53m Warning FailedCreatePodSandBox pod/installer-6-ip-10-0-197-197.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver_7bdb6741-39c6-4ba8-836d-c516d1155ed2_0(cc380eebc049a01a402dc9c6b767667bc5547392f34fa064d1797924ee0e385a): error adding pod openshift-kube-apiserver_installer-6-ip-10-0-197-197.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-apiserver/installer-6-ip-10-0-197-197.ec2.internal/7bdb6741-39c6-4ba8-836d-c516d1155ed2]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-6-ip-10-0-197-197.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-kube-apiserver-operator 53m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-serving-certkey-7 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 53m Normal AddedInterface pod/revision-pruner-6-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.42/23] from ovn-kubernetes openshift-kube-apiserver-operator 53m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-6-ip-10-0-140-6.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 53m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-client-token-7 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 53m Normal Pulled pod/revision-pruner-6-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 53m Normal Created pod/revision-pruner-6-ip-10-0-140-6.ec2.internal Created container pruner openshift-kube-apiserver 53m Normal Started pod/revision-pruner-6-ip-10-0-140-6.ec2.internal Started container pruner openshift-apiserver-operator 53m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request openshift-etcd-operator 53m 
Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 nodes are at revision 0; 1 nodes are at revision 2; 1 nodes are at revision 5",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 2; 0 nodes have achieved new revision 5\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "StaticPodsAvailable: 2 nodes are active; 1 nodes are at revision 0; 1 nodes are at revision 2; 1 nodes are at revision 5\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" openshift-kube-apiserver-operator 53m Normal RevisionCreate deployment/kube-apiserver-operator Revision 6 created because required secret/localhost-recovery-client-token has changed openshift-etcd-operator 53m Normal NodeCurrentRevisionChanged deployment/etcd-operator Updated node "ip-10-0-140-6.ec2.internal" from revision 0 to 5 because static pod is ready openshift-kube-apiserver-operator 53m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/webhook-authenticator-7 -n openshift-kube-apiserver because it was missing openshift-etcd-operator 53m Normal NodeTargetRevisionChanged deployment/etcd-operator Updating node "ip-10-0-197-197.ec2.internal" from revision 0 to 5 because node ip-10-0-197-197.ec2.internal static pod not found openshift-kube-apiserver 53m Normal AddedInterface pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.63/23] from ovn-kubernetes openshift-kube-apiserver 53m Normal Pulled pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver-operator 53m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-7-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-apiserver-operator 53m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request openshift-etcd 53m Normal Pulled pod/installer-5-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd-operator 53m Normal PodCreated deployment/etcd-operator Created Pod/installer-5-ip-10-0-197-197.ec2.internal -n openshift-etcd because it was missing openshift-kube-apiserver 53m Normal Started pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-apiserver 53m Normal Created pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Created container pruner openshift-etcd 53m Normal AddedInterface pod/installer-5-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.64/23] from ovn-kubernetes openshift-etcd 53m Normal Started pod/installer-5-ip-10-0-197-197.ec2.internal Started container installer openshift-etcd 53m Normal Created pod/installer-5-ip-10-0-197-197.ec2.internal Created container installer openshift-kube-apiserver-operator 53m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for 
clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 7",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 7" openshift-kube-apiserver-operator 53m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-7-ip-10-0-239-132.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 53m Normal Created pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Created container pruner openshift-kube-apiserver 53m Normal Started pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Started container pruner openshift-kube-apiserver 53m Normal Pulled pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 53m Normal AddedInterface pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.38/23] from ovn-kubernetes openshift-cloud-credential-operator 53m Normal ScalingReplicaSet deployment/pod-identity-webhook Scaled up replica set pod-identity-webhook-b645775d7 to 2 openshift-cloud-credential-operator 53m Normal PodDisruptionBudgetCreated deployment/cloud-credential-operator Created PodDisruptionBudget.policy/pod-identity-webhook -n openshift-cloud-credential-operator because it was missing openshift-cloud-credential-operator 53m Normal MutatingWebhookConfigurationUpdated deployment/cloud-credential-operator Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/pod-identity-webhook because it changed openshift-cloud-credential-operator 53m Warning FailedMount pod/pod-identity-webhook-b645775d7-24tr2 MountVolume.SetUp failed for volume "webhook-certs" : secret "pod-identity-webhook" not found openshift-apiserver-operator 53m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request openshift-kube-apiserver 53m Normal AddedInterface pod/installer-6-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.62/23] from ovn-kubernetes openshift-cloud-credential-operator 53m Normal ClusterRoleCreated deployment/cloud-credential-operator Created ClusterRole.rbac.authorization.k8s.io/pod-identity-webhook because it was missing openshift-kube-apiserver 53m Normal Pulled pod/installer-6-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-cloud-credential-operator 53m Normal SuccessfulCreate replicaset/pod-identity-webhook-b645775d7 Created pod: pod-identity-webhook-b645775d7-js8hv openshift-cloud-credential-operator 53m Normal DeploymentCreated deployment/cloud-credential-operator Created Deployment.apps/pod-identity-webhook -n openshift-cloud-credential-operator because it was missing openshift-cloud-credential-operator 53m Normal SuccessfulCreate replicaset/pod-identity-webhook-b645775d7 Created pod: pod-identity-webhook-b645775d7-24tr2 openshift-cloud-credential-operator 53m 
Normal ServiceAccountCreated deployment/cloud-credential-operator Created ServiceAccount/pod-identity-webhook -n openshift-cloud-credential-operator because it was missing openshift-cloud-credential-operator 53m Normal RoleCreated deployment/cloud-credential-operator Created Role.rbac.authorization.k8s.io/pod-identity-webhook -n openshift-cloud-credential-operator because it was missing openshift-cloud-credential-operator 53m Normal ServiceCreated deployment/cloud-credential-operator Created Service/pod-identity-webhook -n openshift-cloud-credential-operator because it was missing openshift-cloud-credential-operator 53m Normal RoleBindingCreated deployment/cloud-credential-operator Created RoleBinding.rbac.authorization.k8s.io/pod-identity-webhook -n openshift-cloud-credential-operator because it was missing openshift-cloud-credential-operator 53m Normal ClusterRoleBindingCreated deployment/cloud-credential-operator Created ClusterRoleBinding.rbac.authorization.k8s.io/pod-identity-webhook because it was missing openshift-cloud-credential-operator 53m Warning FailedMount pod/pod-identity-webhook-b645775d7-js8hv MountVolume.SetUp failed for volume "webhook-certs" : secret "pod-identity-webhook" not found openshift-cloud-credential-operator 53m Normal MutatingWebhookConfigurationCreated deployment/cloud-credential-operator Created MutatingWebhookConfiguration.admissionregistration.k8s.io/pod-identity-webhook because it was missing openshift-cloud-credential-operator 53m Normal Pulling pod/pod-identity-webhook-b645775d7-js8hv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e248571068c87bc5b2f69bd4fc2bc3934d8bcd2b2a7fecadc754a30e06ac592" openshift-cloud-credential-operator 53m Normal AddedInterface pod/pod-identity-webhook-b645775d7-24tr2 Add eth0 [10.128.0.43/23] from ovn-kubernetes openshift-cloud-credential-operator 53m Normal Pulling pod/pod-identity-webhook-b645775d7-24tr2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e248571068c87bc5b2f69bd4fc2bc3934d8bcd2b2a7fecadc754a30e06ac592" openshift-kube-apiserver-operator 53m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-7-ip-10-0-140-6.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 53m Normal Created pod/installer-6-ip-10-0-197-197.ec2.internal Created container installer openshift-kube-apiserver 53m Warning FailedCreatePodSandBox pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-7-ip-10-0-140-6.ec2.internal_openshift-kube-apiserver_bccd6742-3ebe-4e7e-9c54-c9760e606349_0(c57e56230459f75c81644239c40a7728fb4606b8beef57a86634003dd0fa6d2f): error adding pod openshift-kube-apiserver_revision-pruner-7-ip-10-0-140-6.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-7-ip-10-0-140-6.ec2.internal/bccd6742-3ebe-4e7e-9c54-c9760e606349]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-7-ip-10-0-140-6.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-kube-apiserver 53m Normal Started pod/installer-6-ip-10-0-197-197.ec2.internal Started container installer openshift-cloud-credential-operator 53m Normal AddedInterface pod/pod-identity-webhook-b645775d7-js8hv Add eth0 
[10.129.0.39/23] from ovn-kubernetes openshift-cloud-credential-operator 53m Normal Pulled pod/pod-identity-webhook-b645775d7-24tr2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e248571068c87bc5b2f69bd4fc2bc3934d8bcd2b2a7fecadc754a30e06ac592" in 1.161602294s (1.161615316s including waiting) openshift-cloud-credential-operator 53m Normal Pulled pod/pod-identity-webhook-b645775d7-js8hv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e248571068c87bc5b2f69bd4fc2bc3934d8bcd2b2a7fecadc754a30e06ac592" in 1.355000768s (1.355013257s including waiting) openshift-kube-apiserver 53m Warning FailedCreatePodSandBox pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-7-ip-10-0-140-6.ec2.internal_openshift-kube-apiserver_bccd6742-3ebe-4e7e-9c54-c9760e606349_0(737007dc625d00c3ba84d503372b0bd2a9bf1c876dbf0fe1fae29128d1b018fa): error adding pod openshift-kube-apiserver_revision-pruner-7-ip-10-0-140-6.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-7-ip-10-0-140-6.ec2.internal/bccd6742-3ebe-4e7e-9c54-c9760e606349]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-7-ip-10-0-140-6.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-cloud-credential-operator 53m Normal Started pod/pod-identity-webhook-b645775d7-js8hv Started container pod-identity-webhook openshift-cloud-credential-operator 53m Normal Created pod/pod-identity-webhook-b645775d7-24tr2 Created container pod-identity-webhook openshift-cloud-credential-operator 53m Normal Started pod/pod-identity-webhook-b645775d7-24tr2 Started container pod-identity-webhook openshift-cloud-credential-operator 53m Normal Created pod/pod-identity-webhook-b645775d7-js8hv Created container pod-identity-webhook openshift-authentication-operator 53m Warning OpenShiftAPICheckFailed deployment/authentication-operator "user.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request openshift-apiserver-operator 53m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request openshift-kube-apiserver 53m Normal Killing pod/installer-6-ip-10-0-197-197.ec2.internal Stopping container installer openshift-kube-controller-manager 53m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container kube-controller-manager-recovery-controller openshift-kube-controller-manager 53m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container kube-controller-manager openshift-authentication-operator 53m Warning OpenShiftAPICheckFailed deployment/authentication-operator "oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request openshift-kube-controller-manager 53m Normal StaticPodInstallerCompleted pod/installer-5-ip-10-0-239-132.ec2.internal Successfully installed revision 5 openshift-kube-controller-manager 53m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container 
kube-controller-manager-cert-syncer openshift-kube-controller-manager 53m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container cluster-policy-controller openshift-kube-apiserver-operator 53m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-7-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-monitoring 53m Warning FailedToUpdateEndpointSlices service/prometheus-operator Error updating Endpoint Slices for Service openshift-monitoring/prometheus-operator: failed to create EndpointSlice for Service openshift-monitoring/prometheus-operator: Internal error occurred: admission plugin "OwnerReferencesPermissionEnforcement" failed to complete validation in 13s openshift-cloud-credential-operator 53m Warning FailedToUpdateEndpointSlices service/pod-identity-webhook Error updating Endpoint Slices for Service openshift-cloud-credential-operator/pod-identity-webhook: failed to create EndpointSlice for Service openshift-cloud-credential-operator/pod-identity-webhook: Internal error occurred: admission plugin "OwnerReferencesPermissionEnforcement" failed to complete validation in 13s openshift-kube-scheduler 53m Normal StaticPodInstallerCompleted pod/installer-7-ip-10-0-197-197.ec2.internal Successfully installed revision 7 openshift-cloud-credential-operator 53m Warning FailedToUpdateEndpointSlices service/pod-identity-webhook Error updating Endpoint Slices for Service openshift-cloud-credential-operator/pod-identity-webhook: failed to create EndpointSlice for Service openshift-cloud-credential-operator/pod-identity-webhook: Post "https://api-int.qeaisrhods-c13.abmw.s1.devshift.org:6443/apis/discovery.k8s.io/v1/namespaces/openshift-cloud-credential-operator/endpointslices": dial tcp 10.0.209.0:6443: connect: connection refused openshift-authentication-operator 53m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" openshift-authentication-operator 53m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection 
refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" openshift-operator-lifecycle-manager 53m Normal InstallSucceeded clusterserviceversion/packageserver install strategy completed with no errors openshift-kube-scheduler 53m Normal Killing pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Stopping container kube-scheduler-recovery-controller openshift-kube-scheduler 53m Normal Killing pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Stopping container kube-scheduler-cert-syncer openshift-kube-scheduler 53m Normal Killing pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Stopping container kube-scheduler openshift-authentication-operator 53m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/oauth.openshift.io/v1\": dial tcp 10.130.0.52:8443: i/o timeout\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" openshift-authentication-operator 53m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/oauth.openshift.io/v1\": dial tcp 10.130.0.52:8443: i/o timeout\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" openshift-monitoring 53m Normal ScalingReplicaSet deployment/prometheus-operator Scaled up replica set prometheus-operator-f4cf7fb47 to 1 openshift-kube-scheduler-operator 53m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: 
pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: ft-kube-scheduler/configmaps?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: E0321 12:20:39.466502 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: E0321 12:20:43.864044 1 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"openshift-kube-scheduler-ip-10-0-197-197.ec2.internal.174e6e7e2182d3cb\", GenerateName:\"\", Namespace:\"openshift-kube-scheduler\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-scheduler\", Name:\"openshift-kube-scheduler-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 19, 59, 62930379, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 19, 59, 62930379, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/events\": x509: certificate signed by unknown authority'(may retry after sleeping)\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-monitoring 53m Normal SuccessfulCreate replicaset/prometheus-operator-f4cf7fb47 Created pod: prometheus-operator-f4cf7fb47-bhql4 openshift-kube-controller-manager-operator 53m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is terminated: Completed: 
\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:19:50.877177 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:19:50.877231 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:20:13.195453 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:20:13.195489 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:20:45.067713 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:20:45.067767 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: " openshift-kube-apiserver 53m Normal Pulled pod/installer-7-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 53m Normal AddedInterface pod/installer-7-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.66/23] from ovn-kubernetes openshift-kube-apiserver 53m Warning ErrorAddingResource pod/installer-7-ip-10-0-197-197.ec2.internal addLogicalPort failed for openshift-kube-apiserver/installer-7-ip-10-0-197-197.ec2.internal: failed to update annotation on pod openshift-kube-apiserver/installer-7-ip-10-0-197-197.ec2.internal: Put "https://api-int.qeaisrhods-c13.abmw.s1.devshift.org:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-ip-10-0-197-197.ec2.internal": dial tcp 10.0.209.0:6443: connect: connection refused openshift-authentication-operator 53m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available 
message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" openshift-authentication-operator 53m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the \"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" openshift-kube-apiserver 53m Normal Created pod/installer-7-ip-10-0-197-197.ec2.internal Created container installer openshift-kube-apiserver 53m Normal Started pod/installer-7-ip-10-0-197-197.ec2.internal Started container installer openshift-authentication-operator 53m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: failed to retrieve route from cache: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" openshift-authentication-operator 53m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: ingress.config/cluster does not yet have status for the 
\"openshift-authentication/oauth-openshift\" route\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" openshift-etcd-operator 53m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 nodes are at revision 0; 1 nodes are at revision 2; 1 nodes are at revision 5\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "StaticPodsAvailable: 2 nodes are active; 1 nodes are at revision 0; 1 nodes are at revision 2; 1 nodes are at revision 5\nEtcdMembersAvailable: 3 members are available" openshift-etcd-operator 53m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy\nGuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" to "GuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" openshift-monitoring 53m Normal Pulling pod/prometheus-operator-f4cf7fb47-bhql4 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0c9dc9888697e244d61cd89f8fe5a61dcb09dc100889be738db21b2fc5bbf7" openshift-authentication-operator 53m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth 
service endpoints are not ready\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" openshift-monitoring 53m Normal AddedInterface pod/prometheus-operator-f4cf7fb47-bhql4 Add eth0 [10.128.0.45/23] from ovn-kubernetes openshift-authentication-operator 53m Normal SecretCreated deployment/authentication-operator Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing openshift-authentication-operator 53m Normal ConfigMapCreated deployment/authentication-operator Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing openshift-kube-apiserver 53m Warning FailedCreatePodSandBox pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-7-ip-10-0-140-6.ec2.internal_openshift-kube-apiserver_bccd6742-3ebe-4e7e-9c54-c9760e606349_0(177594a29b5f19a1639fa17732ebbe25396ffbbadbb2c6e3476da0bc09b96a78): error adding pod openshift-kube-apiserver_revision-pruner-7-ip-10-0-140-6.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-7-ip-10-0-140-6.ec2.internal/bccd6742-3ebe-4e7e-9c54-c9760e606349]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-7-ip-10-0-140-6.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-kube-apiserver-operator 53m Normal ObservedConfigChanged deployment/kube-apiserver-operator Writing updated observed config:   map[string]any{... openshift-kube-controller-manager 53m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container cluster-policy-controller openshift-kube-controller-manager 53m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 53m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager-cert-syncer openshift-kube-controller-manager 53m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container cluster-policy-controller openshift-kube-controller-manager 53m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager-cert-syncer openshift-kube-controller-manager 53m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-monitoring 53m Normal Started pod/prometheus-operator-f4cf7fb47-bhql4 Started container kube-rbac-proxy openshift-kube-controller-manager 53m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager-recovery-controller openshift-monitoring 53m Normal Pulled pod/prometheus-operator-f4cf7fb47-bhql4 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 53m Normal Started pod/prometheus-operator-f4cf7fb47-bhql4 Started container 
prometheus-operator openshift-monitoring 53m Normal Pulled pod/prometheus-operator-f4cf7fb47-bhql4 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0c9dc9888697e244d61cd89f8fe5a61dcb09dc100889be738db21b2fc5bbf7" in 1.405928032s (1.405940331s including waiting) openshift-monitoring 53m Normal Created pod/prometheus-operator-f4cf7fb47-bhql4 Created container kube-rbac-proxy openshift-monitoring 53m Normal Created pod/prometheus-operator-f4cf7fb47-bhql4 Created container prometheus-operator openshift-kube-controller-manager 53m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager-recovery-controller openshift-kube-controller-manager 53m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" already present on machine openshift-authentication-operator 53m Normal ConfigMapCreated deployment/authentication-operator Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing openshift-authentication-operator 53m Normal DeploymentCreated deployment/authentication-operator Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing openshift-authentication 53m Normal SuccessfulCreate replicaset/oauth-openshift-5fdc498fc9 Created pod: oauth-openshift-5fdc498fc9-vjtw8 openshift-kube-controller-manager 53m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-239-132.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-authentication 53m Normal SuccessfulCreate replicaset/oauth-openshift-5fdc498fc9 Created pod: oauth-openshift-5fdc498fc9-2ktk4 openshift-authentication 53m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-5fdc498fc9 to 3 openshift-authentication 53m Normal SuccessfulCreate replicaset/oauth-openshift-5fdc498fc9 Created pod: oauth-openshift-5fdc498fc9-pbpqd openshift-authentication 53m Warning FailedMount pod/oauth-openshift-5fdc498fc9-2ktk4 MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found openshift-authentication 53m Warning FailedMount pod/oauth-openshift-5fdc498fc9-pbpqd MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found openshift-monitoring 53m Normal ScalingReplicaSet deployment/openshift-state-metrics Scaled up replica set openshift-state-metrics-66f87c88bd to 1 openshift-authentication 53m Warning FailedCreatePodSandBox pod/oauth-openshift-5fdc498fc9-2ktk4 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5fdc498fc9-2ktk4_openshift-authentication_5144e58d-2ff7-464a-b2df-2d71df4e9ae6_0(06250d7be76778912ecde6e6bbecd699823a05b57b8f8dc0ea1482b3dd5b83af): error adding pod openshift-authentication_oauth-openshift-5fdc498fc9-2ktk4 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-authentication/oauth-openshift-5fdc498fc9-2ktk4/5144e58d-2ff7-464a-b2df-2d71df4e9ae6]: error waiting for pod: Get 
"https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5fdc498fc9-2ktk4?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-authentication-operator 53m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" openshift-authentication-operator 53m Normal ConfigMapCreated deployment/authentication-operator Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing openshift-authentication-operator 53m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1."),Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS 
server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" openshift-monitoring 53m Normal SuccessfulCreate replicaset/openshift-state-metrics-66f87c88bd Created pod: openshift-state-metrics-66f87c88bd-jg7dn openshift-monitoring 53m Normal SuccessfulCreate daemonset/node-exporter Created pod: node-exporter-58wsk openshift-monitoring 53m Normal SuccessfulCreate daemonset/node-exporter Created pod: node-exporter-ztvgk openshift-monitoring 53m Normal ScalingReplicaSet deployment/kube-state-metrics Scaled up replica set kube-state-metrics-55f6dbfb8b to 1 openshift-monitoring 53m Normal SuccessfulCreate daemonset/node-exporter Created pod: node-exporter-g4hdx openshift-monitoring 53m Normal SuccessfulCreate daemonset/node-exporter Created pod: node-exporter-cghbq openshift-authentication 53m Warning FailedMount pod/oauth-openshift-5fdc498fc9-vjtw8 MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found openshift-monitoring 53m Normal ServiceAccountCreated deployment/cluster-monitoring-operator Created ServiceAccount/openshift-state-metrics -n openshift-monitoring because it was missing openshift-monitoring 53m Normal ServiceAccountCreated deployment/cluster-monitoring-operator Created ServiceAccount/kube-state-metrics -n openshift-monitoring because it was missing openshift-monitoring 53m Normal ServiceAccountCreated deployment/cluster-monitoring-operator Created ServiceAccount/telemeter-client -n openshift-monitoring because it was missing openshift-monitoring 53m Warning FailedMount pod/node-exporter-cghbq MountVolume.SetUp failed for volume "node-exporter-tls" : secret "node-exporter-tls" not found openshift-monitoring 53m Normal ServiceAccountCreated deployment/cluster-monitoring-operator Created ServiceAccount/prometheus-adapter -n openshift-monitoring because it was missing openshift-monitoring 53m Normal Pulling pod/node-exporter-g4hdx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" openshift-monitoring 53m Normal Pulling pod/node-exporter-ztvgk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" openshift-monitoring 53m Normal Pulling pod/node-exporter-jhj5d Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" openshift-monitoring 53m Normal SuccessfulCreate replicaset/kube-state-metrics-55f6dbfb8b Created pod: kube-state-metrics-55f6dbfb8b-phfp9 openshift-kube-apiserver-operator 53m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 8 triggered by "optional configmap/oauth-metadata has been created" openshift-kube-apiserver-operator 53m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing openshift-monitoring 53m Normal ServiceAccountCreated deployment/cluster-monitoring-operator Created ServiceAccount/node-exporter -n openshift-monitoring because it was missing openshift-monitoring 53m Normal SuccessfulCreate daemonset/node-exporter Created pod: node-exporter-jhj5d openshift-monitoring 52m Normal Created pod/node-exporter-g4hdx Created container 
init-textfile openshift-monitoring 52m Normal Pulling pod/node-exporter-cghbq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" openshift-monitoring 52m Warning FailedMount pod/openshift-state-metrics-66f87c88bd-jg7dn MountVolume.SetUp failed for volume "kube-api-access-dqtx5" : failed to sync configmap cache: timed out waiting for the condition openshift-monitoring 52m Warning FailedMount pod/openshift-state-metrics-66f87c88bd-jg7dn MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition openshift-monitoring 52m Warning FailedMount pod/node-exporter-58wsk MountVolume.SetUp failed for volume "kube-api-access-t9dv6" : failed to sync configmap cache: timed out waiting for the condition openshift-monitoring 52m Normal ServiceAccountCreated deployment/cluster-monitoring-operator Created ServiceAccount/thanos-querier -n openshift-monitoring because it was missing openshift-monitoring 52m Normal ServiceAccountCreated deployment/cluster-monitoring-operator Created ServiceAccount/alertmanager-main -n openshift-monitoring because it was missing openshift-monitoring 52m Warning FailedMount pod/kube-state-metrics-55f6dbfb8b-phfp9 MountVolume.SetUp failed for volume "kube-api-access-bhhx7" : failed to sync configmap cache: timed out waiting for the condition openshift-authentication-operator 52m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints 
\"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-monitoring 52m Normal Pulled pod/node-exporter-g4hdx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" in 1.068812386s (1.068826351s including waiting) openshift-monitoring 52m Normal NoPods poddisruptionbudget/alertmanager-main No matching pods found openshift-monitoring 52m Normal Started pod/node-exporter-g4hdx Started container init-textfile openshift-monitoring 52m Normal Pulled pod/node-exporter-jhj5d Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" in 1.131840367s (1.131851821s including waiting) openshift-monitoring 52m Normal Pulled pod/node-exporter-ztvgk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" in 1.134250145s (1.134262718s including waiting) openshift-kube-scheduler 52m Warning ProbeError pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:10259/healthz": dial tcp 10.0.197.197:10259: connect: connection refused... openshift-monitoring 52m Normal Started pod/node-exporter-jhj5d Started container init-textfile openshift-monitoring 52m Normal Pulled pod/node-exporter-jhj5d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine openshift-monitoring 52m Normal Created pod/node-exporter-jhj5d Created container node-exporter openshift-monitoring 52m Normal Started pod/node-exporter-jhj5d Started container node-exporter openshift-monitoring 52m Normal Pulled pod/node-exporter-jhj5d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Created pod/node-exporter-jhj5d Created container kube-rbac-proxy openshift-monitoring 52m Normal Started pod/node-exporter-jhj5d Started container kube-rbac-proxy openshift-monitoring 52m Normal Created pod/openshift-state-metrics-66f87c88bd-jg7dn Created container kube-rbac-proxy-main openshift-monitoring 52m Normal Pulling pod/openshift-state-metrics-66f87c88bd-jg7dn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:907363827442bc34c33be580ea3ac30198ca65f46a95eb80b2c5255e24d173f3" openshift-monitoring 52m Normal Created pod/node-exporter-jhj5d Created container init-textfile openshift-monitoring 52m Normal Created pod/node-exporter-ztvgk Created container init-textfile openshift-monitoring 52m Normal Started pod/node-exporter-ztvgk Started container init-textfile openshift-monitoring 52m Normal Pulled pod/node-exporter-ztvgk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine openshift-kube-scheduler 52m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container wait-for-host-port openshift-monitoring 52m 
Normal Created pod/openshift-state-metrics-66f87c88bd-jg7dn Created container kube-rbac-proxy-self openshift-monitoring 52m Normal ScalingReplicaSet deployment/prometheus-adapter Scaled up replica set prometheus-adapter-5b77f96bd4 to 2 openshift-monitoring 52m Normal Created pod/node-exporter-g4hdx Created container kube-rbac-proxy openshift-monitoring 52m Normal Started pod/openshift-state-metrics-66f87c88bd-jg7dn Started container kube-rbac-proxy-self openshift-monitoring 52m Normal SuccessfulCreate replicaset/prometheus-adapter-5b77f96bd4 Created pod: prometheus-adapter-5b77f96bd4-lkn8s openshift-monitoring 52m Normal SuccessfulCreate replicaset/prometheus-adapter-5b77f96bd4 Created pod: prometheus-adapter-5b77f96bd4-vm8xp openshift-kube-scheduler 52m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-authentication 52m Normal AddedInterface pod/oauth-openshift-5fdc498fc9-vjtw8 Add eth0 [10.129.0.40/23] from ovn-kubernetes openshift-kube-scheduler 52m Warning Unhealthy pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:10259/healthz": dial tcp 10.0.197.197:10259: connect: connection refused openshift-monitoring 52m Normal Pulled pod/node-exporter-g4hdx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Pulled pod/openshift-state-metrics-66f87c88bd-jg7dn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Pulling pod/node-exporter-58wsk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" openshift-etcd 52m Normal StaticPodInstallerCompleted pod/installer-5-ip-10-0-197-197.ec2.internal Successfully installed revision 5 openshift-monitoring 52m Normal AddedInterface pod/openshift-state-metrics-66f87c88bd-jg7dn Add eth0 [10.128.2.9/23] from ovn-kubernetes openshift-etcd 52m Normal Pulling pod/etcd-ip-10-0-197-197.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" openshift-monitoring 52m Normal ServiceAccountCreated deployment/cluster-monitoring-operator Created ServiceAccount/prometheus-k8s -n openshift-monitoring because it was missing openshift-authentication 52m Normal Pulling pod/oauth-openshift-5fdc498fc9-vjtw8 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" openshift-kube-scheduler 52m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-monitoring 52m Normal Started pod/openshift-state-metrics-66f87c88bd-jg7dn Started container kube-rbac-proxy-main openshift-monitoring 52m Normal Started pod/node-exporter-g4hdx Started container node-exporter openshift-monitoring 52m Normal Created pod/node-exporter-g4hdx Created container node-exporter openshift-monitoring 52m Normal Pulled 
pod/node-exporter-g4hdx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine openshift-kube-apiserver-operator 52m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/revision-status-8 -n openshift-kube-apiserver because it was missing openshift-monitoring 52m Normal Pulled pod/node-exporter-cghbq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" in 960.602888ms (960.614798ms including waiting) openshift-monitoring 52m Normal Created pod/node-exporter-cghbq Created container init-textfile openshift-monitoring 52m Normal Started pod/node-exporter-cghbq Started container init-textfile openshift-kube-controller-manager-operator 52m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:19:50.877177 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:19:50.877231 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:20:13.195453 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:20:13.195489 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:20:45.067713 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:20:45.067767 1 
reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" openshift-kube-scheduler 52m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container wait-for-host-port openshift-monitoring 52m Normal Started pod/node-exporter-g4hdx Started container kube-rbac-proxy openshift-monitoring 52m Normal Pulled pod/openshift-state-metrics-66f87c88bd-jg7dn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-scheduler 52m Warning FastControllerResync pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-monitoring 52m Normal AddedInterface pod/prometheus-adapter-5b77f96bd4-lkn8s Add eth0 [10.131.0.13/23] from ovn-kubernetes openshift-monitoring 52m Normal Pulling pod/prometheus-adapter-5b77f96bd4-lkn8s Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbc27b4ea8b6ed06d8490b60e95b36bda21f09f15ec3f25f901c8dffc32292d9" openshift-monitoring 52m Normal Pulled pod/node-exporter-ztvgk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Started pod/node-exporter-cghbq Started container node-exporter openshift-monitoring 52m Normal Started pod/node-exporter-ztvgk Started container kube-rbac-proxy openshift-monitoring 52m Normal Created pod/node-exporter-ztvgk Created container kube-rbac-proxy openshift-kube-scheduler 52m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container kube-scheduler-recovery-controller openshift-monitoring 52m Normal Pulled pod/node-exporter-cghbq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Started pod/openshift-state-metrics-66f87c88bd-jg7dn Started container openshift-state-metrics openshift-monitoring 52m Normal Created pod/openshift-state-metrics-66f87c88bd-jg7dn Created container openshift-state-metrics openshift-monitoring 52m Normal Pulled pod/openshift-state-metrics-66f87c88bd-jg7dn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:907363827442bc34c33be580ea3ac30198ca65f46a95eb80b2c5255e24d173f3" in 1.133115711s (1.133130835s including waiting) openshift-monitoring 52m Normal Created pod/node-exporter-cghbq Created container kube-rbac-proxy openshift-kube-scheduler 52m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container kube-scheduler-recovery-controller openshift-monitoring 52m Normal Created pod/node-exporter-cghbq Created container node-exporter 
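The OperatorStatusChanged events above are the event-stream copies of condition changes on the clusteroperator objects (Available, Progressing, Degraded). As a minimal sketch, the same condition text can be read straight from the clusteroperator resources with standard oc/kubectl commands instead of being reconstructed from events; the jsonpath filter below is just one way to pull out the Degraded message quoted in these events, not the only one:

  # Overview of all cluster operators and their current conditions
  oc get clusteroperators

  # Degraded condition message for kube-controller-manager, i.e. the text
  # quoted in the OperatorStatusChanged events in this stream
  oc get clusteroperator kube-controller-manager \
    -o jsonpath='{.status.conditions[?(@.type=="Degraded")].message}'
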
openshift-kube-scheduler 52m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-monitoring 52m Normal Started pod/node-exporter-cghbq Started container kube-rbac-proxy openshift-monitoring 52m Normal Pulling pod/prometheus-adapter-5b77f96bd4-vm8xp Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbc27b4ea8b6ed06d8490b60e95b36bda21f09f15ec3f25f901c8dffc32292d9" openshift-authentication-operator 52m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/user.openshift.io/v1: Get \"https://10.128.0.35:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-monitoring 52m Normal AddedInterface pod/prometheus-adapter-5b77f96bd4-vm8xp Add eth0 
[10.128.2.11/23] from ovn-kubernetes openshift-kube-scheduler 52m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container kube-scheduler-cert-syncer openshift-kube-scheduler 52m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container kube-scheduler-cert-syncer openshift-monitoring 52m Normal Started pod/node-exporter-ztvgk Started container node-exporter openshift-monitoring 52m Normal Created pod/node-exporter-ztvgk Created container node-exporter openshift-kube-scheduler 52m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container kube-scheduler openshift-monitoring 52m Normal Started pod/node-exporter-58wsk Started container init-textfile openshift-monitoring 52m Normal Created pod/node-exporter-58wsk Created container init-textfile openshift-monitoring 52m Normal Pulled pod/node-exporter-58wsk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" in 933.488759ms (933.512439ms including waiting) openshift-monitoring 52m Normal Pulled pod/node-exporter-cghbq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine openshift-kube-scheduler 52m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 52m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container kube-scheduler openshift-monitoring 52m Normal Pulled pod/node-exporter-58wsk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine openshift-monitoring 52m Normal Created pod/node-exporter-58wsk Created container node-exporter openshift-monitoring 52m Normal Started pod/node-exporter-58wsk Started container node-exporter openshift-monitoring 52m Normal Pulled pod/node-exporter-58wsk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Created pod/node-exporter-58wsk Created container kube-rbac-proxy openshift-kube-scheduler 52m Normal LeaderElection lease/cert-recovery-controller-lock ip-10-0-197-197_35341374-eae9-4528-80b3-987673af5672 became leader openshift-kube-scheduler 52m Normal LeaderElection configmap/cert-recovery-controller-lock ip-10-0-197-197_35341374-eae9-4528-80b3-987673af5672 became leader openshift-authentication 52m Normal Pulled pod/oauth-openshift-5fdc498fc9-vjtw8 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" in 1.618560325s (1.618572214s including waiting) openshift-monitoring 52m Normal Started pod/node-exporter-58wsk Started container kube-rbac-proxy openshift-authentication 52m Normal Started pod/oauth-openshift-5fdc498fc9-vjtw8 Started container oauth-openshift openshift-etcd 52m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" in 2.03586384s (2.035872513s including waiting) 
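Most Warning events in this window (FailedCreatePodSandBox, FailedMount, FailedToUpdateEndpointSlices, ProbeError) repeat the same symptom: dial tcp 10.0.209.0:6443: connect: connection refused against api-int while the kube-apiserver installer pods roll out a new revision, so components that need the API during that period fail and retry. A minimal sketch for narrowing a dump like this down to those warnings, assuming the standard event field selectors for type and reason:

  # Warnings only, across all namespaces, oldest first
  oc get events -A --field-selector type=Warning --sort-by=.lastTimestamp

  # Just the sandbox-creation failures caused by the unreachable API endpoint
  oc get events -A --field-selector type=Warning,reason=FailedCreatePodSandBox
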
openshift-etcd 52m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container setup openshift-etcd 52m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container setup openshift-authentication 52m Normal SuccessfulDelete replicaset/oauth-openshift-5fdc498fc9 Deleted pod: oauth-openshift-5fdc498fc9-vjtw8 openshift-monitoring 52m Normal Pulled pod/prometheus-adapter-5b77f96bd4-lkn8s Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbc27b4ea8b6ed06d8490b60e95b36bda21f09f15ec3f25f901c8dffc32292d9" in 1.521983172s (1.521997216s including waiting) openshift-monitoring 52m Normal Created pod/prometheus-adapter-5b77f96bd4-lkn8s Created container prometheus-adapter openshift-monitoring 52m Normal Pulled pod/prometheus-adapter-5b77f96bd4-vm8xp Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbc27b4ea8b6ed06d8490b60e95b36bda21f09f15ec3f25f901c8dffc32292d9" in 1.475138679s (1.475153823s including waiting) openshift-monitoring 52m Normal Created pod/prometheus-adapter-5b77f96bd4-vm8xp Created container prometheus-adapter openshift-monitoring 52m Normal Started pod/prometheus-adapter-5b77f96bd4-vm8xp Started container prometheus-adapter openshift-authentication 52m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-cf968c599 to 1 from 0 openshift-etcd 52m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-authentication 52m Normal SuccessfulCreate replicaset/oauth-openshift-cf968c599 Created pod: oauth-openshift-cf968c599-9vrxf openshift-authentication 52m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-5fdc498fc9 to 2 from 3 openshift-kube-apiserver-operator 52m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-pod-8 -n openshift-kube-apiserver because it was missing openshift-authentication-operator 52m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.130.0.52:8443/apis/oauth.openshift.io/v1: Get \"https://10.130.0.52:8443/apis/oauth.openshift.io/v1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.35:8443/apis/user.openshift.io/v1: Get \"https://10.128.0.35:8443/apis/user.openshift.io/v1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nOAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints 
\"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-authentication 52m Normal Created pod/oauth-openshift-5fdc498fc9-vjtw8 Created container oauth-openshift openshift-etcd 52m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-monitoring 52m Normal NoPods poddisruptionbudget/prometheus-k8s No matching pods found openshift-authentication-operator 52m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nWellKnownReadyControllerDegraded: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" openshift-monitoring 
52m Normal ScalingReplicaSet deployment/telemeter-client Scaled up replica set telemeter-client-5bd4dfdf7c to 1 openshift-etcd 52m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-ensure-env-vars openshift-monitoring 52m Normal Started pod/prometheus-adapter-5b77f96bd4-lkn8s Started container prometheus-adapter openshift-etcd 52m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-ensure-env-vars openshift-authentication-operator 52m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 3 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-5fdc498fc9-vjtw8 pod, container is waiting in pending oauth-openshift-5fdc498fc9-pbpqd pod, container is waiting in pending oauth-openshift-5fdc498fc9-2ktk4 pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-authentication 52m Normal Killing pod/oauth-openshift-5fdc498fc9-vjtw8 Stopping container oauth-openshift openshift-monitoring 52m Normal SuccessfulCreate replicaset/telemeter-client-5bd4dfdf7c Created pod: telemeter-client-5bd4dfdf7c-2982f openshift-authentication 52m Warning FailedCreatePodSandBox pod/oauth-openshift-5fdc498fc9-pbpqd Failed 
to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5fdc498fc9-pbpqd_openshift-authentication_90d3b039-d13e-474b-a67f-d9b736891d3a_0(3a6a8a0be8f2b9eabae25250d6179b297801145b533138f4ed455a0513190098): error adding pod openshift-authentication_oauth-openshift-5fdc498fc9-pbpqd to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-authentication/oauth-openshift-5fdc498fc9-pbpqd/90d3b039-d13e-474b-a67f-d9b736891d3a]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5fdc498fc9-pbpqd?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-monitoring 52m Normal SuccessfulCreate statefulset/alertmanager-main create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful openshift-monitoring 52m Warning FailedCreatePodSandBox pod/alertmanager-main-1 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-1_openshift-monitoring_bdbc24f4-94c9-49f5-82c1-f8ee6b1361ee_0(440e71eab4c141b3010352d1226ca51b305916bf1d9b6b2c05fc173ad1e9e4b1): error adding pod openshift-monitoring_alertmanager-main-1 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-monitoring/alertmanager-main-1/bdbc24f4-94c9-49f5-82c1-f8ee6b1361ee]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-1?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-authentication-operator 52m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint 
https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-etcd 52m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-resources-copy openshift-monitoring 52m Normal SuccessfulCreate statefulset/alertmanager-main create Pod alertmanager-main-1 in StatefulSet alertmanager-main successful openshift-kube-scheduler-operator 52m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: ft-kube-scheduler/configmaps?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: E0321 12:20:39.466502 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: E0321 12:20:43.864044 1 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"openshift-kube-scheduler-ip-10-0-197-197.ec2.internal.174e6e7e2182d3cb\", GenerateName:\"\", Namespace:\"openshift-kube-scheduler\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-scheduler\", Name:\"openshift-kube-scheduler-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 19, 59, 62930379, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 19, 59, 62930379, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/events\": x509: certificate signed by unknown authority'(may retry after sleeping)\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-authentication 52m Normal AddedInterface pod/oauth-openshift-5fdc498fc9-pbpqd Add eth0 [10.128.0.46/23] from ovn-kubernetes 
openshift-kube-apiserver-operator 52m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/config-8 -n openshift-kube-apiserver because it was missing openshift-authentication-operator 52m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 3 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-5fdc498fc9-vjtw8 pod, container is waiting in pending oauth-openshift-5fdc498fc9-pbpqd pod, container is waiting in pending oauth-openshift-5fdc498fc9-2ktk4 pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 3 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-5fdc498fc9-vjtw8 pod, container is waiting in pending oauth-openshift-5fdc498fc9-pbpqd pod, container is waiting in pending oauth-openshift-5fdc498fc9-2ktk4 pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" openshift-monitoring 52m Warning FailedCreatePodSandBox pod/alertmanager-main-0 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_18b88642-d12f-4598-ba0b-ae246ae5f164_0(5b4b5b1d3d2fbe6826e55817c8f6214bb528e6832e8a101374c4da94e15f8507): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-monitoring/alertmanager-main-0/18b88642-d12f-4598-ba0b-ae246ae5f164]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-authentication 52m Normal Pulling pod/oauth-openshift-5fdc498fc9-pbpqd Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" openshift-etcd-operator 52m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ip-10-0-197-197.ec2.internal" to "GuardControllerDegraded: Missing PodIP in operand etcd-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal" openshift-etcd 52m Normal Created 
pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-resources-copy openshift-etcd 52m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd openshift-monitoring 52m Normal SuccessfulCreate replicaset/thanos-querier-7bbf5b5dcd Created pod: thanos-querier-7bbf5b5dcd-nrjft openshift-monitoring 52m Warning FailedCreatePodSandBox pod/alertmanager-main-1 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-1_openshift-monitoring_bdbc24f4-94c9-49f5-82c1-f8ee6b1361ee_0(f0de5a53a809c8830a715d4b3b76e52d5b98972afe1aa07a2fe8dea530dc34ea): error adding pod openshift-monitoring_alertmanager-main-1 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-monitoring/alertmanager-main-1/bdbc24f4-94c9-49f5-82c1-f8ee6b1361ee]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-1?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-monitoring 52m Normal ScalingReplicaSet deployment/thanos-querier Scaled up replica set thanos-querier-7bbf5b5dcd to 2 openshift-etcd 52m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 52m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcdctl openshift-etcd 52m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-kube-apiserver-operator 52m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-8 -n openshift-kube-apiserver because it was missing openshift-etcd 52m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcdctl openshift-etcd 52m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 52m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 52m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-readyz openshift-etcd 52m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-readyz openshift-monitoring 52m Warning FailedCreatePodSandBox pod/kube-state-metrics-55f6dbfb8b-phfp9 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_kube-state-metrics-55f6dbfb8b-phfp9_openshift-monitoring_c68f24a0-2c70-43da-8f7f-38cab22841cc_0(9520fe9b5135287e3eb5bf015bd60daecf0dd7c62020cc058cc01aac76b25015): error adding pod openshift-monitoring_kube-state-metrics-55f6dbfb8b-phfp9 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-monitoring/kube-state-metrics-55f6dbfb8b-phfp9/c68f24a0-2c70-43da-8f7f-38cab22841cc]: error waiting for pod: Get 
"https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-monitoring/pods/kube-state-metrics-55f6dbfb8b-phfp9?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-etcd 52m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-metrics openshift-authentication 52m Normal Pulled pod/oauth-openshift-5fdc498fc9-pbpqd Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" in 1.593944641s (1.593952121s including waiting) openshift-monitoring 52m Warning FailedCreatePodSandBox pod/alertmanager-main-0 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_18b88642-d12f-4598-ba0b-ae246ae5f164_0(81ce90348e720e77d797d8c0b76bef6b9676da913c207a2fcf5d7940857ba0ad): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-monitoring/alertmanager-main-0/18b88642-d12f-4598-ba0b-ae246ae5f164]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-etcd 52m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-metrics openshift-etcd 52m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd openshift-monitoring 52m Normal SuccessfulCreate replicaset/thanos-querier-7bbf5b5dcd Created pod: thanos-querier-7bbf5b5dcd-7fpvv openshift-monitoring 52m Warning FailedMount pod/thanos-querier-7bbf5b5dcd-nrjft MountVolume.SetUp failed for volume "secret-grpc-tls" : failed to sync secret cache: timed out waiting for the condition openshift-monitoring 52m Warning FailedMount pod/thanos-querier-7bbf5b5dcd-nrjft MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy-metrics" : failed to sync secret cache: timed out waiting for the condition openshift-monitoring 52m Normal Pulling pod/kube-state-metrics-55f6dbfb8b-phfp9 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48772f8b25db5f426c168026f3e89252389ea1c6bf3e508f670bffb24ee6e8e7" openshift-authentication 52m Normal Created pod/oauth-openshift-5fdc498fc9-pbpqd Created container oauth-openshift openshift-monitoring 52m Warning FailedMount pod/thanos-querier-7bbf5b5dcd-nrjft MountVolume.SetUp failed for volume "secret-thanos-querier-kube-rbac-proxy" : failed to sync secret cache: timed out waiting for the condition openshift-monitoring 52m Normal AddedInterface pod/kube-state-metrics-55f6dbfb8b-phfp9 Add eth0 [10.128.2.10/23] from ovn-kubernetes openshift-monitoring 52m Warning FailedMount pod/thanos-querier-7bbf5b5dcd-nrjft MountVolume.SetUp failed for volume "secret-thanos-querier-oauth-cookie" : failed to sync secret cache: timed out waiting for the condition openshift-etcd-operator 52m Warning UnstartedEtcdMember deployment/etcd-operator unstarted members: NAME-PENDING-10.0.197.197 openshift-monitoring 52m Normal AddedInterface pod/thanos-querier-7bbf5b5dcd-7fpvv Add eth0 [10.128.2.14/23] from ovn-kubernetes openshift-etcd-operator 52m Normal MemberAddAsLearner deployment/etcd-operator successfully added new member https://10.0.197.197:2380 openshift-monitoring 52m Normal Pulling pod/thanos-querier-7bbf5b5dcd-7fpvv Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" openshift-authentication 52m Warning ProbeError pod/oauth-openshift-5fdc498fc9-pbpqd Readiness probe error: Get "https://10.128.0.46:6443/healthz": dial tcp 10.128.0.46:6443: connect: connection refused... openshift-etcd-operator 52m Warning UnhealthyEtcdMember deployment/etcd-operator unhealthy members: NAME-PENDING-10.0.197.197 openshift-authentication 52m Normal Started pod/oauth-openshift-5fdc498fc9-pbpqd Started container oauth-openshift openshift-authentication 52m Warning Unhealthy pod/oauth-openshift-5fdc498fc9-pbpqd Readiness probe failed: Get "https://10.128.0.46:6443/healthz": dial tcp 10.128.0.46:6443: connect: connection refused openshift-kube-apiserver 52m Warning FailedCreatePodSandBox pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-7-ip-10-0-140-6.ec2.internal_openshift-kube-apiserver_bccd6742-3ebe-4e7e-9c54-c9760e606349_0(44dd16e9e0aa88ba1bf58f8428c76624e60ce645e7854cbfb85bd98a97cca47a): error adding pod openshift-kube-apiserver_revision-pruner-7-ip-10-0-140-6.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-7-ip-10-0-140-6.ec2.internal/bccd6742-3ebe-4e7e-9c54-c9760e606349]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-7-ip-10-0-140-6.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-monitoring 52m Normal Pulling pod/thanos-querier-7bbf5b5dcd-nrjft Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" openshift-authentication-operator 52m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 3 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-5fdc498fc9-vjtw8 pod, container is waiting in pending oauth-openshift-5fdc498fc9-pbpqd pod, container is waiting in pending oauth-openshift-5fdc498fc9-2ktk4 pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 3 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-5fdc498fc9-vjtw8 pod, container is waiting in pending oauth-openshift-5fdc498fc9-pbpqd pod, container is waiting in pending oauth-openshift-5fdc498fc9-2ktk4 pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 
172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" openshift-authentication-operator 52m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-monitoring 52m Warning FailedCreatePodSandBox pod/thanos-querier-7bbf5b5dcd-nrjft Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_thanos-querier-7bbf5b5dcd-nrjft_openshift-monitoring_5e9b5190-d489-4843-a6e1-e1cc473487d6_0(fd7837bea63b80d2d9fdec1d3db7555645ee6006c9fd0ae9444f2f16ccbdf4a6): error adding pod openshift-monitoring_thanos-querier-7bbf5b5dcd-nrjft to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-monitoring/thanos-querier-7bbf5b5dcd-nrjft/5e9b5190-d489-4843-a6e1-e1cc473487d6]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-monitoring/pods/thanos-querier-7bbf5b5dcd-nrjft?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-kube-apiserver-operator 52m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/oauth-metadata-8 -n openshift-kube-apiserver because it was missing openshift-monitoring 52m Normal AddedInterface pod/thanos-querier-7bbf5b5dcd-nrjft Add eth0 [10.131.0.15/23] from ovn-kubernetes openshift-monitoring 52m Normal SuccessfulCreate statefulset/prometheus-k8s create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful openshift-monitoring 52m Normal Pulled pod/kube-state-metrics-55f6dbfb8b-phfp9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Created pod/kube-state-metrics-55f6dbfb8b-phfp9 Created container kube-rbac-proxy-self 
openshift-monitoring 52m Normal Started pod/kube-state-metrics-55f6dbfb8b-phfp9 Started container kube-rbac-proxy-self openshift-kube-controller-manager-operator 52m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 4 to 5 because static pod is ready openshift-monitoring 52m Normal Started pod/kube-state-metrics-55f6dbfb8b-phfp9 Started container kube-rbac-proxy-main openshift-monitoring 52m Normal Pulled pod/kube-state-metrics-55f6dbfb8b-phfp9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Started pod/kube-state-metrics-55f6dbfb8b-phfp9 Started container kube-state-metrics openshift-monitoring 52m Normal Pulled pod/kube-state-metrics-55f6dbfb8b-phfp9 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48772f8b25db5f426c168026f3e89252389ea1c6bf3e508f670bffb24ee6e8e7" in 1.716515221s (1.716527231s including waiting) openshift-monitoring 52m Normal Pulled pod/thanos-querier-7bbf5b5dcd-7fpvv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" in 2.071923021s (2.071934826s including waiting) openshift-monitoring 52m Normal Pulling pod/thanos-querier-7bbf5b5dcd-7fpvv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" openshift-monitoring 52m Normal Created pod/thanos-querier-7bbf5b5dcd-7fpvv Created container thanos-query openshift-monitoring 52m Normal Started pod/thanos-querier-7bbf5b5dcd-7fpvv Started container kube-rbac-proxy openshift-kube-apiserver-operator 52m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/bound-sa-token-signing-certs-8 -n openshift-kube-apiserver because it was missing openshift-monitoring 52m Normal Created pod/thanos-querier-7bbf5b5dcd-7fpvv Created container kube-rbac-proxy openshift-monitoring 52m Normal Pulled pod/thanos-querier-7bbf5b5dcd-7fpvv Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal SuccessfulCreate statefulset/prometheus-k8s create Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful openshift-kube-controller-manager-operator 52m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 4; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 2 nodes are at revision 4; 1 nodes are at revision 5",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 4; 1 nodes are at revision 5" openshift-etcd-operator 52m Normal MemberPromote deployment/etcd-operator successfully promoted learner member https://10.0.197.197:2380 openshift-monitoring 52m Normal Started pod/thanos-querier-7bbf5b5dcd-7fpvv Started container thanos-query openshift-monitoring 52m Normal Started pod/thanos-querier-7bbf5b5dcd-7fpvv Started container oauth-proxy openshift-monitoring 52m Normal Created pod/thanos-querier-7bbf5b5dcd-7fpvv Created container 
oauth-proxy openshift-monitoring 52m Normal Created pod/kube-state-metrics-55f6dbfb8b-phfp9 Created container kube-state-metrics openshift-monitoring 52m Normal Pulled pod/thanos-querier-7bbf5b5dcd-7fpvv Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 52m Normal Created pod/kube-state-metrics-55f6dbfb8b-phfp9 Created container kube-rbac-proxy-main openshift-monitoring 52m Normal Pulled pod/thanos-querier-7bbf5b5dcd-nrjft Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Pulled pod/thanos-querier-7bbf5b5dcd-7fpvv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" in 903.748699ms (903.763507ms including waiting) openshift-authentication-operator 52m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerDeploymentDegraded: 3 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-5fdc498fc9-vjtw8 pod, container is waiting in pending oauth-openshift-5fdc498fc9-pbpqd pod, container is waiting in pending oauth-openshift-5fdc498fc9-2ktk4 pod)") openshift-etcd-operator 52m Normal ConfigMapUpdated deployment/etcd-operator Updated ConfigMap/etcd-endpoints -n openshift-etcd:... 
openshift-etcd-operator 52m Normal RevisionTriggered deployment/etcd-operator new revision 6 triggered by "configmap/etcd-endpoints has changed" openshift-kube-apiserver-operator 52m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/etcd-serving-ca-8 -n openshift-kube-apiserver because it was missing openshift-authentication-operator 52m Normal ObserveStorageUpdated deployment/authentication-operator Updated storage urls to https://10.0.140.6:2379,https://10.0.197.197:2379,https://10.0.239.132:2379 openshift-authentication-operator 52m Normal ObservedConfigChanged deployment/authentication-operator Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"apiServerArguments\": map[string]any{\n\u00a0\u00a0\t\t\"api-audiences\": []any{string(\"https://kubernetes.default.svc\")},\n\u00a0\u00a0\t\t\"cors-allowed-origins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\t\"etcd-servers\": []any{\n\u00a0\u00a0\t\t\tstring(\"https://10.0.140.6:2379\"),\n+\u00a0\t\t\tstring(\"https://10.0.197.197:2379\"),\n\u00a0\u00a0\t\t\tstring(\"https://10.0.239.132:2379\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"tls-cipher-suites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n\u00a0\u00a0\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n\u00a0\u00a0\t},\n\u00a0\u00a0}\n" openshift-monitoring 52m Normal AddedInterface pod/prometheus-k8s-0 Add eth0 [10.128.2.15/23] from ovn-kubernetes openshift-monitoring 52m Normal Pulling pod/prometheus-k8s-0 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" openshift-monitoring 52m Normal Pulled pod/prometheus-k8s-0 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" in 764.492359ms (764.503797ms including waiting) openshift-monitoring 52m Normal AddedInterface pod/telemeter-client-5bd4dfdf7c-2982f Add eth0 [10.128.2.12/23] from ovn-kubernetes openshift-monitoring 52m Normal Pulling pod/telemeter-client-5bd4dfdf7c-2982f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:942a1ba76f95d02ba681afbb7d1aea28d457fb2a9d967cacc2233bb243588990" openshift-monitoring 52m Normal Created pod/thanos-querier-7bbf5b5dcd-7fpvv Created container prom-label-proxy openshift-monitoring 52m Normal Pulled pod/thanos-querier-7bbf5b5dcd-7fpvv Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Pulling pod/prometheus-k8s-1 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" openshift-monitoring 52m Normal Created pod/thanos-querier-7bbf5b5dcd-7fpvv Created container kube-rbac-proxy-rules openshift-kube-apiserver-operator 52m Normal ObserveStorageUpdated deployment/kube-apiserver-operator Updated storage urls to https://10.0.140.6:2379,https://10.0.197.197:2379,https://10.0.239.132:2379,https://localhost:2379 openshift-monitoring 52m Normal Pulled pod/thanos-querier-7bbf5b5dcd-7fpvv Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Created pod/thanos-querier-7bbf5b5dcd-7fpvv Created container kube-rbac-proxy-metrics openshift-monitoring 52m Normal Started pod/thanos-querier-7bbf5b5dcd-7fpvv Started container kube-rbac-proxy-metrics openshift-authentication 52m Warning FailedCreatePodSandBox pod/oauth-openshift-5fdc498fc9-2ktk4 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-5fdc498fc9-2ktk4_openshift-authentication_5144e58d-2ff7-464a-b2df-2d71df4e9ae6_0(5116df5251374945659451a40282ee5188907493bba83caebadfff863f16d6c7): error adding pod openshift-authentication_oauth-openshift-5fdc498fc9-2ktk4 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-authentication/oauth-openshift-5fdc498fc9-2ktk4/5144e58d-2ff7-464a-b2df-2d71df4e9ae6]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-5fdc498fc9-2ktk4?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-monitoring 52m Warning FailedCreatePodSandBox pod/prometheus-k8s-1 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-1_openshift-monitoring_8cc1ca6a-056a-4d69-a134-31dff056b687_0(dc20544c4220faeb3a8beae08ce0c015d83e60f0c18e33c2fa157d4c76f2081d): error adding pod openshift-monitoring_prometheus-k8s-1 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-monitoring/prometheus-k8s-1/8cc1ca6a-056a-4d69-a134-31dff056b687]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-1?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-monitoring 52m Normal AddedInterface pod/prometheus-k8s-1 Add eth0 [10.131.0.16/23] from ovn-kubernetes openshift-kube-apiserver-operator 52m Normal ObservedConfigChanged deployment/kube-apiserver-operator Writing updated observed config:   map[string]any{... 
openshift-monitoring 52m Normal Pulled pod/thanos-querier-7bbf5b5dcd-nrjft Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" in 1.808421645s (1.808434996s including waiting) openshift-monitoring 52m Normal Created pod/thanos-querier-7bbf5b5dcd-nrjft Created container thanos-query openshift-monitoring 52m Normal Started pod/thanos-querier-7bbf5b5dcd-nrjft Started container thanos-query openshift-monitoring 52m Normal Pulled pod/thanos-querier-7bbf5b5dcd-nrjft Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 52m Normal Started pod/thanos-querier-7bbf5b5dcd-7fpvv Started container prom-label-proxy openshift-monitoring 52m Normal Created pod/thanos-querier-7bbf5b5dcd-nrjft Created container oauth-proxy openshift-monitoring 52m Normal Started pod/thanos-querier-7bbf5b5dcd-nrjft Started container oauth-proxy openshift-monitoring 52m Normal Started pod/thanos-querier-7bbf5b5dcd-7fpvv Started container kube-rbac-proxy-rules openshift-monitoring 52m Normal Created pod/thanos-querier-7bbf5b5dcd-nrjft Created container kube-rbac-proxy openshift-monitoring 52m Normal Started pod/thanos-querier-7bbf5b5dcd-nrjft Started container kube-rbac-proxy openshift-monitoring 52m Normal Pulling pod/thanos-querier-7bbf5b5dcd-nrjft Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" openshift-monitoring 52m Normal Started pod/telemeter-client-5bd4dfdf7c-2982f Started container kube-rbac-proxy openshift-monitoring 52m Normal Created pod/prometheus-k8s-1 Created container init-config-reloader openshift-monitoring 52m Normal Pulled pod/telemeter-client-5bd4dfdf7c-2982f Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Started pod/telemeter-client-5bd4dfdf7c-2982f Started container reload openshift-monitoring 52m Normal Pulling pod/prometheus-k8s-0 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" openshift-monitoring 52m Normal Started pod/prometheus-k8s-0 Started container init-config-reloader openshift-monitoring 52m Normal Created pod/prometheus-k8s-0 Created container init-config-reloader openshift-monitoring 52m Normal Pulled pod/telemeter-client-5bd4dfdf7c-2982f Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 52m Normal Started pod/telemeter-client-5bd4dfdf7c-2982f Started container telemeter-client openshift-monitoring 52m Normal Created pod/telemeter-client-5bd4dfdf7c-2982f Created container telemeter-client openshift-monitoring 52m Normal Pulled pod/telemeter-client-5bd4dfdf7c-2982f Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:942a1ba76f95d02ba681afbb7d1aea28d457fb2a9d967cacc2233bb243588990" in 1.449441383s (1.449452929s including waiting) openshift-monitoring 52m Normal Started pod/prometheus-k8s-1 Started container init-config-reloader openshift-monitoring 52m Normal Created pod/telemeter-client-5bd4dfdf7c-2982f Created container reload openshift-monitoring 52m Normal Pulled pod/prometheus-k8s-1 
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" in 933.045852ms (933.057961ms including waiting) openshift-monitoring 52m Normal Created pod/telemeter-client-5bd4dfdf7c-2982f Created container kube-rbac-proxy openshift-etcd-operator 52m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/revision-status-6 -n openshift-etcd because it was missing openshift-monitoring 52m Normal Created pod/thanos-querier-7bbf5b5dcd-nrjft Created container kube-rbac-proxy-metrics openshift-monitoring 52m Normal Pulled pod/thanos-querier-7bbf5b5dcd-nrjft Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Started pod/thanos-querier-7bbf5b5dcd-nrjft Started container kube-rbac-proxy-rules openshift-monitoring 52m Normal Created pod/thanos-querier-7bbf5b5dcd-nrjft Created container kube-rbac-proxy-rules openshift-monitoring 52m Normal Pulled pod/thanos-querier-7bbf5b5dcd-nrjft Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Started pod/thanos-querier-7bbf5b5dcd-nrjft Started container prom-label-proxy openshift-monitoring 52m Normal Created pod/thanos-querier-7bbf5b5dcd-nrjft Created container prom-label-proxy openshift-monitoring 52m Normal Pulled pod/thanos-querier-7bbf5b5dcd-nrjft Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" in 1.012984107s (1.012999197s including waiting) openshift-kube-apiserver-operator 52m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-server-ca-8 -n openshift-kube-apiserver because it was missing openshift-monitoring 52m Normal Started pod/thanos-querier-7bbf5b5dcd-nrjft Started container kube-rbac-proxy-metrics openshift-monitoring 52m Normal Pulling pod/prometheus-k8s-1 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" openshift-kube-controller-manager-operator 52m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused" openshift-monitoring 52m Normal Pulled pod/prometheus-k8s-0 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" in 2.734976071s (2.734989168s including waiting) openshift-etcd-operator 52m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-pod-6 -n openshift-etcd because it was missing openshift-monitoring 52m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m 
Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 52m Normal Started pod/prometheus-k8s-0 Started container config-reloader openshift-etcd-operator 52m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found") openshift-monitoring 52m Normal Created pod/prometheus-k8s-0 Created container prometheus-proxy openshift-monitoring 52m Normal Started pod/prometheus-k8s-0 Started container prometheus-proxy openshift-monitoring 52m Normal Started pod/prometheus-k8s-0 Started container thanos-sidecar openshift-monitoring 52m Normal Started pod/prometheus-k8s-0 Started container prometheus openshift-oauth-apiserver 52m Normal Killing pod/apiserver-9b9694fdc-kb6ks Stopping container oauth-apiserver openshift-monitoring 52m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 52m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Created pod/prometheus-k8s-0 Created container kube-rbac-proxy-thanos openshift-oauth-apiserver 52m Normal SuccessfulDelete replicaset/apiserver-9b9694fdc Deleted pod: apiserver-9b9694fdc-kb6ks openshift-kube-apiserver-operator 52m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kubelet-serving-ca-8 -n openshift-kube-apiserver because it was missing openshift-monitoring 52m Normal Created pod/prometheus-k8s-0 Created container config-reloader openshift-monitoring 52m Normal Started pod/prometheus-k8s-0 Started container kube-rbac-proxy openshift-monitoring 52m Normal Created pod/prometheus-k8s-0 Created container kube-rbac-proxy openshift-monitoring 52m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" already present on machine openshift-monitoring 52m Normal Started pod/prometheus-k8s-0 Started container kube-rbac-proxy-thanos openshift-etcd-operator 52m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-serving-ca-6 -n openshift-etcd because it was missing openshift-oauth-apiserver 52m Normal SuccessfulCreate replicaset/apiserver-8ddbf84fd Created pod: apiserver-8ddbf84fd-g8ssl openshift-oauth-apiserver 52m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-8ddbf84fd to 1 from 0 openshift-monitoring 52m Normal Created pod/prometheus-k8s-0 Created container thanos-sidecar openshift-monitoring 52m Normal Created pod/prometheus-k8s-0 Created container prometheus openshift-authentication-operator 52m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint 
https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 2, desired generation is 3.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-oauth-apiserver 52m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-9b9694fdc to 2 from 3 openshift-etcd-operator 52m Normal PodCreated deployment/etcd-operator Created Pod/etcd-guard-ip-10-0-197-197.ec2.internal -n openshift-etcd because it was missing openshift-monitoring 52m Normal Started pod/prometheus-k8s-1 Started container prometheus-proxy openshift-monitoring 52m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Created pod/prometheus-k8s-1 Created container thanos-sidecar openshift-etcd 52m Normal Pulled pod/etcd-guard-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 52m Normal AddedInterface pod/etcd-guard-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.68/23] from ovn-kubernetes openshift-etcd 52m Normal Created pod/etcd-guard-ip-10-0-197-197.ec2.internal Created container guard openshift-monitoring 52m Normal Created pod/prometheus-k8s-1 Created container kube-rbac-proxy openshift-monitoring 52m Normal Created pod/prometheus-k8s-1 Created container kube-rbac-proxy-thanos openshift-monitoring 52m Normal Started pod/prometheus-k8s-1 Started container thanos-sidecar openshift-etcd 52m Normal Started pod/etcd-guard-ip-10-0-197-197.ec2.internal Started container guard openshift-monitoring 52m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 52m Normal Created pod/prometheus-k8s-1 Created container config-reloader openshift-monitoring 52m Normal Started pod/prometheus-k8s-1 Started container config-reloader openshift-monitoring 52m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" already present on machine openshift-monitoring 52m Normal Started pod/prometheus-k8s-1 Started container prometheus openshift-monitoring 52m Normal Created pod/prometheus-k8s-1 Created container prometheus openshift-monitoring 52m Normal Pulled pod/prometheus-k8s-1 Successfully pulled image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" in 2.589530882s (2.589552147s including waiting) openshift-monitoring 52m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-kube-controller-manager-operator 52m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-140-6.ec2.internal" from revision 4 to 5 because node ip-10-0-140-6.ec2.internal with revision 4 is the oldest openshift-monitoring 52m Normal Started pod/prometheus-k8s-1 Started container kube-rbac-proxy-thanos openshift-monitoring 52m Normal Started pod/prometheus-k8s-1 Started container kube-rbac-proxy openshift-monitoring 52m Normal Created pod/prometheus-k8s-1 Created container prometheus-proxy openshift-kube-apiserver-operator 52m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/sa-token-signing-certs-8 -n openshift-kube-apiserver because it was missing openshift-etcd-operator 52m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-peer-client-ca-6 -n openshift-etcd because it was missing openshift-kube-controller-manager-operator 52m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-5-ip-10-0-140-6.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager 52m Normal AddedInterface pod/installer-5-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.47/23] from ovn-kubernetes openshift-authentication-operator 52m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 2, desired generation is 3.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-etcd-operator 52m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-serving-ca-6 -n openshift-etcd because it was missing openshift-kube-controller-manager 52m Normal Pulled pod/installer-5-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 52m Normal Created 
pod/installer-5-ip-10-0-140-6.ec2.internal Created container installer openshift-kube-controller-manager 52m Normal Started pod/installer-5-ip-10-0-140-6.ec2.internal Started container installer openshift-kube-apiserver-operator 52m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-audit-policies-8 -n openshift-kube-apiserver because it was missing openshift-etcd-operator 52m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-client-ca-6 -n openshift-etcd because it was missing openshift-authentication 52m Normal Pulling pod/oauth-openshift-5fdc498fc9-2ktk4 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" openshift-etcd-operator 52m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-endpoints-6 -n openshift-etcd because it was missing openshift-kube-apiserver-operator 52m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/etcd-client-8 -n openshift-kube-apiserver because it was missing openshift-authentication-operator 52m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerDeploymentDegraded: 3 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-5fdc498fc9-vjtw8 pod, container is waiting in pending oauth-openshift-5fdc498fc9-pbpqd pod, container is waiting in pending oauth-openshift-5fdc498fc9-2ktk4 pod)" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 1, desired generation is 2.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods 
available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-authentication 52m Normal AddedInterface pod/oauth-openshift-5fdc498fc9-2ktk4 Add eth0 [10.130.0.67/23] from ovn-kubernetes openshift-kube-apiserver 52m Normal Started pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Started container pruner openshift-monitoring 52m Normal Pulling pod/alertmanager-main-1 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" openshift-kube-apiserver 52m Normal Created pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Created container pruner openshift-etcd-operator 52m Normal RevisionTriggered deployment/etcd-operator new revision 6 triggered by "configmap/etcd-pod has changed,configmap/etcd-endpoints has changed" openshift-etcd-operator 52m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-all-certs-6 -n openshift-etcd because it was missing openshift-monitoring 52m Normal AddedInterface pod/alertmanager-main-1 Add eth0 [10.131.0.14/23] from ovn-kubernetes openshift-monitoring 52m Warning FailedCreatePodSandBox pod/alertmanager-main-0 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_18b88642-d12f-4598-ba0b-ae246ae5f164_0(1899bf921713883efba6c27a4a8d9ec58ac5ca8db11c52592cd549f18ab22b73): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-monitoring/alertmanager-main-0/18b88642-d12f-4598-ba0b-ae246ae5f164]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-etcd-operator 52m Normal RevisionCreate deployment/etcd-operator Revision 5 created because configmap/etcd-endpoints has changed openshift-kube-apiserver 52m Normal AddedInterface pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.44/23] from ovn-kubernetes openshift-kube-apiserver 52m Normal Pulled pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-authentication 52m Normal Created pod/oauth-openshift-5fdc498fc9-2ktk4 Created container oauth-openshift openshift-authentication 52m Normal Started pod/oauth-openshift-5fdc498fc9-2ktk4 Started container oauth-openshift openshift-authentication 52m Normal Pulled pod/oauth-openshift-5fdc498fc9-2ktk4 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" in 1.812020223s (1.812034933s including waiting) openshift-etcd-operator 52m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 0; 1 nodes are at revision 2; 1 nodes are at revision 5" to "NodeInstallerProgressing: 1 nodes are at revision 2; 2 nodes are at revision 5; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 nodes are at revision 0; 1 nodes are at revision 2; 1 nodes are at revision 5\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 2; 2 nodes are at revision 5; 0 nodes have achieved new revision 6\nEtcdMembersAvailable: 3 members are available" openshift-etcd-operator 52m Normal NodeCurrentRevisionChanged deployment/etcd-operator Updated node "ip-10-0-197-197.ec2.internal" from revision 0 to 5 because static pod is ready openshift-monitoring 52m Normal Started pod/alertmanager-main-1 Started container kube-rbac-proxy-metric openshift-monitoring 52m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Created pod/alertmanager-main-1 Created container alertmanager-proxy openshift-monitoring 52m Normal Started pod/alertmanager-main-1 Started container alertmanager openshift-monitoring 52m Normal Created pod/alertmanager-main-1 Created container kube-rbac-proxy openshift-monitoring 52m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Created pod/alertmanager-main-1 Created container alertmanager openshift-monitoring 52m Normal Pulled pod/alertmanager-main-1 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" in 1.709671871s (1.709686113s including waiting) openshift-monitoring 52m Normal Created pod/alertmanager-main-1 Created container kube-rbac-proxy-metric openshift-monitoring 52m Normal Started pod/alertmanager-main-1 Started container kube-rbac-proxy openshift-monitoring 52m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 52m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" already present on machine openshift-monitoring 52m Normal Created pod/alertmanager-main-1 Created container 
config-reloader openshift-monitoring 52m Normal Started pod/alertmanager-main-1 Started container prom-label-proxy openshift-monitoring 52m Normal Started pod/alertmanager-main-1 Started container config-reloader openshift-monitoring 52m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 52m Normal Started pod/alertmanager-main-1 Started container alertmanager-proxy openshift-etcd-operator 52m Normal ConfigMapUpdated deployment/etcd-operator Updated ConfigMap/revision-status-6 -n openshift-etcd:... openshift-monitoring 52m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" already present on machine openshift-etcd-operator 52m Normal ConfigMapUpdated deployment/etcd-operator Updated ConfigMap/etcd-pod-6 -n openshift-etcd:... openshift-etcd-operator 52m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "RevisionControllerDegraded: conflicting latestAvailableRevision 6\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 52m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "RevisionControllerDegraded: conflicting latestAvailableRevision 6\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-apiserver-operator 52m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: etcdserver: request timed out" openshift-etcd-operator 52m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found\nBootstrapTeardownDegraded: etcdserver: request timed out" openshift-etcd-operator 52m Normal NodeTargetRevisionChanged deployment/etcd-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 2 to 6 because node ip-10-0-239-132.ec2.internal with revision 2 is the oldest openshift-kube-apiserver-operator 52m Warning SecretCreateFailed deployment/kube-apiserver-operator Failed to create Secret/localhost-recovery-serving-certkey-8 -n openshift-kube-apiserver: etcdserver: request timed out openshift-etcd-operator 52m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found\nBootstrapTeardownDegraded: etcdserver: 
request timed out" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-console-operator 52m Normal ScalingReplicaSet deployment/console-operator Scaled up replica set console-operator-57cbc6b88f to 1 openshift-authentication 52m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-cf968c599 to 2 from 1 openshift-authentication 52m Normal AddedInterface pod/oauth-openshift-cf968c599-9vrxf Add eth0 [10.129.0.41/23] from ovn-kubernetes openshift-kube-apiserver-operator 52m Normal ConfigMapUpdated deployment/kube-apiserver-operator Updated ConfigMap/revision-status-8 -n openshift-kube-apiserver:... openshift-authentication 52m Normal Started pod/oauth-openshift-cf968c599-9vrxf Started container oauth-openshift openshift-authentication 52m Normal Pulled pod/oauth-openshift-cf968c599-9vrxf Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" already present on machine openshift-authentication 52m Normal SuccessfulCreate replicaset/oauth-openshift-cf968c599 Created pod: oauth-openshift-cf968c599-ffkkn openshift-authentication 52m Normal Created pod/oauth-openshift-cf968c599-9vrxf Created container oauth-openshift openshift-authentication 52m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-5fdc498fc9 to 1 from 2 openshift-authentication 52m Normal Killing pod/oauth-openshift-5fdc498fc9-2ktk4 Stopping container oauth-openshift openshift-authentication 52m Normal SuccessfulDelete replicaset/oauth-openshift-5fdc498fc9 Deleted pod: oauth-openshift-5fdc498fc9-2ktk4 default 52m Normal RenderedConfigGenerated machineconfigpool/master rendered-master-0a9073c6468c496094e297e778284549 successfully generated (release version: 4.13.0-rc.0, controller version: 40575b862f7bd42a2c40c8e6b7203cd4c29b0021) openshift-monitoring 52m Normal AddedInterface pod/alertmanager-main-0 Add eth0 [10.128.2.13/23] from ovn-kubernetes openshift-kube-apiserver-operator 52m Normal ConfigMapUpdated deployment/kube-apiserver-operator Updated ConfigMap/config-8 -n openshift-kube-apiserver:... 
openshift-monitoring 52m Normal Pulling pod/alertmanager-main-0 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" default 52m Normal RenderedConfigGenerated machineconfigpool/worker rendered-worker-65a660c5b4cafef14c5770efedbee76c successfully generated (release version: 4.13.0-rc.0, controller version: 40575b862f7bd42a2c40c8e6b7203cd4c29b0021) openshift-monitoring 52m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 52m Normal Created pod/alertmanager-main-0 Created container config-reloader openshift-monitoring 52m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 52m Normal Started pod/alertmanager-main-0 Started container config-reloader openshift-monitoring 52m Normal Pulled pod/alertmanager-main-0 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" in 1.380832823s (1.380841438s including waiting) openshift-monitoring 52m Normal Started pod/alertmanager-main-0 Started container alertmanager-proxy openshift-monitoring 52m Normal Created pod/alertmanager-main-0 Created container alertmanager-proxy openshift-monitoring 52m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" already present on machine openshift-monitoring 52m Normal Started pod/alertmanager-main-0 Started container prom-label-proxy openshift-monitoring 52m Normal Started pod/alertmanager-main-0 Started container kube-rbac-proxy openshift-monitoring 52m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 52m Normal Created pod/alertmanager-main-0 Created container kube-rbac-proxy-metric openshift-monitoring 52m Normal Created pod/alertmanager-main-0 Created container kube-rbac-proxy openshift-monitoring 52m Normal Started pod/alertmanager-main-0 Started container kube-rbac-proxy-metric openshift-monitoring 52m Normal Created pod/alertmanager-main-0 Created container prom-label-proxy openshift-monitoring 52m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" already present on machine openshift-monitoring 52m Normal Started pod/alertmanager-main-0 Started container alertmanager openshift-monitoring 52m Normal Created pod/alertmanager-main-0 Created container alertmanager openshift-kube-apiserver-operator 52m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-serving-certkey-8 -n openshift-kube-apiserver because it was missing openshift-etcd-operator 52m Warning PodCreateFailed deployment/etcd-operator Failed to create 
Pod/installer-6-ip-10-0-239-132.ec2.internal -n openshift-etcd: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods": dial tcp 172.30.0.1:443: connect: connection refused openshift-etcd-operator 52m Warning InstallerPodFailed deployment/etcd-operator Failed to create installer pod for revision 6 count 0 on node "ip-10-0-239-132.ec2.internal": Post "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods": dial tcp 172.30.0.1:443: connect: connection refused openshift-apiserver-operator 52m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "template.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/template.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused openshift-kube-apiserver 52m Warning ProbeError pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:6443/readyz": dial tcp 10.0.197.197:6443: connect: connection refused... openshift-apiserver-operator 52m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "security.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/security.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused openshift-kube-apiserver-operator 52m Warning SecretCreateFailed deployment/kube-apiserver-operator Failed to create Secret/localhost-recovery-client-token-8 -n openshift-kube-apiserver: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets": dial tcp 172.30.0.1:443: connect: connection refused openshift-kube-apiserver-operator 52m Warning RevisionCreateFailed deployment/kube-apiserver-operator Failed to create revision 8: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets": dial tcp 172.30.0.1:443: connect: connection refused openshift-kube-controller-manager 52m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container kube-controller-manager openshift-kube-controller-manager 52m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container kube-controller-manager openshift-kube-controller-manager 52m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 52m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container setup openshift-kube-apiserver 52m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 52m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container setup openshift-oauth-apiserver 52m Warning ProbeError pod/apiserver-9b9694fdc-kb6ks Readiness probe error: Get "https://10.130.0.52:8443/readyz": dial tcp 10.130.0.52:8443: connect: connection refused... 
openshift-oauth-apiserver 52m Warning Unhealthy pod/apiserver-9b9694fdc-kb6ks Readiness probe failed: Get "https://10.130.0.52:8443/readyz": dial tcp 10.130.0.52:8443: connect: connection refused openshift-kube-controller-manager 51m Warning Unhealthy pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Startup probe failed: Get "https://10.0.197.197:10257/healthz": dial tcp 10.0.197.197:10257: connect: connection refused openshift-kube-controller-manager 51m Warning ProbeError pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Startup probe error: Get "https://10.0.197.197:10257/healthz": dial tcp 10.0.197.197:10257: connect: connection refused... openshift-kube-apiserver-operator 51m Warning RevisionCreateFailed deployment/kube-apiserver-operator Failed to create revision 8: Delete "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/secrets/encryption-config-8": dial tcp 172.30.0.1:443: connect: connection refused openshift-ovn-kubernetes 51m Normal Pulled pod/ovnkube-master-l7mb9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 51m Normal Created pod/ovnkube-master-l7mb9 Created container ovnkube-master openshift-ovn-kubernetes 51m Normal Started pod/ovnkube-master-l7mb9 Started container ovnkube-master openshift-etcd-operator 51m Warning EtcdEndpointsErrorUpdatingStatus deployment/etcd-operator Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused openshift-apiserver-operator 51m Warning ObservedConfigWriteError deployment/openshift-apiserver-operator Failed to write observed config: Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/openshiftapiservers/cluster": dial tcp 172.30.0.1:443: connect: connection refused openshift-authentication 51m Warning ProbeError pod/oauth-openshift-5fdc498fc9-2ktk4 Readiness probe error: Get "https://10.130.0.67:6443/healthz": dial tcp 10.130.0.67:6443: connect: connection refused... 
openshift-authentication 51m Warning Unhealthy pod/oauth-openshift-5fdc498fc9-2ktk4 Readiness probe failed: Get "https://10.130.0.67:6443/healthz": dial tcp 10.130.0.67:6443: connect: connection refused default 51m Normal LeaderElection lease ip-10-0-140-6.ec2.internal stopped leading openshift-etcd-operator 51m Warning ScriptControllerErrorUpdatingStatus deployment/etcd-operator Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused openshift-kube-apiserver-operator 51m Warning InstallerPodFailed deployment/kube-apiserver-operator Failed to create installer pod for revision 7 count 0 on node "ip-10-0-197-197.ec2.internal": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-7-ip-10-0-197-197.ec2.internal": dial tcp 172.30.0.1:443: connect: connection refused openshift-kube-apiserver 51m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 51m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-syncer openshift-kube-apiserver 51m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 51m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver openshift-kube-apiserver 51m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-syncer openshift-kube-apiserver 51m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 51m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 51m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver openshift-kube-apiserver 51m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-etcd-operator 51m Warning InstallerPodFailed deployment/etcd-operator Failed to create installer pod for revision 6 count 0 on node "ip-10-0-239-132.ec2.internal": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/installer-6-ip-10-0-239-132.ec2.internal": dial tcp 172.30.0.1:443: connect: connection refused openshift-kube-apiserver 51m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 51m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-insecure-readyz openshift-kube-apiserver 51m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-insecure-readyz openshift-kube-apiserver 51m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-check-endpoints openshift-kube-apiserver 51m Normal Pulled 
pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 51m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-check-endpoints openshift-kube-apiserver 51m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver 51m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver 51m Normal LeaderElection lease/cert-regeneration-controller-lock ip-10-0-197-197_ffbe1fe2-a77b-407f-88a2-fa9872205ed8 became leader openshift-kube-apiserver-operator 51m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/webhook-authenticator-8 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 51m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-client-token-8 -n openshift-kube-apiserver because it was missing openshift-authentication 51m Warning FailedCreatePodSandBox pod/oauth-openshift-cf968c599-ffkkn Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-cf968c599-ffkkn_openshift-authentication_28d798ec-97e9-42fb-9dd9-21575f828d59_0(baeb4ea7722da85c815eaffcd76895a536bef12881541142bec3e8d4e8eb557e): error adding pod openshift-authentication_oauth-openshift-cf968c599-ffkkn to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-authentication/oauth-openshift-cf968c599-ffkkn/28d798ec-97e9-42fb-9dd9-21575f828d59]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-cf968c599-ffkkn?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused default 51m Normal SetDesiredConfig machineconfigpool/worker Targeted node ip-10-0-160-152.ec2.internal to config rendered-worker-65a660c5b4cafef14c5770efedbee76c default 51m Normal SetDesiredConfig machineconfigpool/master Targeted node ip-10-0-239-132.ec2.internal to config rendered-master-0a9073c6468c496094e297e778284549 openshift-oauth-apiserver 51m Warning FailedCreatePodSandBox pod/apiserver-8ddbf84fd-g8ssl Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-8ddbf84fd-g8ssl_openshift-oauth-apiserver_272fe712-a815-4da3-acf3-4a7aa2b3ae91_0(e1aa4cc53356e888fd3bf574c3750f8123d59e4e29c87441523624863eb967a6): error adding pod openshift-oauth-apiserver_apiserver-8ddbf84fd-g8ssl to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-oauth-apiserver/apiserver-8ddbf84fd-g8ssl/272fe712-a815-4da3-acf3-4a7aa2b3ae91]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-oauth-apiserver/pods/apiserver-8ddbf84fd-g8ssl?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-kube-apiserver-operator 51m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from 
"GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: etcdserver: request timed out" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nRevisionControllerDegraded: etcdserver: request timed out" kube-system 51m Normal LeaderElection lease/kube-controller-manager ip-10-0-140-6_8e83405f-5bf9-4ca3-a62c-b6843d46ce3e became leader default 51m Normal AnnotationChange machineconfigpool/master Node ip-10-0-239-132.ec2.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-0a9073c6468c496094e297e778284549 kube-system 51m Normal LeaderElection configmap/kube-controller-manager ip-10-0-140-6_8e83405f-5bf9-4ca3-a62c-b6843d46ce3e became leader openshift-authentication-operator 51m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded changed from False to True ("OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)") openshift-marketplace 51m Normal Started pod/marketplace-operator-554c77d6df-2q9k5 Started container marketplace-operator openshift-marketplace 51m Normal Pulled pod/marketplace-operator-554c77d6df-2q9k5 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e8bda93aae5c360f971e4532706ab6a95eb260e026a6704f837016cab6525fb" already present on machine openshift-marketplace 51m Normal Created pod/marketplace-operator-554c77d6df-2q9k5 Created container marketplace-operator openshift-cluster-storage-operator 51m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \"webhook_service.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \"csi_controller_deployment_pdb.yaml\" (string): Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-controller-pdb\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \"webhook_deployment_pdb.yaml\" (string): Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: " to "All is well" openshift-cluster-storage-operator 51m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotStaticResourceControllerDegraded: \"serviceaccount.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/serviceaccounts/csi-snapshot-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \"webhook_service.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-cluster-storage-operator/services/csi-snapshot-webhook\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \"csi_controller_deployment_pdb.yaml\" (string): Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-controller-pdb\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: \"webhook_deployment_pdb.yaml\" (string): Get \"https://172.30.0.1:443/apis/policy/v1/namespaces/openshift-cluster-storage-operator/poddisruptionbudgets/csi-snapshot-webhook-pdb\": dial tcp 172.30.0.1:443: connect: connection refused\nCSISnapshotStaticResourceControllerDegraded: " openshift-marketplace 51m Warning Unhealthy pod/marketplace-operator-554c77d6df-2q9k5 Readiness probe failed: Get "http://10.130.0.18:8080/healthz": dial tcp 10.130.0.18:8080: connect: connection refused openshift-marketplace 51m Warning ProbeError pod/marketplace-operator-554c77d6df-2q9k5 Readiness probe error: Get "http://10.130.0.18:8080/healthz": dial tcp 10.130.0.18:8080: connect: connection refused... default 50m Normal PendingConfig node/ip-10-0-239-132.ec2.internal Written pending config rendered-master-0a9073c6468c496094e297e778284549 openshift-etcd-operator 50m Normal PodCreated deployment/etcd-operator Created Pod/installer-6-ip-10-0-239-132.ec2.internal -n openshift-etcd because it was missing default 50m Normal SkipReboot node/ip-10-0-239-132.ec2.internal Config changes do not require reboot. 
default 50m Normal OSUpdateStaged node/ip-10-0-239-132.ec2.internal Changes to OS staged openshift-kube-apiserver-operator 50m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]\nRevisionControllerDegraded: etcdserver: request timed out" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: secrets \"localhost-recovery-client-token-8\" already exists" openshift-kube-controller-manager-operator 50m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: =12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\nNodeInstallerDegraded: (string) (len=27) \"kube-controller-manager-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=32) \"cluster-policy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:21:25.110934 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: 
I0321 12:21:25.118932 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.122087 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:55.122828 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:21:55.124057 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: " openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server replied with unexpected status: 403 Forbidden (check kube-apiserver logs if this error persists)" openshift-kube-apiserver-operator 50m Warning SecretCreateFailed deployment/kube-apiserver-operator Failed to create Secret/localhost-recovery-client-token-8 -n openshift-kube-apiserver: secrets "localhost-recovery-client-token-8" already exists openshift-kube-apiserver-operator 50m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 8 triggered by "required configmap/config has changed,optional configmap/oauth-metadata has been created" openshift-kube-controller-manager-operator 50m Warning InstallerPodFailed deployment/kube-controller-manager-operator installer errors: installer: =12) "serving-cert"... 
openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-kube-controller-manager-operator 50m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-5-retry-1-ip-10-0-140-6.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-kube-apiserver-operator 50m Normal RevisionCreate deployment/kube-apiserver-operator Revision 7 created because required configmap/config has changed,optional configmap/oauth-metadata has been created openshift-kube-apiserver-operator 50m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: secrets \"localhost-recovery-client-token-8\" already exists" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" kube-system 50m Normal LeaderElection lease/kube-controller-manager ip-10-0-140-6_6e6aeabf-e2c0-4f63-b8ad-a0f1fddde799 became leader kube-system 50m Normal LeaderElection configmap/kube-controller-manager ip-10-0-140-6_6e6aeabf-e2c0-4f63-b8ad-a0f1fddde799 became leader openshift-kube-scheduler-operator 50m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pods \"openshift-kube-scheduler-ip-10-0-239-132.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-scheduler\"\nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler\" started at 2023-03-21 12:21:08 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready" openshift-kube-scheduler-operator 50m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: 
Degraded message changed from "StaticPodsDegraded: pods \"openshift-kube-scheduler-ip-10-0-239-132.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-scheduler\"\nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler\" started at 2023-03-21 12:21:08 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-kube-scheduler-operator 50m Normal NodeCurrentRevisionChanged deployment/openshift-kube-scheduler-operator Updated node "ip-10-0-197-197.ec2.internal" from revision 0 to 7 because static pod is ready openshift-kube-scheduler-operator 50m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 0; 2 nodes are at revision 6; 0 nodes have achieved new revision 7" to "NodeInstallerProgressing: 2 nodes are at revision 6; 1 nodes are at revision 7",Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 nodes are at revision 0; 2 nodes are at revision 6; 0 nodes have achieved new revision 7" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 6; 1 nodes are at revision 7" default 50m Normal NodeDone node/ip-10-0-239-132.ec2.internal Setting node ip-10-0-239-132.ec2.internal, currentConfig rendered-master-0a9073c6468c496094e297e778284549 to Done default 50m Normal ConfigDriftMonitorStarted node/ip-10-0-239-132.ec2.internal Config Drift Monitor started, watching against rendered-master-0a9073c6468c496094e297e778284549 default 50m Normal Uncordon node/ip-10-0-239-132.ec2.internal Update completed for config rendered-master-0a9073c6468c496094e297e778284549 and node has been uncordoned openshift-kube-controller-manager-operator 50m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: =12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\nNodeInstallerDegraded: (string) (len=27) \"kube-controller-manager-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=32) \"cluster-policy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: 
OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:21:25.110934 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.118932 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.122087 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:55.122828 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:21:55.124057 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: =12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\nNodeInstallerDegraded: (string) (len=27) \"kube-controller-manager-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=32) \"cluster-policy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) 
(len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:21:25.110934 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.118932 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.122087 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:55.122828 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:21:55.124057 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is terminated: Error: ube-controller-manager\" fieldPath=\"\" kind=\"Lease\" apiVersion=\"coordination.k8s.io/v1\" type=\"Normal\" reason=\"LeaderElection\" message=\"ip-10-0-140-6_6e6aeabf-e2c0-4f63-b8ad-a0f1fddde799 became leader\"\nStaticPodsDegraded: W0321 12:23:14.915945 1 plugins.go:131] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future release. 
Please use https://github.com/kubernetes/cloud-provider-aws\nStaticPodsDegraded: I0321 12:23:14.915970 1 aws.go:1226] Get AWS region from metadata client\nStaticPodsDegraded: I0321 12:23:14.916090 1 aws.go:1269] Zone not specified in configuration file; querying AWS metadata service\nStaticPodsDegraded: I0321 12:23:14.917382 1 aws.go:1309] Building AWS cloudprovider\nStaticPodsDegraded: I0321 12:23:15.066588 1 tags.go:80] AWS cloud filtering on ClusterID: qeaisrhods-c13-28wr5\nStaticPodsDegraded: I0321 12:23:15.066606 1 aws.go:814] Setting up informers for Cloud\nStaticPodsDegraded: I0321 12:23:15.067218 1 shared_informer.go:273] Waiting for caches to sync for tokens\nStaticPodsDegraded: I0321 12:23:15.069867 1 controllermanager.go:645] Starting \"csrapproving\"\nStaticPodsDegraded: I0321 12:23:15.072313 1 controllermanager.go:674] Started \"csrapproving\"\nStaticPodsDegraded: I0321 12:23:15.072331 1 controllermanager.go:645] Starting \"podgc\"\nStaticPodsDegraded: I0321 12:23:15.072341 1 certificate_controller.go:112] Starting certificate controller \"csrapproving\"\nStaticPodsDegraded: I0321 12:23:15.072354 1 shared_informer.go:273] Waiting for caches to sync for certificate-csrapproving\nStaticPodsDegraded: I0321 12:23:15.074835 1 controllermanager.go:674] Started \"podgc\"\nStaticPodsDegraded: I0321 12:23:15.074869 1 controllermanager.go:645] Starting \"resourcequota\"\nStaticPodsDegraded: I0321 12:23:15.074873 1 gc_controller.go:102] Starting GC controller\nStaticPodsDegraded: I0321 12:23:15.074885 1 shared_informer.go:273] Waiting for caches to sync for GC\nStaticPodsDegraded: E0321 12:23:15.104627 1 controllermanager.go:648] Error starting \"resourcequota\"\nStaticPodsDegraded: F0321 12:23:15.104670 1 controllermanager.go:259] error starting controllers: failed to discover resources: Get \"https://api-int.qeaisrhods-c13.abmw.s1.devshift.org:6443/api\": dial tcp 10.0.209.0:6443: connect: connection refused\nStaticPodsDegraded: " openshift-kube-apiserver-operator 50m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-8-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-apiserver-operator 50m Warning ObservedConfigWriteError deployment/openshift-apiserver-operator Failed to write observed config: Operation cannot be fulfilled on openshiftapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again openshift-apiserver-operator 50m Normal ObserveStorageUpdated deployment/openshift-apiserver-operator Updated storage urls to https://10.0.140.6:2379,https://10.0.197.197:2379,https://10.0.239.132:2379 openshift-apiserver-operator 50m Normal ObservedConfigChanged deployment/openshift-apiserver-operator Writing updated observed config:   map[string]any{... 
default 50m Normal SetDesiredConfig machineconfigpool/master Targeted node ip-10-0-140-6.ec2.internal to config rendered-master-0a9073c6468c496094e297e778284549 openshift-kube-apiserver-operator 50m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-8-ip-10-0-239-132.ec2.internal -n openshift-kube-apiserver because it was missing openshift-apiserver-operator 50m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.") openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: i/o timeout (Client.Timeout exceeded while awaiting headers)" openshift-kube-controller-manager 50m Warning BackOff pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-ip-10-0-140-6.ec2.internal_openshift-kube-controller-manager(a298987de7b44c3762c83f4f2aef4224) openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server replied with unexpected status: 403 Forbidden (check kube-apiserver logs if this error persists)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server replied with unexpected status: 403 Forbidden (check kube-apiserver logs if this error persists)" openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status 
for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: i/o timeout (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server replied with unexpected status: 403 Forbidden (check kube-apiserver logs if this error persists)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server replied with unexpected status: 403 Forbidden (check kube-apiserver logs if this error persists)" openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: i/o timeout (Client.Timeout exceeded while awaiting headers)" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" openshift-kube-apiserver-operator 50m Normal ObservedConfigChanged deployment/kube-apiserver-operator Writing updated observed config:   map[string]any{... openshift-image-registry 50m Normal DaemonSetCreated deployment/cluster-image-registry-operator Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing openshift-image-registry 50m Normal DaemonSetUpdated deployment/cluster-image-registry-operator Updated DaemonSet.apps/node-ca -n openshift-image-registry because it changed openshift-image-registry 50m Normal DeploymentCreated deployment/cluster-image-registry-operator Created Deployment.apps/image-registry -n openshift-image-registry because it was missing openshift-apiserver-operator 50m Normal ConfigMapUpdated deployment/openshift-apiserver-operator Updated ConfigMap/image-import-ca -n openshift-apiserver:... 
openshift-kube-apiserver-operator 50m Normal ObserveInternalRegistryHostnameChanged deployment/kube-apiserver-operator Internal registry hostname changed to "image-registry.openshift-image-registry.svc:5000" openshift-kube-apiserver 50m Warning FailedCreatePodSandBox pod/revision-pruner-8-ip-10-0-140-6.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-8-ip-10-0-140-6.ec2.internal_openshift-kube-apiserver_9ec466c6-c707-481e-a164-c2ae313cad73_0(434a07241ce6e17c92a8ee64c1e77d901472826c9d7ef08db63f0ad82f51d015): error adding pod openshift-kube-apiserver_revision-pruner-8-ip-10-0-140-6.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-8-ip-10-0-140-6.ec2.internal/9ec466c6-c707-481e-a164-c2ae313cad73]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-apiserver/pods/revision-pruner-8-ip-10-0-140-6.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-kube-scheduler-operator 50m Normal NodeTargetRevisionChanged deployment/openshift-kube-scheduler-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 6 to 7 because node ip-10-0-239-132.ec2.internal with revision 6 is the oldest openshift-kube-apiserver-operator 50m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-8-ip-10-0-140-6.ec2.internal -n openshift-kube-apiserver because it was missing default 50m Normal OSUpdateStaged node/ip-10-0-140-6.ec2.internal Changes to OS staged openshift-kube-controller-manager-operator 50m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: =12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\nNodeInstallerDegraded: (string) (len=27) \"kube-controller-manager-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=32) \"cluster-policy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 
cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:21:25.110934 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.118932 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.122087 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:55.122828 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:21:55.124057 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is terminated: Error: ube-controller-manager\" fieldPath=\"\" kind=\"Lease\" apiVersion=\"coordination.k8s.io/v1\" type=\"Normal\" reason=\"LeaderElection\" message=\"ip-10-0-140-6_6e6aeabf-e2c0-4f63-b8ad-a0f1fddde799 became leader\"\nStaticPodsDegraded: W0321 12:23:14.915945 1 plugins.go:131] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future release. 
Please use https://github.com/kubernetes/cloud-provider-aws\nStaticPodsDegraded: I0321 12:23:14.915970 1 aws.go:1226] Get AWS region from metadata client\nStaticPodsDegraded: I0321 12:23:14.916090 1 aws.go:1269] Zone not specified in configuration file; querying AWS metadata service\nStaticPodsDegraded: I0321 12:23:14.917382 1 aws.go:1309] Building AWS cloudprovider\nStaticPodsDegraded: I0321 12:23:15.066588 1 tags.go:80] AWS cloud filtering on ClusterID: qeaisrhods-c13-28wr5\nStaticPodsDegraded: I0321 12:23:15.066606 1 aws.go:814] Setting up informers for Cloud\nStaticPodsDegraded: I0321 12:23:15.067218 1 shared_informer.go:273] Waiting for caches to sync for tokens\nStaticPodsDegraded: I0321 12:23:15.069867 1 controllermanager.go:645] Starting \"csrapproving\"\nStaticPodsDegraded: I0321 12:23:15.072313 1 controllermanager.go:674] Started \"csrapproving\"\nStaticPodsDegraded: I0321 12:23:15.072331 1 controllermanager.go:645] Starting \"podgc\"\nStaticPodsDegraded: I0321 12:23:15.072341 1 certificate_controller.go:112] Starting certificate controller \"csrapproving\"\nStaticPodsDegraded: I0321 12:23:15.072354 1 shared_informer.go:273] Waiting for caches to sync for certificate-csrapproving\nStaticPodsDegraded: I0321 12:23:15.074835 1 controllermanager.go:674] Started \"podgc\"\nStaticPodsDegraded: I0321 12:23:15.074869 1 controllermanager.go:645] Starting \"resourcequota\"\nStaticPodsDegraded: I0321 12:23:15.074873 1 gc_controller.go:102] Starting GC controller\nStaticPodsDegraded: I0321 12:23:15.074885 1 shared_informer.go:273] Waiting for caches to sync for GC\nStaticPodsDegraded: E0321 12:23:15.104627 1 controllermanager.go:648] Error starting \"resourcequota\"\nStaticPodsDegraded: F0321 12:23:15.104670 1 controllermanager.go:259] error starting controllers: failed to discover resources: Get \"https://api-int.qeaisrhods-c13.abmw.s1.devshift.org:6443/api\": dial tcp 10.0.209.0:6443: connect: connection refused\nStaticPodsDegraded: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: =12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\nNodeInstallerDegraded: (string) (len=27) \"kube-controller-manager-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=32) \"cluster-policy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) 
\"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:21:25.110934 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.118932 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.122087 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:55.122828 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:21:55.124057 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-10-0-140-6.ec2.internal_openshift-kube-controller-manager(a298987de7b44c3762c83f4f2aef4224)" openshift-apiserver-operator 50m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 4." openshift-apiserver-operator 50m Normal ObservedConfigChanged deployment/openshift-apiserver-operator Writing updated observed config:   map[string]any{... 
openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" openshift-kube-scheduler 50m Warning FailedCreatePodSandBox pod/installer-7-ip-10-0-239-132.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-7-ip-10-0-239-132.ec2.internal_openshift-kube-scheduler_f6af3ac5-d6c1-4ed9-9ea7-3a806646d834_0(319026bc06b61f746451277bc10a6b738d98bd3439d8cd2040da4785460eb1b4): error adding pod openshift-kube-scheduler_installer-7-ip-10-0-239-132.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-scheduler/installer-7-ip-10-0-239-132.ec2.internal/f6af3ac5-d6c1-4ed9-9ea7-3a806646d834]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-7-ip-10-0-239-132.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" openshift-kube-scheduler-operator 50m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-7-ip-10-0-239-132.ec2.internal -n openshift-kube-scheduler because it was missing openshift-ovn-kubernetes 50m Normal LeaderElection lease/ovn-kubernetes-master ip-10-0-239-132.ec2.internal became leader default 50m Warning ErrorReconcilingNode node/ip-10-0-140-6.ec2.internal error creating gateway for node ip-10-0-140-6.ec2.internal: failed to init shared interface gateway: failed to sync stale SNATs on node ip-10-0-140-6.ec2.internal: unable to fetch podIPs for pod openshift-kube-controller-manager/installer-5-retry-1-ip-10-0-140-6.ec2.internal default 50m Warning ErrorReconcilingNode 
node/ip-10-0-239-132.ec2.internal error creating gateway for node ip-10-0-239-132.ec2.internal: failed to init shared interface gateway: failed to sync stale SNATs on node ip-10-0-239-132.ec2.internal: unable to fetch podIPs for pod openshift-etcd/installer-6-ip-10-0-239-132.ec2.internal openshift-apiserver-operator 50m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 4." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 4.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." openshift-apiserver-operator 50m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 4.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 5.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." default 50m Warning ErrorReconcilingNode node/ip-10-0-197-197.ec2.internal error creating gateway for node ip-10-0-197-197.ec2.internal: failed to init shared interface gateway: failed to sync stale SNATs on node ip-10-0-197-197.ec2.internal: unable to fetch podIPs for pod openshift-kube-apiserver/revision-pruner-8-ip-10-0-197-197.ec2.internal openshift-kube-controller-manager 50m Normal Created pod/installer-5-retry-1-ip-10-0-140-6.ec2.internal Created container installer openshift-kube-controller-manager 50m Normal Pulled pod/installer-5-retry-1-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 50m Normal Started pod/installer-5-retry-1-ip-10-0-140-6.ec2.internal Started container installer kube-system 50m Normal LeaderElection configmap/kube-controller-manager ip-10-0-197-197_d1a01515-034a-4194-8f8a-059b73673be0 became leader openshift-kube-controller-manager 50m Normal AddedInterface pod/installer-5-retry-1-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.9/23] from ovn-kubernetes kube-system 50m Normal LeaderElection lease/kube-controller-manager ip-10-0-197-197_d1a01515-034a-4194-8f8a-059b73673be0 became leader openshift-etcd 50m Normal AddedInterface pod/installer-6-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.8/23] from ovn-kubernetes openshift-oauth-apiserver 50m Normal AddedInterface pod/apiserver-8ddbf84fd-g8ssl Add eth0 [10.130.0.20/23] from ovn-kubernetes openshift-authentication 50m Normal Started pod/oauth-openshift-cf968c599-ffkkn Started container oauth-openshift openshift-oauth-apiserver 50m Normal Pulled pod/apiserver-8ddbf84fd-g8ssl Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-oauth-apiserver 50m Normal Created pod/apiserver-8ddbf84fd-g8ssl Created 
container fix-audit-permissions openshift-oauth-apiserver 50m Normal Started pod/apiserver-8ddbf84fd-g8ssl Started container fix-audit-permissions openshift-oauth-apiserver 50m Normal Created pod/apiserver-8ddbf84fd-g8ssl Created container oauth-apiserver openshift-oauth-apiserver 50m Normal Started pod/apiserver-8ddbf84fd-g8ssl Started container oauth-apiserver openshift-kube-apiserver 50m Normal AddedInterface pod/revision-pruner-8-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.8/23] from ovn-kubernetes openshift-authentication 50m Normal Created pod/oauth-openshift-cf968c599-ffkkn Created container oauth-openshift openshift-etcd 50m Normal Pulled pod/installer-6-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-kube-apiserver 50m Normal Pulled pod/revision-pruner-8-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 50m Normal Created pod/revision-pruner-8-ip-10-0-140-6.ec2.internal Created container pruner openshift-kube-apiserver 50m Normal Started pod/revision-pruner-8-ip-10-0-140-6.ec2.internal Started container pruner openshift-kube-scheduler 50m Normal Started pod/installer-7-ip-10-0-239-132.ec2.internal Started container installer openshift-kube-apiserver 50m Normal AddedInterface pod/revision-pruner-8-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.35/23] from ovn-kubernetes openshift-etcd 50m Normal Started pod/installer-6-ip-10-0-239-132.ec2.internal Started container installer openshift-authentication 50m Normal AddedInterface pod/oauth-openshift-cf968c599-ffkkn Add eth0 [10.130.0.36/23] from ovn-kubernetes openshift-authentication 50m Normal Pulled pod/oauth-openshift-cf968c599-ffkkn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" already present on machine openshift-kube-apiserver 50m Normal Pulled pod/revision-pruner-8-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 50m Normal Created pod/revision-pruner-8-ip-10-0-197-197.ec2.internal Created container pruner openshift-kube-apiserver 50m Normal Started pod/revision-pruner-8-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-apiserver 50m Normal AddedInterface pod/revision-pruner-8-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.9/23] from ovn-kubernetes openshift-etcd 50m Normal Created pod/installer-6-ip-10-0-239-132.ec2.internal Created container installer openshift-kube-scheduler 50m Normal Created pod/installer-7-ip-10-0-239-132.ec2.internal Created container installer openshift-kube-apiserver 50m Normal Pulled pod/revision-pruner-8-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-scheduler 50m Normal AddedInterface pod/installer-7-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.10/23] from ovn-kubernetes openshift-kube-apiserver 50m Normal Created pod/revision-pruner-8-ip-10-0-239-132.ec2.internal Created container pruner openshift-kube-apiserver 50m Normal Started 
pod/revision-pruner-8-ip-10-0-239-132.ec2.internal Started container pruner openshift-oauth-apiserver 50m Normal Pulled pod/apiserver-8ddbf84fd-g8ssl Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-kube-scheduler 50m Normal Pulled pod/installer-7-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine default 50m Normal RegisteredNode node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal event: Registered Node ip-10-0-239-132.ec2.internal in Controller default 50m Normal RegisteredNode node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal event: Registered Node ip-10-0-140-6.ec2.internal in Controller default 50m Normal RegisteredNode node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal event: Registered Node ip-10-0-197-197.ec2.internal in Controller openshift-kube-apiserver-operator 50m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 7" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 8",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 7" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 8" openshift-ingress 50m Normal EnsuringLoadBalancer service/router-default Ensuring load balancer default 50m Normal RegisteredNode node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal event: Registered Node ip-10-0-232-8.ec2.internal in Controller default 50m Normal RegisteredNode node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal event: Registered Node ip-10-0-160-152.ec2.internal in Controller openshift-image-registry 50m Normal AddedInterface pod/image-registry-5588bdd7b4-m28sx Add eth0 [10.128.2.3/23] from ovn-kubernetes openshift-apiserver 50m Normal SuccessfulCreate replicaset/apiserver-7475f65d84 Created pod: apiserver-7475f65d84-4ncn2 openshift-authentication 50m Normal SuccessfulCreate replicaset/oauth-openshift-cf968c599 Created pod: oauth-openshift-cf968c599-kskc6 openshift-apiserver 50m Normal SuccessfulDelete replicaset/apiserver-6977bc9f6b Deleted pod: apiserver-6977bc9f6b-b9qrr openshift-image-registry 50m Normal SuccessfulCreate replicaset/image-registry-5588bdd7b4 Created pod: image-registry-5588bdd7b4-m28sx openshift-image-registry 50m Normal SuccessfulCreate replicaset/image-registry-5588bdd7b4 Created pod: image-registry-5588bdd7b4-4mffb openshift-image-registry 50m Normal Pulling pod/node-ca-sfbnk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-apiserver 50m Normal Killing pod/apiserver-6977bc9f6b-b9qrr Stopping container openshift-apiserver-check-endpoints openshift-image-registry 50m Normal Pulling pod/node-ca-tvq4f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-image-registry 50m Normal Pulling pod/node-ca-92xvd Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-image-registry 50m Normal Pulling pod/node-ca-bcbwn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-apiserver 50m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-6977bc9f6b to 2 from 3 openshift-image-registry 50m Normal Pulling pod/image-registry-5588bdd7b4-m28sx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-image-registry 50m Normal SuccessfulCreate daemonset/node-ca Created pod: node-ca-92xvd openshift-dns 50m Warning TopologyAwareHintsDisabled service/dns-default Insufficient Node information: allocatable CPU or zone not specified on one or more nodes, addressType: IPv4 openshift-image-registry 50m Normal Pulling pod/node-ca-rz7r5 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-apiserver 50m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-7475f65d84 to 1 from 0 openshift-apiserver 50m Normal Killing pod/apiserver-6977bc9f6b-b9qrr Stopping container openshift-apiserver openshift-image-registry 50m Normal SuccessfulCreate daemonset/node-ca Created pod: node-ca-tvq4f openshift-authentication 50m Normal SuccessfulDelete replicaset/oauth-openshift-5fdc498fc9 Deleted pod: oauth-openshift-5fdc498fc9-pbpqd openshift-authentication 50m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-cf968c599 to 3 from 2 openshift-image-registry 50m Normal SuccessfulCreate daemonset/node-ca Created pod: node-ca-bcbwn openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: 
kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-image-registry 50m Normal Pulling pod/image-registry-5588bdd7b4-4mffb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-image-registry 50m Normal SuccessfulCreate daemonset/node-ca Created pod: node-ca-sfbnk openshift-image-registry 50m Normal AddedInterface pod/image-registry-5588bdd7b4-4mffb Add eth0 [10.131.0.3/23] from ovn-kubernetes openshift-image-registry 50m Normal SuccessfulCreate daemonset/node-ca Created pod: node-ca-rz7r5 openshift-image-registry 50m Normal ScalingReplicaSet deployment/image-registry Scaled up replica set image-registry-5588bdd7b4 to 2 openshift-authentication 50m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-5fdc498fc9 to 0 from 1 openshift-authentication 50m Normal Killing pod/oauth-openshift-5fdc498fc9-pbpqd Stopping container oauth-openshift openshift-ingress 50m Normal EnsuredLoadBalancer service/router-default Ensured load balancer openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server replied with unexpected status: 403 Forbidden (check kube-apiserver logs if this error persists)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server replied with unexpected status: 403 Forbidden (check kube-apiserver logs if this error persists)" openshift-apiserver-operator 50m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 5.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 5." 
openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server replied with unexpected status: 403 Forbidden (check kube-apiserver logs if this error persists)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server replied with unexpected status: 403 Forbidden (check kube-apiserver logs if this error persists)" openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for 
oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" openshift-apiserver-operator 50m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" openshift-image-registry 50m Normal Started pod/node-ca-bcbwn Started container node-ca openshift-image-registry 50m Normal Created pod/node-ca-rz7r5 Created container node-ca openshift-image-registry 50m Normal Started pod/node-ca-rz7r5 Started container node-ca openshift-image-registry 50m Normal Created pod/image-registry-5588bdd7b4-m28sx Created container registry openshift-image-registry 50m Normal Pulled pod/node-ca-rz7r5 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 1.985002486s (1.985010497s including waiting) openshift-image-registry 50m Normal Pulled pod/node-ca-92xvd Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 1.704633836s (1.704649681s including waiting) openshift-image-registry 50m Normal Created pod/node-ca-92xvd Created container node-ca openshift-image-registry 50m Normal Created pod/node-ca-bcbwn Created container node-ca openshift-image-registry 50m Normal Pulled pod/node-ca-bcbwn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 1.723908388s (1.723921143s including waiting) openshift-image-registry 50m Normal Pulled pod/image-registry-5588bdd7b4-4mffb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 1.496554704s (1.496567407s including waiting) openshift-image-registry 50m Normal Pulled pod/image-registry-5588bdd7b4-m28sx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 1.732394582s (1.732404076s including waiting) openshift-image-registry 50m Normal Started pod/node-ca-tvq4f Started container node-ca openshift-image-registry 50m Normal Created pod/node-ca-tvq4f Created container node-ca openshift-image-registry 50m Normal Started pod/image-registry-5588bdd7b4-m28sx Started container registry openshift-image-registry 50m Normal Started pod/node-ca-92xvd Started container node-ca openshift-image-registry 50m Normal Pulled pod/node-ca-sfbnk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 1.973541417s (1.97355111s including waiting) openshift-image-registry 50m Normal Created pod/node-ca-sfbnk Created container node-ca openshift-image-registry 50m Normal Created pod/image-registry-5588bdd7b4-4mffb Created container registry openshift-image-registry 50m Normal Pulled pod/node-ca-tvq4f Successfully 
pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 1.914206054s (1.914216699s including waiting) openshift-image-registry 50m Normal Started pod/image-registry-5588bdd7b4-4mffb Started container registry openshift-image-registry 50m Normal Started pod/node-ca-sfbnk Started container node-ca openshift-oauth-apiserver 50m Normal SuccessfulDelete replicaset/apiserver-9b9694fdc Deleted pod: apiserver-9b9694fdc-g7gxw openshift-oauth-apiserver 50m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-8ddbf84fd to 2 from 1 openshift-oauth-apiserver 50m Normal Killing pod/apiserver-9b9694fdc-g7gxw Stopping container oauth-apiserver openshift-oauth-apiserver 50m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-9b9694fdc to 1 from 2 openshift-oauth-apiserver 50m Normal SuccessfulCreate replicaset/apiserver-8ddbf84fd Created pod: apiserver-8ddbf84fd-7qf7p openshift-kube-apiserver-operator 50m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/revision-status-9 -n openshift-kube-apiserver because it was missing openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-kube-apiserver-operator 50m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-pod-9 -n openshift-kube-apiserver because it was missing openshift-kube-controller-manager 50m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 50m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 50m Normal LeaderElection configmap/cluster-policy-controller-lock ip-10-0-239-132_919b3082-b985-4222-a0ec-ebd6dd13d36e became leader openshift-kube-controller-manager 50m Normal LeaderElection lease/cluster-policy-controller-lock ip-10-0-239-132_919b3082-b985-4222-a0ec-ebd6dd13d36e became leader openshift-kube-controller-manager 50m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager default 50m Normal OSUpdateStaged node/ip-10-0-140-6.ec2.internal Changes to OS staged 
openshift-kube-controller-manager 50m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager openshift-kube-controller-manager 50m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver-operator 50m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/config-9 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 50m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-8-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 50m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-9 -n openshift-kube-apiserver because it was missing openshift-kube-controller-manager 50m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-239-132.ec2.internal created SCC ranges for openshift-console-operator namespace openshift-kube-apiserver 50m Normal Created pod/installer-8-ip-10-0-197-197.ec2.internal Created container installer openshift-kube-apiserver 50m Normal AddedInterface pod/installer-8-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.37/23] from ovn-kubernetes openshift-kube-apiserver 50m Normal Pulled pod/installer-8-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-controller-manager 50m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-239-132.ec2.internal created SCC ranges for openshift-console-user-settings namespace openshift-kube-controller-manager 50m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-239-132.ec2.internal created SCC ranges for openshift-console namespace default 50m Normal PendingConfig node/ip-10-0-160-152.ec2.internal Written pending config rendered-worker-65a660c5b4cafef14c5770efedbee76c openshift-console-operator 50m Normal AddedInterface pod/console-operator-57cbc6b88f-snwcj Add eth0 [10.129.0.13/23] from ovn-kubernetes openshift-kube-controller-manager-operator 50m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: =12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\nNodeInstallerDegraded: (string) (len=27) \"kube-controller-manager-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=32) \"cluster-policy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 
cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:21:25.110934 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.118932 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.122087 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:55.122828 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:21:55.124057 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-ip-10-0-140-6.ec2.internal_openshift-kube-controller-manager(a298987de7b44c3762c83f4f2aef4224)" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: =12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\nNodeInstallerDegraded: (string) (len=27) \"kube-controller-manager-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=32) \"cluster-policy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: 
([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:21:25.110934 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.118932 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.122087 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:55.122828 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:21:55.124057 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: " default 50m Normal OSUpdateStaged node/ip-10-0-160-152.ec2.internal Changes to OS staged openshift-kube-apiserver 50m Normal Started pod/installer-8-ip-10-0-197-197.ec2.internal Started container installer openshift-console-operator 50m Normal Pulling pod/console-operator-57cbc6b88f-snwcj Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6dd6ba37d430e9e8e248b4c5911ef0903f8bd8d05451ed65eeb1d9d2b3c42e4" openshift-console-operator 50m Normal SuccessfulCreate replicaset/console-operator-57cbc6b88f Created pod: console-operator-57cbc6b88f-snwcj default 50m Normal SkipReboot node/ip-10-0-160-152.ec2.internal Config changes do not require reboot. 
openshift-kube-controller-manager 50m Warning Unhealthy pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Startup probe failed: Get "https://10.0.140.6:10257/healthz": dial tcp 10.0.140.6:10257: connect: connection refused openshift-kube-apiserver-operator 50m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/oauth-metadata-9 -n openshift-kube-apiserver because it was missing openshift-console-operator 50m Normal LeaderElection lease/console-operator-lock console-operator-57cbc6b88f-snwcj_9631e7b6-7090-41b3-a355-5c1b365320a3 became leader openshift-console-operator 50m Normal Created pod/console-operator-57cbc6b88f-snwcj Created container conversion-webhook-server openshift-console-operator 50m Warning FastControllerResync deployment/console-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-console-operator 50m Normal Started pod/console-operator-57cbc6b88f-snwcj Started container conversion-webhook-server openshift-console-operator 50m Normal Pulled pod/console-operator-57cbc6b88f-snwcj Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6dd6ba37d430e9e8e248b4c5911ef0903f8bd8d05451ed65eeb1d9d2b3c42e4" in 1.654413496s (1.654426575s including waiting) openshift-console-operator 50m Normal Created pod/console-operator-57cbc6b88f-snwcj Created container console-operator openshift-console-operator 50m Normal Started pod/console-operator-57cbc6b88f-snwcj Started container console-operator openshift-console-operator 50m Normal Pulled pod/console-operator-57cbc6b88f-snwcj Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6dd6ba37d430e9e8e248b4c5911ef0903f8bd8d05451ed65eeb1d9d2b3c42e4" already present on machine openshift-console-operator 50m Warning FastControllerResync deployment/console-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-ovn-kubernetes 50m Normal LeaderElection lease/ovn-kubernetes-cluster-manager ip-10-0-197-197.ec2.internal became leader openshift-console-operator 50m Normal LeaderElection configmap/console-operator-lock console-operator-57cbc6b88f-snwcj_9631e7b6-7090-41b3-a355-5c1b365320a3 became leader openshift-console-operator 50m Warning FastControllerResync deployment/console-operator Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling openshift-console 50m Normal AddedInterface pod/downloads-fcdb597fd-qhkwv Add eth0 [10.129.0.15/23] from ovn-kubernetes openshift-console-operator 50m Normal ServiceCreated deployment/console-operator Created Service/downloads -n openshift-console because it was missing openshift-console-operator 50m Normal PodDisruptionBudgetCreated deployment/console-operator Created PodDisruptionBudget.policy/console -n openshift-console because it was missing openshift-apiserver 50m Warning ProbeError pod/apiserver-6977bc9f6b-b9qrr Readiness probe error: HTTP probe failed with statuscode: 500... 
openshift-console 50m Normal NoPods poddisruptionbudget/downloads No matching pods found openshift-console-operator 50m Normal ConfigMapCreated deployment/console-operator Created ConfigMap/default-ingress-cert -n openshift-console because it was missing openshift-console 50m Normal ScalingReplicaSet deployment/downloads Scaled up replica set downloads-fcdb597fd to 2 openshift-console-operator 50m Normal ConfigMapCreated deployment/console-operator Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing openshift-console-operator 50m Normal PodDisruptionBudgetCreated deployment/console-operator Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing openshift-console-operator 50m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.13.0-rc.0"}] openshift-console 50m Normal SuccessfulCreate replicaset/downloads-fcdb597fd Created pod: downloads-fcdb597fd-qhkwv openshift-console-operator 50m Normal DeploymentCreated deployment/console-operator Created Deployment.apps/downloads -n openshift-console because it was missing openshift-apiserver 50m Warning Unhealthy pod/apiserver-6977bc9f6b-b9qrr Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-console-operator 50m Normal ServiceCreated deployment/console-operator Created Service/console -n openshift-console because it was missing openshift-console 50m Normal NoPods poddisruptionbudget/console No matching pods found openshift-kube-apiserver-operator 50m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/bound-sa-token-signing-certs-9 -n openshift-kube-apiserver because it was missing openshift-console-operator 50m Normal OperatorVersionChanged deployment/console-operator clusteroperator/console version "operator" changed from "" to "4.13.0-rc.0" openshift-console 50m Normal Pulling pod/downloads-fcdb597fd-qhkwv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec5351e220112a5b70451310b563175ae713c4d2864765c861b969730515a21b" openshift-controller-manager-operator 50m Normal ObservedConfigChanged deployment/openshift-controller-manager-operator Writing updated observed config:   map[string]any{... 
openshift-console 50m Normal SuccessfulCreate replicaset/downloads-fcdb597fd Created pod: downloads-fcdb597fd-24zcn openshift-console 50m Normal AddedInterface pod/downloads-fcdb597fd-24zcn Add eth0 [10.131.0.11/23] from ovn-kubernetes openshift-console 50m Normal Pulling pod/downloads-fcdb597fd-24zcn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec5351e220112a5b70451310b563175ae713c4d2864765c861b969730515a21b" openshift-console-operator 50m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Degraded changed from Unknown to False ("All is well") openshift-console-operator 50m Normal ConfigMapCreated deployment/console-operator Created ConfigMap/console-config -n openshift-console because it was missing openshift-kube-apiserver-operator 50m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/etcd-serving-ca-9 -n openshift-kube-apiserver because it was missing openshift-console-operator 50m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Upgradeable changed from Unknown to True ("All is well") openshift-kube-apiserver-operator 50m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-server-ca-9 -n openshift-kube-apiserver because it was missing openshift-console-operator 50m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Progressing changed from Unknown to False ("All is well") openshift-kube-apiserver-operator 50m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kubelet-serving-ca-9 -n openshift-kube-apiserver because it was missing default 50m Normal Uncordon node/ip-10-0-160-152.ec2.internal Update completed for config rendered-worker-65a660c5b4cafef14c5770efedbee76c and node has been uncordoned default 50m Normal ConfigDriftMonitorStarted node/ip-10-0-160-152.ec2.internal Config Drift Monitor started, watching against rendered-worker-65a660c5b4cafef14c5770efedbee76c default 50m Normal NodeDone node/ip-10-0-160-152.ec2.internal Setting node ip-10-0-160-152.ec2.internal, currentConfig rendered-worker-65a660c5b4cafef14c5770efedbee76c to Done default 50m Normal PendingConfig node/ip-10-0-140-6.ec2.internal Written pending config rendered-master-0a9073c6468c496094e297e778284549 openshift-kube-apiserver-operator 50m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/sa-token-signing-certs-9 -n openshift-kube-apiserver because it was missing openshift-console-operator 50m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Degraded message changed from "All is well" to "DownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org in route downloads in namespace openshift-console",Upgradeable changed from True to False ("DownloadsDefaultRouteSyncUpgradeable: no ingress for host downloads-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org in route downloads in namespace openshift-console") default 50m Normal SkipReboot node/ip-10-0-140-6.ec2.internal Config changes do not require reboot. 
default 50m Normal OSUpdateStaged node/ip-10-0-140-6.ec2.internal Changes to OS staged openshift-kube-controller-manager 50m Warning ProbeError pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Startup probe error: Get "https://10.0.140.6:10257/healthz": dial tcp 10.0.140.6:10257: connect: connection refused... openshift-console 50m Normal AddedInterface pod/console-64949fc89-v8nrv Add eth0 [10.128.0.11/23] from ovn-kubernetes openshift-console 50m Normal Pulling pod/console-64949fc89-v8nrv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server replied with unexpected status: 403 Forbidden (check kube-apiserver logs if this error persists)" to "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-authentication-operator 50m Normal ObserveConsoleURL deployment/authentication-operator assetPublicURL changed from to https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org openshift-console 50m Normal ScalingReplicaSet deployment/console Scaled up replica set console-64949fc89 to 2 openshift-console-operator 50m Normal ConfigMapCreated deployment/console-operator Created ConfigMap/console-public -n openshift-config-managed because it was missing openshift-console-operator 50m Normal SecretCreated deployment/console-operator Created Secret/console-oauth-config -n openshift-console because it was missing openshift-console-operator 50m Normal DeploymentCreated deployment/console-operator Created Deployment.apps/console -n openshift-console because it was missing openshift-authentication-operator 50m Normal ObservedConfigChanged deployment/authentication-operator Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.qeaisrhods-c13.abmw.s1.devshift.org:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": 
string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.qeaisrhods-c13.abmw.s1.devshift.org\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" openshift-console 50m Normal SuccessfulCreate replicaset/console-64949fc89 Created pod: console-64949fc89-v8nrv openshift-console 50m Normal SuccessfulCreate replicaset/console-64949fc89 Created pod: console-64949fc89-nhxbj openshift-kube-apiserver-operator 50m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-audit-policies-9 -n openshift-kube-apiserver because it was missing openshift-console-operator 50m Normal DeploymentUpdated deployment/console-operator Updated Deployment.apps/downloads -n openshift-console because it changed openshift-console 50m Normal Created pod/console-64949fc89-v8nrv Created container console openshift-console 50m Normal Started pod/console-64949fc89-v8nrv Started container console default 50m Normal SetDesiredConfig machineconfigpool/worker Targeted node ip-10-0-232-8.ec2.internal to config rendered-worker-65a660c5b4cafef14c5770efedbee76c openshift-console 50m Normal Pulled pod/console-64949fc89-v8nrv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" in 2.548154802s (2.548163187s including waiting) openshift-kube-apiserver-operator 50m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/etcd-client-9 -n openshift-kube-apiserver because it was missing openshift-monitoring 50m Normal Killing pod/alertmanager-main-1 Stopping container config-reloader openshift-console 50m Normal Started pod/downloads-fcdb597fd-qhkwv Started container download-server openshift-console 50m Normal Pulled pod/downloads-fcdb597fd-qhkwv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec5351e220112a5b70451310b563175ae713c4d2864765c861b969730515a21b" in 12.150710121s (12.150721968s including waiting) openshift-console 50m Normal Created pod/downloads-fcdb597fd-qhkwv Created container download-server openshift-authentication 50m Normal Started pod/oauth-openshift-cf968c599-kskc6 Started container oauth-openshift openshift-console 50m Normal Pulled 
pod/downloads-fcdb597fd-24zcn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec5351e220112a5b70451310b563175ae713c4d2864765c861b969730515a21b" in 12.34792855s (12.347942191s including waiting) openshift-monitoring 50m Normal SuccessfulDelete statefulset/alertmanager-main delete Pod alertmanager-main-1 in StatefulSet alertmanager-main successful openshift-authentication 50m Normal AddedInterface pod/oauth-openshift-cf968c599-kskc6 Add eth0 [10.128.0.16/23] from ovn-kubernetes openshift-console 50m Normal AddedInterface pod/console-64949fc89-nhxbj Add eth0 [10.129.0.16/23] from ovn-kubernetes openshift-authentication 50m Normal Pulled pod/oauth-openshift-cf968c599-kskc6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" already present on machine openshift-console 50m Normal Started pod/downloads-fcdb597fd-24zcn Started container download-server openshift-console 50m Normal Created pod/downloads-fcdb597fd-24zcn Created container download-server openshift-console 50m Normal Pulling pod/console-64949fc89-nhxbj Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" openshift-authentication 50m Normal Created pod/oauth-openshift-cf968c599-kskc6 Created container oauth-openshift openshift-console 50m Warning ProbeError pod/downloads-fcdb597fd-qhkwv Readiness probe error: Get "http://10.129.0.15:8080/": dial tcp 10.129.0.15:8080: connect: connection refused... openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-cf968c599-kskc6 pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint 
https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-authentication 50m Normal SuccessfulCreate replicaset/oauth-openshift-86966797f8 Created pod: oauth-openshift-86966797f8-g5rm7 openshift-authentication 50m Normal SuccessfulDelete replicaset/oauth-openshift-cf968c599 Deleted pod: oauth-openshift-cf968c599-kskc6 openshift-authentication 50m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-cf968c599 to 2 from 3 openshift-kube-apiserver-operator 50m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-serving-certkey-9 -n openshift-kube-apiserver because it was missing openshift-console 50m Warning Unhealthy pod/downloads-fcdb597fd-qhkwv Readiness probe failed: Get "http://10.129.0.15:8080/": dial tcp 10.129.0.15:8080: connect: connection refused openshift-authentication 50m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-86966797f8 to 1 from 0 openshift-kube-controller-manager 50m Warning Unhealthy pod/kube-controller-manager-guard-ip-10-0-140-6.ec2.internal Readiness probe failed: Get "https://10.0.140.6:10257/healthz": dial tcp 10.0.140.6:10257: connect: connection refused openshift-console 50m Warning ProbeError pod/downloads-fcdb597fd-24zcn Readiness probe error: Get "http://10.131.0.11:8080/": dial tcp 10.131.0.11:8080: connect: connection refused... openshift-authentication 50m Normal Killing pod/oauth-openshift-cf968c599-kskc6 Stopping container oauth-openshift openshift-monitoring 50m Normal SuccessfulCreate statefulset/alertmanager-main create Pod alertmanager-main-1 in StatefulSet alertmanager-main successful openshift-monitoring 50m Normal AddedInterface pod/alertmanager-main-1 Add eth0 [10.131.0.14/23] from ovn-kubernetes openshift-monitoring 50m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 50m Normal Started pod/alertmanager-main-1 Started container config-reloader openshift-monitoring 50m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-console 50m Warning Unhealthy pod/downloads-fcdb597fd-24zcn Readiness probe failed: Get "http://10.131.0.11:8080/": dial tcp 10.131.0.11:8080: connect: connection refused openshift-monitoring 50m Normal Created pod/alertmanager-main-1 Created container config-reloader openshift-kube-scheduler 50m Normal StaticPodInstallerCompleted pod/installer-7-ip-10-0-239-132.ec2.internal Successfully installed revision 7 default 50m Normal OSUpdateStaged node/ip-10-0-232-8.ec2.internal Changes to OS staged openshift-monitoring 50m Normal Started pod/alertmanager-main-1 Started container alertmanager-proxy openshift-monitoring 50m Normal Created pod/alertmanager-main-1 Created container alertmanager-proxy openshift-monitoring 50m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 50m Normal Created pod/alertmanager-main-1 Created 
container kube-rbac-proxy openshift-console 50m Normal Created pod/console-64949fc89-nhxbj Created container console openshift-monitoring 50m Normal Started pod/alertmanager-main-1 Started container kube-rbac-proxy openshift-monitoring 50m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 50m Normal Created pod/alertmanager-main-1 Created container kube-rbac-proxy-metric openshift-kube-scheduler 50m Normal Killing pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Stopping container kube-scheduler openshift-monitoring 50m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" already present on machine openshift-console 50m Normal Pulled pod/console-64949fc89-nhxbj Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" in 2.786532154s (2.786546822s including waiting) openshift-monitoring 50m Normal Created pod/alertmanager-main-1 Created container prom-label-proxy openshift-console 50m Normal Started pod/console-64949fc89-nhxbj Started container console default 50m Normal PendingConfig node/ip-10-0-232-8.ec2.internal Written pending config rendered-worker-65a660c5b4cafef14c5770efedbee76c openshift-monitoring 50m Normal Started pod/alertmanager-main-1 Started container prom-label-proxy openshift-kube-scheduler 50m Normal Killing pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Stopping container kube-scheduler-recovery-controller openshift-kube-scheduler 50m Normal Killing pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Stopping container kube-scheduler-cert-syncer default 50m Normal SkipReboot node/ip-10-0-232-8.ec2.internal Config changes do not require reboot. 
openshift-monitoring 50m Normal Started pod/alertmanager-main-1 Started container kube-rbac-proxy-metric openshift-kube-apiserver-operator 50m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-client-token-9 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 50m Normal RevisionCreate deployment/kube-apiserver-operator Revision 8 created because required configmap/config has changed openshift-etcd 50m Normal StaticPodInstallerCompleted pod/installer-6-ip-10-0-239-132.ec2.internal Successfully installed revision 6 openshift-authentication-operator 50m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-cf968c599-kskc6 pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-monitoring 50m Normal SuccessfulDelete statefulset/prometheus-k8s delete Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful openshift-monitoring 50m Normal Started pod/alertmanager-main-1 Started container alertmanager openshift-kube-apiserver-operator 50m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/webhook-authenticator-9 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 50m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 9 triggered by "required configmap/config has changed" openshift-monitoring 50m Normal Pulled pod/alertmanager-main-1 Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" already present on machine openshift-monitoring 50m Normal Created pod/alertmanager-main-1 Created container alertmanager openshift-monitoring 50m Normal Killing pod/prometheus-k8s-1 Stopping container prometheus openshift-etcd 50m Normal Killing pod/etcd-ip-10-0-239-132.ec2.internal Stopping container etcdctl default 50m Normal ConfigDriftMonitorStarted node/ip-10-0-140-6.ec2.internal Config Drift Monitor started, watching against rendered-master-0a9073c6468c496094e297e778284549 default 50m Normal NodeDone node/ip-10-0-140-6.ec2.internal Setting node ip-10-0-140-6.ec2.internal, currentConfig rendered-master-0a9073c6468c496094e297e778284549 to Done default 50m Normal AnnotationChange machineconfigpool/master Node ip-10-0-140-6.ec2.internal now has machineconfiguration.openshift.io/state=Done default 50m Normal Uncordon node/ip-10-0-140-6.ec2.internal Update completed for config rendered-master-0a9073c6468c496094e297e778284549 and node has been uncordoned openshift-kube-controller-manager 50m Normal StaticPodInstallerCompleted pod/installer-5-retry-1-ip-10-0-140-6.ec2.internal Successfully installed revision 5 openshift-kube-apiserver-operator 50m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: conflicting latestAvailableRevision 9" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-monitoring 50m Normal SuccessfulCreate statefulset/prometheus-k8s create Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful openshift-kube-apiserver-operator 50m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]\nRevisionControllerDegraded: conflicting latestAvailableRevision 9" openshift-apiserver 49m Warning Unhealthy pod/apiserver-6977bc9f6b-b9qrr Readiness probe failed: Get "https://10.129.0.34:8443/readyz": dial tcp 10.129.0.34:8443: connect: connection refused openshift-route-controller-manager 49m Normal ScalingReplicaSet deployment/route-controller-manager Scaled down replica set route-controller-manager-7ff89c67c to 2 from 3 openshift-route-controller-manager 49m Normal SuccessfulCreate replicaset/route-controller-manager-9b45479c5 Created pod: route-controller-manager-9b45479c5-kkjqb openshift-kube-scheduler-operator 49m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:23:10.564749 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:23:10.564804 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:23:40.544138 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:23:40.544171 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:23:42.227292 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:23:42.227357 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-controller-manager 49m Normal Killing pod/controller-manager-6fcd58c8dc-6vtpl Stopping container controller-manager openshift-controller-manager 49m Normal SuccessfulCreate replicaset/controller-manager-c5c84d6f9 Created pod: controller-manager-c5c84d6f9-qxhsq openshift-controller-manager 49m Normal ScalingReplicaSet deployment/controller-manager Scaled up replica set controller-manager-c5c84d6f9 to 1 from 0 openshift-route-controller-manager 49m Normal SuccessfulDelete replicaset/route-controller-manager-7ff89c67c Deleted pod: route-controller-manager-7ff89c67c-2bq47 openshift-controller-manager 49m Normal ScalingReplicaSet deployment/controller-manager Scaled down replica set controller-manager-6fcd58c8dc to 2 from 3 openshift-controller-manager-operator 49m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 6, desired generation is 7.\nProgressing: deployment/route-controller-manager: observed generation is 5, desired generation is 6.\nProgressing: 
openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 2, desired generation is 3.") openshift-route-controller-manager 49m Normal Killing pod/route-controller-manager-7ff89c67c-2bq47 Stopping container route-controller-manager openshift-route-controller-manager 49m Normal ScalingReplicaSet deployment/route-controller-manager Scaled up replica set route-controller-manager-9b45479c5 to 1 from 0 openshift-controller-manager 49m Normal SuccessfulDelete replicaset/controller-manager-6fcd58c8dc Deleted pod: controller-manager-6fcd58c8dc-6vtpl openshift-console 49m Normal ScalingReplicaSet deployment/console Scaled up replica set console-7dc48fc574 to 2 openshift-console 49m Normal SuccessfulCreate replicaset/console-7dc48fc574 Created pod: console-7dc48fc574-fvlls openshift-console 49m Normal SuccessfulCreate replicaset/console-7dc48fc574 Created pod: console-7dc48fc574-4kqrk openshift-kube-apiserver-operator 49m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-9-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-console 49m Normal SuccessfulDelete replicaset/console-64949fc89 Deleted pod: console-64949fc89-v8nrv openshift-console 49m Normal ScalingReplicaSet deployment/console Scaled down replica set console-64949fc89 to 1 from 2 openshift-console 49m Normal Killing pod/console-64949fc89-v8nrv Stopping container console openshift-kube-apiserver 49m Normal AddedInterface pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.40/23] from ovn-kubernetes openshift-kube-apiserver 49m Normal Created pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Created container pruner openshift-kube-apiserver 49m Normal Started pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-apiserver 49m Normal Pulled pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine default 49m Normal SetDesiredConfig machineconfigpool/master Targeted node ip-10-0-197-197.ec2.internal to config rendered-master-0a9073c6468c496094e297e778284549 openshift-console-operator 49m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: Changes made during sync updates, additional sync expected."),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment") openshift-kube-apiserver-operator 49m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-9-ip-10-0-239-132.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 49m Normal AddedInterface pod/revision-pruner-9-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.17/23] from ovn-kubernetes openshift-kube-apiserver 49m Normal Pulled pod/revision-pruner-9-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine default 49m Normal NodeDone node/ip-10-0-232-8.ec2.internal Setting node ip-10-0-232-8.ec2.internal, currentConfig rendered-worker-65a660c5b4cafef14c5770efedbee76c to Done default 49m Normal ConfigDriftMonitorStarted node/ip-10-0-232-8.ec2.internal Config Drift Monitor started, watching against 
rendered-worker-65a660c5b4cafef14c5770efedbee76c openshift-kube-apiserver 49m Normal Created pod/revision-pruner-9-ip-10-0-239-132.ec2.internal Created container pruner openshift-kube-apiserver-operator 49m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 8" to "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 9",Available message changed from "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 8" to "StaticPodsAvailable: 0 nodes are active; 3 nodes are at revision 0; 0 nodes have achieved new revision 9" default 49m Normal Uncordon node/ip-10-0-232-8.ec2.internal Update completed for config rendered-worker-65a660c5b4cafef14c5770efedbee76c and node has been uncordoned openshift-kube-apiserver 49m Normal Killing pod/installer-8-ip-10-0-197-197.ec2.internal Stopping container installer openshift-console-operator 49m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Degraded message changed from "DownloadsDefaultRouteSyncDegraded: no ingress for host downloads-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org in route downloads in namespace openshift-console" to "All is well",Upgradeable changed from False to True ("All is well") openshift-kube-apiserver 49m Normal Started pod/revision-pruner-9-ip-10-0-239-132.ec2.internal Started container pruner openshift-kube-scheduler 49m Normal LeaderElection lease/kube-scheduler ip-10-0-197-197_7ae853a1-c4b8-45b5-93c2-5c3bb8a85792 became leader openshift-kube-scheduler 49m Normal LeaderElection configmap/kube-scheduler ip-10-0-197-197_7ae853a1-c4b8-45b5-93c2-5c3bb8a85792 became leader openshift-kube-scheduler 49m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-route-controller-manager 49m Normal AddedInterface pod/route-controller-manager-9b45479c5-kkjqb Add eth0 [10.129.0.18/23] from ovn-kubernetes openshift-route-controller-manager 49m Normal Pulled pod/route-controller-manager-9b45479c5-kkjqb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" already present on machine openshift-monitoring 49m Normal Created pod/prometheus-k8s-1 Created container init-config-reloader openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 49m Normal AddedInterface pod/prometheus-k8s-1 Add eth0 [10.131.0.16/23] from ovn-kubernetes openshift-monitoring 49m Normal Started pod/prometheus-k8s-1 Started container kube-rbac-proxy openshift-kube-scheduler 49m Warning ProbeError pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal Readiness probe error: Get "https://10.0.239.132:10259/healthz": dial tcp 10.0.239.132:10259: connect: connection refused... 
openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 49m Normal Created pod/prometheus-k8s-1 Created container prometheus-proxy openshift-monitoring 49m Normal Created pod/prometheus-k8s-1 Created container config-reloader openshift-monitoring 49m Normal Started pod/prometheus-k8s-1 Started container config-reloader openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" already present on machine openshift-monitoring 49m Normal Created pod/prometheus-k8s-1 Created container prometheus openshift-monitoring 49m Normal Started pod/prometheus-k8s-1 Started container kube-rbac-proxy-thanos openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 49m Normal Started pod/prometheus-k8s-1 Started container prometheus openshift-monitoring 49m Normal Created pod/prometheus-k8s-1 Created container kube-rbac-proxy openshift-monitoring 49m Normal Created pod/prometheus-k8s-1 Created container thanos-sidecar openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" already present on machine openshift-kube-scheduler 49m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-monitoring 49m Normal Started pod/prometheus-k8s-1 Started container thanos-sidecar openshift-route-controller-manager 49m Normal Started pod/route-controller-manager-9b45479c5-kkjqb Started container route-controller-manager openshift-route-controller-manager 49m Normal Created pod/route-controller-manager-9b45479c5-kkjqb Created container route-controller-manager openshift-monitoring 49m Normal Started pod/prometheus-k8s-1 Started container init-config-reloader openshift-kube-scheduler 49m Warning Unhealthy pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal Readiness probe failed: Get "https://10.0.239.132:10259/healthz": dial tcp 10.0.239.132:10259: connect: connection refused openshift-monitoring 49m Normal Created pod/prometheus-k8s-1 Created container kube-rbac-proxy-thanos openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 49m Normal Started pod/prometheus-k8s-1 Started container prometheus-proxy openshift-kube-scheduler 49m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine 
openshift-kube-scheduler 49m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler openshift-kube-scheduler 49m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler openshift-kube-scheduler 49m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container wait-for-host-port openshift-kube-scheduler 49m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container wait-for-host-port openshift-kube-apiserver 49m Normal Started pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Started container pruner openshift-kube-scheduler 49m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler-cert-syncer openshift-kube-scheduler 49m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 49m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler-recovery-controller openshift-kube-scheduler 49m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler-recovery-controller openshift-kube-apiserver 49m Normal Created pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Created container pruner openshift-kube-apiserver-operator 49m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-9-ip-10-0-140-6.ec2.internal -n openshift-kube-apiserver because it was missing openshift-route-controller-manager 49m Normal SuccessfulDelete replicaset/route-controller-manager-7ff89c67c Deleted pod: route-controller-manager-7ff89c67c-8b8g2 openshift-route-controller-manager 49m Normal SuccessfulCreate replicaset/route-controller-manager-9b45479c5 Created pod: route-controller-manager-9b45479c5-69h2c openshift-console-operator 49m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org returns '503 Service Unavailable'" openshift-route-controller-manager 49m Normal ScalingReplicaSet deployment/route-controller-manager Scaled down replica set route-controller-manager-7ff89c67c to 1 from 2 openshift-route-controller-manager 49m Normal ScalingReplicaSet deployment/route-controller-manager Scaled up replica set route-controller-manager-9b45479c5 to 2 from 1 openshift-kube-scheduler 49m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler-cert-syncer openshift-kube-apiserver 49m Normal Pulled pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 49m Normal AddedInterface pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Add eth0 
[10.128.0.21/23] from ovn-kubernetes openshift-route-controller-manager 49m Normal Killing pod/route-controller-manager-7ff89c67c-8b8g2 Stopping container route-controller-manager default 49m Normal OSUpdateStaged node/ip-10-0-197-197.ec2.internal Changes to OS staged default 49m Normal SkipReboot node/ip-10-0-197-197.ec2.internal Config changes do not require reboot. openshift-apiserver 49m Warning ProbeError pod/apiserver-6977bc9f6b-b9qrr Readiness probe error: Get "https://10.129.0.34:8443/readyz": dial tcp 10.129.0.34:8443: connect: connection refused... openshift-console 49m Normal Pulling pod/console-7dc48fc574-4kqrk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" openshift-console 49m Normal AddedInterface pod/console-7dc48fc574-4kqrk Add eth0 [10.130.0.43/23] from ovn-kubernetes openshift-controller-manager 49m Normal AddedInterface pod/controller-manager-c5c84d6f9-qxhsq Add eth0 [10.130.0.41/23] from ovn-kubernetes openshift-controller-manager 49m Normal Created pod/controller-manager-c5c84d6f9-qxhsq Created container controller-manager openshift-controller-manager 49m Normal Started pod/controller-manager-c5c84d6f9-qxhsq Started container controller-manager openshift-controller-manager 49m Normal Pulled pod/controller-manager-c5c84d6f9-qxhsq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" already present on machine default 49m Normal PendingConfig node/ip-10-0-197-197.ec2.internal Written pending config rendered-master-0a9073c6468c496094e297e778284549 openshift-route-controller-manager 49m Normal Pulled pod/route-controller-manager-9b45479c5-69h2c Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" already present on machine openshift-route-controller-manager 49m Normal AddedInterface pod/route-controller-manager-9b45479c5-69h2c Add eth0 [10.130.0.44/23] from ovn-kubernetes openshift-console-operator 49m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: Changes made during sync updates, additional sync expected." 
to "SyncLoopRefreshProgressing: Working toward version 4.13.0-rc.0, 0 replicas available" openshift-kube-scheduler-operator 49m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:23:10.564749 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:23:10.564804 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:23:40.544138 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:23:40.544171 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:23:42.227292 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:23:42.227357 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-route-controller-manager 49m Normal Started pod/route-controller-manager-9b45479c5-69h2c Started container route-controller-manager openshift-route-controller-manager 49m Normal Created pod/route-controller-manager-9b45479c5-69h2c Created container route-controller-manager openshift-controller-manager 49m Normal SuccessfulCreate replicaset/controller-manager-c5c84d6f9 Created pod: controller-manager-c5c84d6f9-x72pp openshift-controller-manager 49m Normal Killing pod/controller-manager-6fcd58c8dc-dnsjp Stopping container controller-manager 
openshift-controller-manager 49m Normal ScalingReplicaSet deployment/controller-manager Scaled down replica set controller-manager-6fcd58c8dc to 1 from 2 openshift-controller-manager 49m Normal ScalingReplicaSet deployment/controller-manager Scaled up replica set controller-manager-c5c84d6f9 to 2 from 1 openshift-controller-manager 49m Normal SuccessfulDelete replicaset/controller-manager-6fcd58c8dc Deleted pod: controller-manager-6fcd58c8dc-dnsjp openshift-console 49m Normal Created pod/console-7dc48fc574-4kqrk Created container console openshift-console 49m Normal Pulled pod/console-7dc48fc574-4kqrk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" in 2.659059849s (2.659072592s including waiting) openshift-console 49m Normal Started pod/console-7dc48fc574-4kqrk Started container console openshift-route-controller-manager 49m Normal ScalingReplicaSet deployment/route-controller-manager Scaled down replica set route-controller-manager-7ff89c67c to 0 from 1 openshift-route-controller-manager 49m Normal Killing pod/route-controller-manager-7ff89c67c-4622z Stopping container route-controller-manager openshift-route-controller-manager 49m Normal SuccessfulCreate replicaset/route-controller-manager-9b45479c5 Created pod: route-controller-manager-9b45479c5-q5nh8 openshift-route-controller-manager 49m Normal ScalingReplicaSet deployment/route-controller-manager Scaled up replica set route-controller-manager-9b45479c5 to 3 from 2 openshift-route-controller-manager 49m Normal SuccessfulDelete replicaset/route-controller-manager-7ff89c67c Deleted pod: route-controller-manager-7ff89c67c-4622z openshift-oauth-apiserver 49m Warning Unhealthy pod/apiserver-9b9694fdc-g7gxw Readiness probe failed: Get "https://10.129.0.32:8443/readyz": dial tcp 10.129.0.32:8443: connect: connection refused openshift-controller-manager 49m Normal AddedInterface pod/controller-manager-c5c84d6f9-x72pp Add eth0 [10.129.0.19/23] from ovn-kubernetes openshift-controller-manager 49m Normal Pulled pod/controller-manager-c5c84d6f9-x72pp Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" already present on machine openshift-kube-apiserver-operator 49m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-9-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-route-controller-manager 49m Normal Pulled pod/route-controller-manager-9b45479c5-q5nh8 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" already present on machine openshift-controller-manager 49m Normal SuccessfulDelete replicaset/controller-manager-6fcd58c8dc Deleted pod: controller-manager-6fcd58c8dc-wdb9f openshift-controller-manager 49m Normal ScalingReplicaSet deployment/controller-manager Scaled up replica set controller-manager-c5c84d6f9 to 3 from 2 openshift-controller-manager 49m Normal Created pod/controller-manager-c5c84d6f9-x72pp Created container controller-manager openshift-controller-manager 49m Normal Started pod/controller-manager-c5c84d6f9-x72pp Started container controller-manager openshift-controller-manager 49m Normal SuccessfulCreate replicaset/controller-manager-c5c84d6f9 Created pod: controller-manager-c5c84d6f9-wrj8l openshift-controller-manager 49m Normal ScalingReplicaSet deployment/controller-manager Scaled down replica set 
controller-manager-6fcd58c8dc to 0 from 1 openshift-route-controller-manager 49m Normal AddedInterface pod/route-controller-manager-9b45479c5-q5nh8 Add eth0 [10.128.0.22/23] from ovn-kubernetes openshift-route-controller-manager 49m Normal Created pod/route-controller-manager-9b45479c5-q5nh8 Created container route-controller-manager openshift-kube-apiserver 49m Normal AddedInterface pod/installer-9-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.45/23] from ovn-kubernetes openshift-route-controller-manager 49m Normal LeaderElection lease/openshift-route-controllers route-controller-manager-9b45479c5-q5nh8 became leader openshift-kube-apiserver 49m Normal Started pod/installer-9-ip-10-0-197-197.ec2.internal Started container installer openshift-kube-apiserver 49m Normal Created pod/installer-9-ip-10-0-197-197.ec2.internal Created container installer openshift-kube-apiserver 49m Normal Pulled pod/installer-9-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-route-controller-manager 49m Normal Started pod/route-controller-manager-9b45479c5-q5nh8 Started container route-controller-manager openshift-controller-manager 49m Normal Pulled pod/controller-manager-c5c84d6f9-wrj8l Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" already present on machine openshift-controller-manager 49m Normal Killing pod/controller-manager-6fcd58c8dc-wdb9f Stopping container controller-manager openshift-controller-manager 49m Normal AddedInterface pod/controller-manager-c5c84d6f9-wrj8l Add eth0 [10.128.0.23/23] from ovn-kubernetes openshift-controller-manager 49m Normal Started pod/controller-manager-c5c84d6f9-wrj8l Started container controller-manager openshift-controller-manager 49m Normal LeaderElection configmap/openshift-master-controllers controller-manager-c5c84d6f9-wrj8l became leader openshift-controller-manager 49m Normal Created pod/controller-manager-c5c84d6f9-wrj8l Created container controller-manager openshift-controller-manager 49m Normal LeaderElection lease/openshift-master-controllers controller-manager-c5c84d6f9-wrj8l became leader openshift-console 49m Warning Unhealthy pod/console-64949fc89-v8nrv Readiness probe failed: Get "https://10.128.0.11:8443/health": dial tcp 10.128.0.11:8443: connect: connection refused openshift-oauth-apiserver 49m Warning ProbeError pod/apiserver-9b9694fdc-g7gxw Readiness probe error: Get "https://10.129.0.32:8443/readyz": dial tcp 10.129.0.32:8443: connect: connection refused... openshift-console 49m Warning ProbeError pod/console-64949fc89-v8nrv Readiness probe error: Get "https://10.128.0.11:8443/health": dial tcp 10.128.0.11:8443: connect: connection refused... 
openshift-ingress-operator 49m Warning MalscheduledPod deployment/ingress-operator pod/router-default-699d8c97f-6nwwk pod/router-default-699d8c97f-9xbcx should be one per node, but all were placed on node/ip-10-0-160-152.ec2.internal; evicting pod/router-default-699d8c97f-6nwwk default 49m Normal NodeDone node/ip-10-0-197-197.ec2.internal Setting node ip-10-0-197-197.ec2.internal, currentConfig rendered-master-0a9073c6468c496094e297e778284549 to Done default 49m Normal Uncordon node/ip-10-0-197-197.ec2.internal Update completed for config rendered-master-0a9073c6468c496094e297e778284549 and node has been uncordoned openshift-ingress 49m Normal AddedInterface pod/router-default-699d8c97f-mlkcv Add eth0 [10.128.2.16/23] from ovn-kubernetes openshift-ingress-operator 49m Warning MalscheduledPod deployment/ingress-operator pod/router-default-699d8c97f-6nwwk pod/router-default-699d8c97f-9xbcx should be one per node, but all were placed on node/ip-10-0-160-152.ec2.internal; evicting pod/router-default-699d8c97f-9xbcx openshift-ingress 49m Normal Pulling pod/router-default-699d8c97f-mlkcv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd" openshift-controller-manager-operator 49m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well") default 49m Normal ConfigDriftMonitorStarted node/ip-10-0-197-197.ec2.internal Config Drift Monitor started, watching against rendered-master-0a9073c6468c496094e297e778284549 openshift-ingress 49m Normal Killing pod/router-default-699d8c97f-6nwwk Stopping container router openshift-ingress 49m Normal SuccessfulCreate replicaset/router-default-699d8c97f Created pod: router-default-699d8c97f-mlkcv openshift-ingress 49m Normal Started pod/router-default-699d8c97f-mlkcv Started container router openshift-kube-apiserver-operator 49m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" openshift-ingress 49m Normal Pulled pod/router-default-699d8c97f-mlkcv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd" in 1.7552129s (1.755226122s including waiting) openshift-ingress 49m Normal Created pod/router-default-699d8c97f-mlkcv Created container router openshift-monitoring 49m Normal Killing pod/prometheus-k8s-0 Stopping container kube-rbac-proxy openshift-monitoring 49m Normal SuccessfulDelete statefulset/prometheus-k8s delete Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful openshift-monitoring 49m Normal Killing pod/prometheus-k8s-0 Stopping container prometheus openshift-etcd 49m Warning Unhealthy pod/etcd-guard-ip-10-0-239-132.ec2.internal Readiness probe failed: Get "https://10.0.239.132:9980/healthz": dial tcp 10.0.239.132:9980: connect: connection refused openshift-authentication 49m Normal AddedInterface pod/oauth-openshift-86966797f8-g5rm7 Add eth0 [10.128.0.24/23] from ovn-kubernetes openshift-authentication 49m Normal Started pod/oauth-openshift-86966797f8-g5rm7 Started container 
oauth-openshift openshift-etcd-operator 49m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "StaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd\" started at 2023-03-21 12:15:57 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-authentication 49m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-86966797f8 to 2 from 1 openshift-authentication 49m Normal Pulled pod/oauth-openshift-86966797f8-g5rm7 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" already present on machine openshift-authentication 49m Normal Created pod/oauth-openshift-86966797f8-g5rm7 Created container oauth-openshift openshift-authentication 49m Normal SuccessfulCreate replicaset/oauth-openshift-86966797f8 Created pod: oauth-openshift-86966797f8-b47q9 openshift-authentication 49m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-cf968c599 to 1 from 2 openshift-authentication 49m Normal SuccessfulDelete replicaset/oauth-openshift-cf968c599 Deleted pod: oauth-openshift-cf968c599-ffkkn openshift-monitoring 49m Normal SuccessfulCreate statefulset/prometheus-k8s create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful openshift-monitoring 49m Normal SuccessfulDelete statefulset/alertmanager-main delete Pod alertmanager-main-0 in StatefulSet alertmanager-main successful openshift-monitoring 49m Normal Started pod/prometheus-k8s-0 Started container config-reloader openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 49m Normal Started pod/prometheus-k8s-0 Started container thanos-sidecar openshift-monitoring 49m Normal AddedInterface pod/prometheus-k8s-0 Add eth0 [10.128.2.17/23] from ovn-kubernetes openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" already present on machine openshift-monitoring 49m Normal Created pod/prometheus-k8s-0 Created container thanos-sidecar openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 49m Normal Created pod/prometheus-k8s-0 Created container prometheus openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" already present on machine openshift-monitoring 49m Normal Started pod/prometheus-k8s-0 Started container init-config-reloader openshift-monitoring 49m Normal Created pod/prometheus-k8s-0 Created container init-config-reloader openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-0 Container 
image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 49m Normal Started pod/prometheus-k8s-0 Started container prometheus openshift-monitoring 49m Normal Created pod/prometheus-k8s-0 Created container config-reloader openshift-monitoring 49m Normal Started pod/prometheus-k8s-0 Started container prometheus-proxy openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-apiserver-operator 49m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing operand on node ip-10-0-239-132.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-monitoring 49m Normal Started pod/prometheus-k8s-0 Started container kube-rbac-proxy openshift-monitoring 49m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 49m Normal Created pod/prometheus-k8s-0 Created container kube-rbac-proxy-thanos openshift-monitoring 49m Normal Started pod/prometheus-k8s-0 Started container kube-rbac-proxy-thanos openshift-monitoring 49m Normal Created pod/prometheus-k8s-0 Created container prometheus-proxy openshift-authentication-operator 49m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-monitoring 49m Normal Created pod/prometheus-k8s-0 Created container kube-rbac-proxy openshift-monitoring 49m Normal SuccessfulCreate statefulset/alertmanager-main create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful openshift-monitoring 49m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" 
already present on machine openshift-monitoring 49m Normal Created pod/alertmanager-main-0 Created container kube-rbac-proxy-metric openshift-monitoring 49m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 49m Normal Created pod/alertmanager-main-0 Created container config-reloader openshift-monitoring 49m Normal Started pod/alertmanager-main-0 Started container config-reloader openshift-monitoring 49m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 49m Normal Created pod/alertmanager-main-0 Created container alertmanager-proxy openshift-monitoring 49m Normal Created pod/alertmanager-main-0 Created container kube-rbac-proxy openshift-monitoring 49m Normal Started pod/alertmanager-main-0 Started container alertmanager-proxy openshift-monitoring 49m Normal Started pod/alertmanager-main-0 Started container kube-rbac-proxy openshift-monitoring 49m Normal Started pod/alertmanager-main-0 Started container kube-rbac-proxy-metric openshift-monitoring 49m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" already present on machine openshift-monitoring 49m Normal Created pod/alertmanager-main-0 Created container prom-label-proxy openshift-monitoring 49m Normal Started pod/alertmanager-main-0 Started container prom-label-proxy openshift-monitoring 49m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 49m Normal AddedInterface pod/alertmanager-main-0 Add eth0 [10.128.2.18/23] from ovn-kubernetes openshift-monitoring 49m Normal Started pod/alertmanager-main-0 Started container alertmanager openshift-monitoring 49m Normal Created pod/alertmanager-main-0 Created container alertmanager openshift-monitoring 49m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" already present on machine openshift-etcd-operator 49m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "StaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd\" started at 2023-03-21 12:15:57 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "StaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd-metrics\" is terminated: Error: ,\"caller\":\"zapgrpc/zapgrpc.go:191\",\"msg\":\"[core] grpc: addrConn.createTransport failed to connect to {10.0.239.132:9978 10.0.239.132 0 }. 
Err: connection error: desc = \\\"transport: Error while dialing dial tcp 10.0.239.132:9978: connect: connection refused\\\"\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:24.735Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:24.735Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000427d90, TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.309Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.309Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000427d90, IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.309Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.309Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel picks a new address \\\"10.0.239.132:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.309Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000427d90, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.315Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.315Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000427d90, READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.315Z\",\"caller\":\nStaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcdctl\" is terminated: Error: \nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-controller-manager-operator 49m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: =12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\nNodeInstallerDegraded: (string) (len=27) \"kube-controller-manager-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=32) \"cluster-policy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) 
\"recycler-config\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:21:25.110934 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.118932 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.122087 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:55.122828 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:21:55.124057 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: =12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\nNodeInstallerDegraded: (string) (len=27) \"kube-controller-manager-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=32) \"cluster-policy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) 
,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:21:25.110934 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.118932 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.122087 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:55.122828 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:21:55.124057 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is terminated: Error: s/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt\" certDetail=\"\\\"openshift-kube-apiserver-operator_node-system-admin-signer@1679400872\\\" [] issuer=\\\"\\\" (2023-03-21 12:14:31 +0000 UTC to 2024-03-20 12:14:32 +0000 UTC (now=2023-03-21 12:24:08.308736378 +0000 UTC))\"\nStaticPodsDegraded: I0321 12:24:08.308771 1 tlsconfig.go:178] \"Loaded client CA\" index=7 certName=\"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt\" certDetail=\"\\\"aggregator-signer\\\" [] issuer=\\\"\\\" (2023-03-21 12:02:33 +0000 UTC to 2023-03-22 12:02:33 +0000 UTC (now=2023-03-21 12:24:08.308760986 +0000 UTC))\"\nStaticPodsDegraded: I0321 12:24:08.308881 1 tlsconfig.go:200] \"Loaded serving cert\" certName=\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\" certDetail=\"\\\"kube-controller-manager.openshift-kube-controller-manager.svc\\\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\\\"openshift-service-serving-signer@1679400872\\\" (2023-03-21 12:14:35 +0000 UTC to 2025-03-20 12:14:36 +0000 UTC (now=2023-03-21 12:24:08.308865571 +0000 UTC))\"\nStaticPodsDegraded: I0321 12:24:08.308982 1 named_certificates.go:53] \"Loaded SNI cert\" index=0 certName=\"self-signed 
loopback\" certDetail=\"\\\"apiserver-loopback-client@1679401448\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\"apiserver-loopback-client-ca@1679401447\\\" (2023-03-21 11:24:07 +0000 UTC to 2024-03-20 11:24:07 +0000 UTC (now=2023-03-21 12:24:08.308970507 +0000 UTC))\"\nStaticPodsDegraded: I0321 12:24:08.309001 1 secure_serving.go:210] Serving securely on [::]:10257\nStaticPodsDegraded: I0321 12:24:08.309078 1 tlsconfig.go:240] \"Starting DynamicServingCertificateController\"\nStaticPodsDegraded: I0321 12:24:08.309211 1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: calhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:23:00.100152 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:23:00.100213 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:23:36.561111 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:23:36.561152 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:23:48.412568 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:23:48.412609 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: " openshift-oauth-apiserver 49m Normal Pulled pod/apiserver-8ddbf84fd-7qf7p Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-oauth-apiserver 49m Normal AddedInterface pod/apiserver-8ddbf84fd-7qf7p Add eth0 [10.129.0.20/23] from ovn-kubernetes openshift-console 49m Normal Pulled 
pod/console-7dc48fc574-fvlls Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" already present on machine openshift-console 49m Normal AddedInterface pod/console-7dc48fc574-fvlls Add eth0 [10.128.0.25/23] from ovn-kubernetes openshift-console 49m Normal Started pod/console-7dc48fc574-fvlls Started container console openshift-oauth-apiserver 49m Normal Started pod/apiserver-8ddbf84fd-7qf7p Started container oauth-apiserver openshift-oauth-apiserver 49m Normal Created pod/apiserver-8ddbf84fd-7qf7p Created container fix-audit-permissions openshift-console 49m Normal Created pod/console-7dc48fc574-fvlls Created container console openshift-oauth-apiserver 49m Normal Created pod/apiserver-8ddbf84fd-7qf7p Created container oauth-apiserver openshift-oauth-apiserver 49m Normal Pulled pod/apiserver-8ddbf84fd-7qf7p Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-oauth-apiserver 49m Normal Started pod/apiserver-8ddbf84fd-7qf7p Started container fix-audit-permissions openshift-etcd-operator 49m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "StaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd-metrics\" is terminated: Error: ,\"caller\":\"zapgrpc/zapgrpc.go:191\",\"msg\":\"[core] grpc: addrConn.createTransport failed to connect to {10.0.239.132:9978 10.0.239.132 0 }. Err: connection error: desc = \\\"transport: Error while dialing dial tcp 10.0.239.132:9978: connect: connection refused\\\"\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:24.735Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:24.735Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000427d90, TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.309Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.309Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000427d90, IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.309Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.309Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel picks a new address \\\"10.0.239.132:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.309Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000427d90, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.315Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.315Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state 
change: 0xc000427d90, READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:16:44.315Z\",\"caller\":\nStaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcdctl\" is terminated: Error: \nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd 49m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-kube-controller-manager 49m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager-cert-syncer openshift-etcd 49m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container setup openshift-kube-controller-manager 49m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager-recovery-controller openshift-oauth-apiserver 49m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-9b9694fdc to 0 from 1 openshift-etcd 49m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container setup openshift-oauth-apiserver 49m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-8ddbf84fd to 3 from 2 openshift-kube-controller-manager 49m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager-recovery-controller openshift-etcd 49m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-ensure-env-vars openshift-etcd 49m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-ensure-env-vars openshift-kube-controller-manager 49m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager-cert-syncer openshift-kube-controller-manager 49m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-oauth-apiserver 49m Normal SuccessfulCreate replicaset/apiserver-8ddbf84fd Created pod: apiserver-8ddbf84fd-4jwnk openshift-kube-controller-manager 49m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container cluster-policy-controller openshift-kube-controller-manager 49m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container cluster-policy-controller openshift-kube-controller-manager 49m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" already present on machine openshift-oauth-apiserver 49m Normal Killing pod/apiserver-9b9694fdc-sl5wc Stopping container oauth-apiserver openshift-oauth-apiserver 49m Normal SuccessfulDelete replicaset/apiserver-9b9694fdc Deleted pod: apiserver-9b9694fdc-sl5wc openshift-kube-controller-manager 49m Normal Pulled 
pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-etcd 49m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 49m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-resources-copy openshift-kube-controller-manager 49m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-140-6.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-etcd 49m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-authentication-operator 49m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-etcd 49m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-resources-copy openshift-etcd 49m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 
49m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 49m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcdctl openshift-etcd 49m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcdctl openshift-etcd 49m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-readyz openshift-kube-controller-manager-operator 49m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: =12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\nNodeInstallerDegraded: (string) (len=27) \"kube-controller-manager-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=32) \"cluster-policy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:21:25.110934 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.118932 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.122087 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:55.122828 1 cmd.go:521] Getting installer pods for node 
ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:21:55.124057 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is terminated: Error: s/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt\" certDetail=\"\\\"openshift-kube-apiserver-operator_node-system-admin-signer@1679400872\\\" [] issuer=\\\"\\\" (2023-03-21 12:14:31 +0000 UTC to 2024-03-20 12:14:32 +0000 UTC (now=2023-03-21 12:24:08.308736378 +0000 UTC))\"\nStaticPodsDegraded: I0321 12:24:08.308771 1 tlsconfig.go:178] \"Loaded client CA\" index=7 certName=\"client-ca-bundle::/etc/kubernetes/static-pod-certs/configmaps/client-ca/ca-bundle.crt,request-header::/etc/kubernetes/static-pod-certs/configmaps/aggregator-client-ca/ca-bundle.crt\" certDetail=\"\\\"aggregator-signer\\\" [] issuer=\\\"\\\" (2023-03-21 12:02:33 +0000 UTC to 2023-03-22 12:02:33 +0000 UTC (now=2023-03-21 12:24:08.308760986 +0000 UTC))\"\nStaticPodsDegraded: I0321 12:24:08.308881 1 tlsconfig.go:200] \"Loaded serving cert\" certName=\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\" certDetail=\"\\\"kube-controller-manager.openshift-kube-controller-manager.svc\\\" [serving] validServingFor=[kube-controller-manager.openshift-kube-controller-manager.svc,kube-controller-manager.openshift-kube-controller-manager.svc.cluster.local] issuer=\\\"openshift-service-serving-signer@1679400872\\\" (2023-03-21 12:14:35 +0000 UTC to 2025-03-20 12:14:36 +0000 UTC (now=2023-03-21 12:24:08.308865571 +0000 UTC))\"\nStaticPodsDegraded: I0321 12:24:08.308982 1 named_certificates.go:53] \"Loaded SNI cert\" index=0 certName=\"self-signed loopback\" certDetail=\"\\\"apiserver-loopback-client@1679401448\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\"apiserver-loopback-client-ca@1679401447\\\" (2023-03-21 11:24:07 +0000 UTC to 2024-03-20 11:24:07 +0000 UTC (now=2023-03-21 12:24:08.308970507 +0000 UTC))\"\nStaticPodsDegraded: I0321 12:24:08.309001 1 secure_serving.go:210] Serving securely on [::]:10257\nStaticPodsDegraded: I0321 12:24:08.309078 1 tlsconfig.go:240] \"Starting DynamicServingCertificateController\"\nStaticPodsDegraded: I0321 12:24:08.309211 1 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: calhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:23:00.100152 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:23:00.100213 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:23:36.561111 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:23:36.561152 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:23:48.412568 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:23:48.412609 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: =12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\nNodeInstallerDegraded: (string) (len=27) \"kube-controller-manager-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=32) \"cluster-policy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: 
ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:21:25.110934 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.118932 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.122087 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:55.122828 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:21:55.124057 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: " openshift-etcd 49m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd openshift-etcd 49m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 49m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 49m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-readyz openshift-etcd 49m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-metrics openshift-etcd 49m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd openshift-etcd 49m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-metrics openshift-kube-controller-manager-operator 49m Normal SecretUpdated deployment/kube-controller-manager-operator Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed openshift-authentication-operator 49m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-9b9694fdc-sl5wc pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" openshift-apiserver 49m Normal AddedInterface pod/apiserver-7475f65d84-4ncn2 
Add eth0 [10.129.0.21/23] from ovn-kubernetes openshift-apiserver 49m Normal Pulled pod/apiserver-7475f65d84-4ncn2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-kube-controller-manager-operator 49m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/revision-status-6 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 49m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-manager-pod-6 -n openshift-kube-controller-manager because it was missing openshift-apiserver 49m Normal Created pod/apiserver-7475f65d84-4ncn2 Created container fix-audit-permissions openshift-apiserver 49m Normal Started pod/apiserver-7475f65d84-4ncn2 Started container fix-audit-permissions openshift-apiserver 49m Normal Pulled pod/apiserver-7475f65d84-4ncn2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-apiserver 49m Normal Pulled pod/apiserver-7475f65d84-4ncn2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-apiserver 49m Normal Started pod/apiserver-7475f65d84-4ncn2 Started container openshift-apiserver openshift-apiserver 49m Normal Created pod/apiserver-7475f65d84-4ncn2 Created container openshift-apiserver openshift-apiserver 49m Normal Started pod/apiserver-7475f65d84-4ncn2 Started container openshift-apiserver-check-endpoints openshift-apiserver 49m Normal Created pod/apiserver-7475f65d84-4ncn2 Created container openshift-apiserver-check-endpoints openshift-kube-controller-manager-operator 49m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/config-6 -n openshift-kube-controller-manager because it was missing openshift-apiserver 49m Warning FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-apiserver 49m Warning FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 49m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/cluster-policy-controller-config-6 -n openshift-kube-controller-manager because it was missing openshift-apiserver 49m Normal Killing pod/apiserver-6977bc9f6b-6c47k Stopping container openshift-apiserver openshift-apiserver 49m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-6977bc9f6b to 1 from 2 openshift-apiserver 49m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-7475f65d84 to 2 from 1 openshift-kube-controller-manager-operator 49m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/controller-manager-kubeconfig-6 -n openshift-kube-controller-manager because it was missing openshift-apiserver 49m Normal Killing pod/apiserver-6977bc9f6b-6c47k Stopping container openshift-apiserver-check-endpoints openshift-apiserver 49m Normal SuccessfulCreate replicaset/apiserver-7475f65d84 Created pod: apiserver-7475f65d84-whqlh openshift-apiserver 49m Normal 
SuccessfulDelete replicaset/apiserver-6977bc9f6b Deleted pod: apiserver-6977bc9f6b-6c47k openshift-kube-controller-manager-operator 49m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-cert-syncer-kubeconfig-6 -n openshift-kube-controller-manager because it was missing openshift-authentication 49m Normal Killing pod/oauth-openshift-cf968c599-ffkkn Stopping container oauth-openshift openshift-kube-controller-manager-operator 49m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 49m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/service-ca-6 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 49m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-140-6.ec2.internal" from revision 4 to 5 because static pod is ready openshift-kube-controller-manager-operator 49m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: installer: =12) \"serving-cert\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=8 cap=8) {\nNodeInstallerDegraded: (string) (len=27) \"kube-controller-manager-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=32) \"cluster-policy-controller-config\",\nNodeInstallerDegraded: (string) (len=29) \"controller-manager-kubeconfig\",\nNodeInstallerDegraded: (string) (len=38) \"kube-controller-cert-syncer-kubeconfig\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=10) \"service-ca\",\nNodeInstallerDegraded: (string) (len=15) \"recycler-config\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"cloud-config\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=39) \"kube-controller-manager-client-cert-key\",\nNodeInstallerDegraded: (string) (len=10) \"csr-signer\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) (len=2 cap=2) {\nNodeInstallerDegraded: (string) (len=20) \"aggregator-client-ca\",\nNodeInstallerDegraded: (string) (len=9) \"client-ca\"\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=17) \"trusted-ca-bundle\"\nNodeInstallerDegraded: CertDir: (string) (len=66) \"/etc/kubernetes/static-pod-resources/kube-controller-manager-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: 
KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:21:25.110934 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.118932 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:25.122087 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:21:55.122828 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:21:55.124057 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods?labelSelector=app%3Dinstaller\": dial tcp 172.30.0.1:443: connect: connection refused\nNodeInstallerDegraded: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused",Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 4; 1 nodes are at revision 5" to "NodeInstallerProgressing: 1 nodes are at revision 4; 2 nodes are at revision 5",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 4; 1 nodes are at revision 5" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 4; 2 nodes are at revision 5" openshift-kube-controller-manager-operator 49m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/recycler-config-6 -n openshift-kube-controller-manager because it was missing openshift-kube-apiserver 49m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 49m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-cert-syncer openshift-kube-apiserver 49m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver openshift-kube-apiserver 49m Normal StaticPodInstallerCompleted pod/installer-9-ip-10-0-197-197.ec2.internal Successfully installed revision 9 openshift-kube-apiserver 49m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-insecure-readyz openshift-kube-controller-manager-operator 49m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/service-account-private-key-6 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 49m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/serving-cert-6 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 49m Normal RevisionTriggered deployment/kube-controller-manager-operator new revision 6 triggered by "secret/service-account-private-key has changed" openshift-kube-controller-manager-operator 49m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/localhost-recovery-client-token-6 -n openshift-kube-controller-manager because it was missing openshift-ingress 49m Warning ProbeError pod/router-default-699d8c97f-6nwwk Readiness probe error: HTTP probe failed with statuscode: 500... 
openshift-kube-controller-manager-operator 49m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nRevisionControllerDegraded: conflicting latestAvailableRevision 6" openshift-kube-controller-manager-operator 49m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nRevisionControllerDegraded: conflicting latestAvailableRevision 6" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused" openshift-kube-controller-manager-operator 49m Normal RevisionCreate deployment/kube-controller-manager-operator Revision 5 created because secret/service-account-private-key has changed openshift-etcd 49m Warning ProbeError pod/etcd-ip-10-0-239-132.ec2.internal Startup probe error: Get "https://10.0.239.132:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)... openshift-etcd 49m Warning Unhealthy pod/etcd-ip-10-0-239-132.ec2.internal Startup probe failed: Get "https://10.0.239.132:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) openshift-console 49m Warning Unhealthy pod/console-64949fc89-nhxbj Readiness probe failed: Get "https://10.129.0.16:8443/health": dial tcp 10.129.0.16:8443: connect: connection refused openshift-kube-scheduler-operator 49m Normal NodeCurrentRevisionChanged deployment/openshift-kube-scheduler-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 6 to 7 because static pod is ready openshift-kube-controller-manager-operator 49m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-197-197.ec2.internal" from revision 4 to 5 because node ip-10-0-197-197.ec2.internal with revision 4 is the oldest openshift-kube-controller-manager-operator 49m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 4; 2 nodes are at revision 5" to "NodeInstallerProgressing: 1 nodes are at revision 4; 2 nodes are at revision 5; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 4; 2 nodes are at revision 5" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 4; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" openshift-kube-scheduler-operator 49m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 6; 1 nodes are at revision 7" to "NodeInstallerProgressing: 1 
nodes are at revision 6; 2 nodes are at revision 7",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 6; 1 nodes are at revision 7" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7" openshift-authentication 48m Normal AddedInterface pod/oauth-openshift-86966797f8-b47q9 Add eth0 [10.130.0.47/23] from ovn-kubernetes openshift-authentication 48m Normal Pulled pod/oauth-openshift-86966797f8-b47q9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" already present on machine openshift-authentication 48m Normal Created pod/oauth-openshift-86966797f8-b47q9 Created container oauth-openshift openshift-authentication 48m Normal Started pod/oauth-openshift-86966797f8-b47q9 Started container oauth-openshift openshift-authentication 48m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-cf968c599 to 0 from 1 openshift-apiserver 48m Warning ProbeError pod/apiserver-6977bc9f6b-6c47k Readiness probe error: HTTP probe failed with statuscode: 500... openshift-authentication 48m Normal Killing pod/oauth-openshift-cf968c599-9vrxf Stopping container oauth-openshift openshift-kube-controller-manager-operator 48m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-6-ip-10-0-197-197.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-authentication 48m Normal SuccessfulDelete replicaset/oauth-openshift-cf968c599 Deleted pod: oauth-openshift-cf968c599-9vrxf openshift-authentication 48m Normal SuccessfulCreate replicaset/oauth-openshift-86966797f8 Created pod: oauth-openshift-86966797f8-sbdp5 openshift-apiserver 48m Warning Unhealthy pod/apiserver-6977bc9f6b-6c47k Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-authentication 48m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-86966797f8 to 3 from 2 openshift-kube-controller-manager 48m Normal Pulled pod/installer-6-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 48m Normal AddedInterface pod/installer-6-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.48/23] from ovn-kubernetes openshift-kube-controller-manager 48m Normal Created pod/installer-6-ip-10-0-197-197.ec2.internal Created container installer openshift-kube-controller-manager 48m Normal Started pod/installer-6-ip-10-0-197-197.ec2.internal Started container installer openshift-kube-scheduler-operator 48m Normal NodeTargetRevisionChanged deployment/openshift-kube-scheduler-operator Updating node "ip-10-0-140-6.ec2.internal" from revision 6 to 7 because node ip-10-0-140-6.ec2.internal with revision 6 is the oldest openshift-kube-scheduler-operator 48m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-7-ip-10-0-140-6.ec2.internal -n openshift-kube-scheduler because it was missing openshift-ingress 48m Warning ProbeError pod/router-default-699d8c97f-6nwwk Readiness probe error: HTTP probe failed with statuscode: 500... 
openshift-ingress 48m Warning Unhealthy pod/router-default-699d8c97f-6nwwk Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-kube-scheduler 48m Normal Started pod/installer-7-ip-10-0-140-6.ec2.internal Started container installer openshift-kube-scheduler 48m Normal Created pod/installer-7-ip-10-0-140-6.ec2.internal Created container installer openshift-kube-scheduler 48m Normal Pulled pod/installer-7-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 48m Normal AddedInterface pod/installer-7-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.27/23] from ovn-kubernetes openshift-console 48m Warning ProbeError pod/console-64949fc89-nhxbj Readiness probe error: Get "https://10.129.0.16:8443/health": dial tcp 10.129.0.16:8443: connect: connection refused... openshift-etcd-operator 48m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 2; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 2 nodes are at revision 5; 1 nodes are at revision 6",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 2; 2 nodes are at revision 5; 0 nodes have achieved new revision 6\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 nodes are at revision 6\nEtcdMembersAvailable: 3 members are available" openshift-etcd-operator 48m Normal NodeCurrentRevisionChanged deployment/etcd-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 2 to 6 because static pod is ready openshift-etcd-operator 48m Normal NodeTargetRevisionChanged deployment/etcd-operator Updating node "ip-10-0-140-6.ec2.internal" from revision 5 to 6 because node ip-10-0-140-6.ec2.internal with revision 5 is the oldest openshift-etcd 48m Normal Pulled pod/installer-6-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 48m Normal AddedInterface pod/installer-6-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.28/23] from ovn-kubernetes openshift-etcd-operator 48m Normal PodCreated deployment/etcd-operator Created Pod/installer-6-ip-10-0-140-6.ec2.internal -n openshift-etcd because it was missing openshift-etcd 48m Normal Started pod/installer-6-ip-10-0-140-6.ec2.internal Started container installer openshift-etcd 48m Normal Created pod/installer-6-ip-10-0-140-6.ec2.internal Created container installer openshift-oauth-apiserver 48m Warning Unhealthy pod/apiserver-9b9694fdc-sl5wc Readiness probe failed: Get "https://10.128.0.35:8443/readyz": dial tcp 10.128.0.35:8443: connect: connection refused openshift-apiserver 48m Warning Unhealthy pod/apiserver-6977bc9f6b-6c47k Readiness probe failed: Get "https://10.130.0.53:8443/readyz": dial tcp 10.130.0.53:8443: connect: connection refused openshift-oauth-apiserver 48m Warning ProbeError pod/apiserver-9b9694fdc-sl5wc Readiness probe error: Get "https://10.128.0.35:8443/readyz": dial tcp 10.128.0.35:8443: connect: connection refused... 
openshift-apiserver 48m Warning ProbeError pod/apiserver-6977bc9f6b-6c47k Readiness probe error: Get "https://10.130.0.53:8443/readyz": dial tcp 10.130.0.53:8443: connect: connection refused... openshift-authentication 48m Normal Created pod/oauth-openshift-86966797f8-sbdp5 Created container oauth-openshift openshift-authentication 48m Normal Started pod/oauth-openshift-86966797f8-sbdp5 Started container oauth-openshift openshift-authentication 48m Normal Pulled pod/oauth-openshift-86966797f8-sbdp5 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" already present on machine openshift-authentication 48m Normal AddedInterface pod/oauth-openshift-86966797f8-sbdp5 Add eth0 [10.129.0.22/23] from ovn-kubernetes openshift-console 48m Warning Unhealthy pod/console-7dc48fc574-4kqrk Readiness probe failed: Get "https://10.130.0.43:8443/health": dial tcp 10.130.0.43:8443: connect: connection refused openshift-kube-controller-manager 48m Normal Killing pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Stopping container kube-controller-manager openshift-kube-controller-manager 48m Normal Killing pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Stopping container kube-controller-manager-cert-syncer openshift-kube-controller-manager 48m Normal StaticPodInstallerCompleted pod/installer-6-ip-10-0-197-197.ec2.internal Successfully installed revision 6 openshift-kube-controller-manager 48m Normal Killing pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Stopping container cluster-policy-controller openshift-kube-controller-manager 48m Normal Killing pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Stopping container kube-controller-manager-recovery-controller openshift-kube-controller-manager-operator 48m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: st *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: W0321 12:24:33.856647 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: E0321 12:24:33.856696 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: W0321 12:25:11.129064 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: E0321 12:25:11.129095 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: W0321 12:25:14.933675 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: E0321 12:25:14.933730 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: " openshift-console 48m Warning ProbeError pod/console-7dc48fc574-4kqrk Readiness probe error: Get "https://10.130.0.43:8443/health": dial tcp 10.130.0.43:8443: connect: connection refused... 
openshift-kube-scheduler 48m Normal StaticPodInstallerCompleted pod/installer-7-ip-10-0-140-6.ec2.internal Successfully installed revision 7 openshift-kube-scheduler-operator 48m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: ailed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:24:49.047504 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:24:49.047539 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:25:12.768903 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:25:12.768941 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:25:43.536901 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:25:43.536956 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-authentication-operator 48m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-9b9694fdc-sl5wc pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup 
oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" openshift-kube-controller-manager-operator 48m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: st *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: W0321 12:24:33.856647 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: E0321 12:24:33.856696 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: W0321 12:25:11.129064 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: E0321 12:25:11.129095 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: W0321 12:25:14.933675 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": x509: certificate signed by unknown authority\nStaticPodsDegraded: E0321 12:25:14.933730 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": x509: certificate signed by unknown 
authority\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused" openshift-kube-scheduler 48m Normal Killing pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Stopping container kube-scheduler openshift-kube-scheduler 48m Normal Killing pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Stopping container kube-scheduler-recovery-controller openshift-kube-scheduler 48m Normal Killing pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Stopping container kube-scheduler-cert-syncer openshift-kube-controller-manager-operator 48m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.219.179:9091: connect: connection refused\nGuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal" openshift-kube-controller-manager-operator 48m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal\nNodeControllerDegraded: All master nodes are ready") openshift-authentication-operator 48m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-8ddbf84fd-4jwnk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" openshift-oauth-apiserver 48m Warning FailedCreatePodSandBox pod/apiserver-8ddbf84fd-4jwnk Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-8ddbf84fd-4jwnk_openshift-oauth-apiserver_30d34ad7-50c9-411e-940c-cdfac0f10ddd_0(5b822c1f8cf344f7b9135a67013440db59949af12822305206a93b634fb4a54a): error adding pod openshift-oauth-apiserver_apiserver-8ddbf84fd-4jwnk to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: 
[openshift-oauth-apiserver/apiserver-8ddbf84fd-4jwnk/30d34ad7-50c9-411e-940c-cdfac0f10ddd]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-oauth-apiserver/pods/apiserver-8ddbf84fd-4jwnk?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-oauth-apiserver 48m Warning FailedCreatePodSandBox pod/apiserver-8ddbf84fd-4jwnk Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-8ddbf84fd-4jwnk_openshift-oauth-apiserver_30d34ad7-50c9-411e-940c-cdfac0f10ddd_0(d8bf65ac50fdc0381d16af8f30929bf86c48c284eed8ed94489c149b565f7ed6): error adding pod openshift-oauth-apiserver_apiserver-8ddbf84fd-4jwnk to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-oauth-apiserver/apiserver-8ddbf84fd-4jwnk/30d34ad7-50c9-411e-940c-cdfac0f10ddd]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-oauth-apiserver/pods/apiserver-8ddbf84fd-4jwnk?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-etcd 48m Normal StaticPodInstallerCompleted pod/installer-6-ip-10-0-140-6.ec2.internal Successfully installed revision 6 openshift-etcd 48m Warning Unhealthy pod/etcd-guard-ip-10-0-140-6.ec2.internal Readiness probe failed: Get "https://10.0.140.6:9980/healthz": dial tcp 10.0.140.6:9980: connect: connection refused openshift-etcd 48m Normal Killing pod/etcd-ip-10-0-140-6.ec2.internal Stopping container etcd-metrics openshift-etcd 48m Normal Killing pod/etcd-ip-10-0-140-6.ec2.internal Stopping container etcd-readyz openshift-etcd 48m Normal Killing pod/etcd-ip-10-0-140-6.ec2.internal Stopping container etcd openshift-marketplace 48m Normal Pulled pod/certified-operators-5mh29 Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.12" in 242.143101ms (242.150954ms including waiting) openshift-marketplace 48m Normal Pulling pod/certified-operators-5mh29 Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.12" openshift-marketplace 48m Normal AddedInterface pod/certified-operators-5mh29 Add eth0 [10.128.0.30/23] from ovn-kubernetes openshift-etcd 48m Normal Killing pod/etcd-ip-10-0-140-6.ec2.internal Stopping container etcdctl openshift-marketplace 48m Normal Started pod/certified-operators-5mh29 Started container registry-server openshift-kube-controller-manager 48m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" already present on machine openshift-kube-controller-manager 48m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 48m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container cluster-policy-controller openshift-kube-scheduler 48m Warning ProbeError pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal Readiness probe error: Get "https://10.0.140.6:10259/healthz": dial tcp 10.0.140.6:10259: connect: connection refused... 
openshift-kube-controller-manager 48m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container cluster-policy-controller openshift-marketplace 48m Normal Created pod/certified-operators-5mh29 Created container registry-server openshift-kube-scheduler 48m Warning Unhealthy pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal Readiness probe failed: Get "https://10.0.140.6:10259/healthz": dial tcp 10.0.140.6:10259: connect: connection refused openshift-kube-controller-manager 48m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 48m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container kube-controller-manager-cert-syncer openshift-kube-controller-manager 48m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container kube-controller-manager-cert-syncer openshift-kube-controller-manager 48m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 48m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container kube-controller-manager-recovery-controller openshift-kube-controller-manager 48m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container kube-controller-manager-recovery-controller kube-system 48m Normal LeaderElection configmap/kube-controller-manager ip-10-0-140-6_08f843a7-cf44-4f28-943a-0569a92438c8 became leader kube-system 48m Normal LeaderElection lease/kube-controller-manager ip-10-0-140-6_08f843a7-cf44-4f28-943a-0569a92438c8 became leader openshift-kube-scheduler 48m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container wait-for-host-port openshift-kube-scheduler 48m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container wait-for-host-port openshift-kube-scheduler 48m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-console 48m Warning ProbeError pod/console-7dc48fc574-fvlls Readiness probe error: Get "https://10.128.0.25:8443/health": dial tcp 10.128.0.25:8443: connect: connection refused... 
openshift-console 48m Warning Unhealthy pod/console-7dc48fc574-fvlls Readiness probe failed: Get "https://10.128.0.25:8443/health": dial tcp 10.128.0.25:8443: connect: connection refused openshift-kube-controller-manager-operator 48m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal\nNodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" openshift-kube-scheduler 48m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container kube-scheduler-cert-syncer openshift-kube-controller-manager 48m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager openshift-kube-controller-manager 48m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager openshift-kube-controller-manager 48m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 48m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container kube-scheduler-recovery-controller openshift-kube-scheduler 48m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 48m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container kube-scheduler-cert-syncer openshift-kube-scheduler 48m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 48m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-controller-manager 48m Normal LeaderElection configmap/cert-recovery-controller-lock ip-10-0-197-197_df285e02-8882-4186-ba15-fc542d40f6f1 became leader openshift-kube-controller-manager 48m Normal LeaderElection lease/cert-recovery-controller-lock ip-10-0-197-197_df285e02-8882-4186-ba15-fc542d40f6f1 became leader 
openshift-kube-scheduler 48m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container kube-scheduler openshift-kube-scheduler 48m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container kube-scheduler openshift-kube-scheduler 48m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container kube-scheduler-recovery-controller openshift-etcd 48m Warning ProbeError pod/etcd-ip-10-0-140-6.ec2.internal Readiness probe error: Get "https://10.0.140.6:9980/readyz": dial tcp 10.0.140.6:9980: connect: connection refused... openshift-kube-scheduler-operator 48m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: ailed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:24:49.047504 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:24:49.047539 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:25:12.768903 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:25:12.768941 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:25:43.536901 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:25:43.536956 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-scheduler/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-kube-controller-manager 48m 
Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-197-197.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-kube-controller-manager-operator 48m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal\nNodeControllerDegraded: All master nodes are ready" openshift-marketplace 47m Normal Killing pod/certified-operators-5mh29 Stopping container registry-server openshift-oauth-apiserver 47m Warning FailedCreatePodSandBox pod/apiserver-8ddbf84fd-4jwnk Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-8ddbf84fd-4jwnk_openshift-oauth-apiserver_30d34ad7-50c9-411e-940c-cdfac0f10ddd_0(da6c7685977bcaf42a689c75e9be39e0d636c83a27d191d4075bb1d1609435ea): error adding pod openshift-oauth-apiserver_apiserver-8ddbf84fd-4jwnk to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-oauth-apiserver/apiserver-8ddbf84fd-4jwnk/30d34ad7-50c9-411e-940c-cdfac0f10ddd]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-oauth-apiserver/pods/apiserver-8ddbf84fd-4jwnk?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-kube-controller-manager-operator 47m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 4; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" to "NodeInstallerProgressing: 2 nodes are at revision 5; 1 nodes are at revision 6",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 4; 2 nodes are at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 nodes are at revision 6" openshift-kube-controller-manager-operator 47m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-197-197.ec2.internal" from revision 4 to 6 because static pod is ready openshift-etcd-operator 47m Normal OperatorStatusChanged deployment/etcd-operator Status for 
clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" openshift-authentication-operator 47m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-authentication-operator 47m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded changed from True to False ("APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-8ddbf84fd-4jwnk pod)\nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)") openshift-authentication-operator 47m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-8ddbf84fd-4jwnk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-8ddbf84fd-4jwnk pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": dial tcp: lookup oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org on 172.30.0.10:53: no such host (this is likely result of malfunctioning DNS server)\nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint 
https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-etcd-operator 47m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 nodes are at revision 6\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 nodes are at revision 6\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" openshift-apiserver 47m Warning FailedCreatePodSandBox pod/apiserver-7475f65d84-whqlh Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_apiserver-7475f65d84-whqlh_openshift-apiserver_a354a412-28b4-4fae-9daf-87d334f3bda4_0(bf44515b83ce339f694d8b29dc872f4f5c4c38b5eeda575bfc71055cb34328db): error adding pod openshift-apiserver_apiserver-7475f65d84-whqlh to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-apiserver/apiserver-7475f65d84-whqlh/a354a412-28b4-4fae-9daf-87d334f3bda4]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-apiserver/pods/apiserver-7475f65d84-whqlh?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-apiserver 47m Normal Started pod/apiserver-7475f65d84-whqlh Started container fix-audit-permissions openshift-apiserver 47m Normal Created pod/apiserver-7475f65d84-whqlh Created container fix-audit-permissions openshift-apiserver 47m Normal Pulled pod/apiserver-7475f65d84-whqlh Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-apiserver 47m Normal AddedInterface pod/apiserver-7475f65d84-whqlh Add eth0 [10.130.0.50/23] from ovn-kubernetes openshift-apiserver 47m Normal Started pod/apiserver-7475f65d84-whqlh Started container openshift-apiserver openshift-apiserver 47m Normal Created pod/apiserver-7475f65d84-whqlh Created container openshift-apiserver-check-endpoints openshift-apiserver 47m Normal Started pod/apiserver-7475f65d84-whqlh Started container openshift-apiserver-check-endpoints openshift-apiserver 47m Normal Pulled pod/apiserver-7475f65d84-whqlh Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-apiserver 47m Normal Created pod/apiserver-7475f65d84-whqlh Created container openshift-apiserver openshift-apiserver 47m Normal Pulled pod/apiserver-7475f65d84-whqlh Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-apiserver 47m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-apiserver 47m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling kube-system 47m Normal LeaderElection configmap/kube-controller-manager 
ip-10-0-239-132_0196b51d-4a4b-4422-be6e-06aa6cfe78c5 became leader kube-system 47m Normal LeaderElection lease/kube-controller-manager ip-10-0-239-132_0196b51d-4a4b-4422-be6e-06aa6cfe78c5 became leader openshift-kube-controller-manager 47m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager openshift-kube-controller-manager 47m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-controller-manager 47m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager openshift-etcd-operator 47m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "StaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd\" started at 2023-03-21 12:20:09 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" openshift-kube-controller-manager-operator 47m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 5 to 6 because node ip-10-0-239-132.ec2.internal with revision 5 is the oldest openshift-oauth-apiserver 47m Normal Started pod/apiserver-8ddbf84fd-4jwnk Started container fix-audit-permissions openshift-oauth-apiserver 47m Normal AddedInterface pod/apiserver-8ddbf84fd-4jwnk Add eth0 [10.128.0.29/23] from ovn-kubernetes openshift-oauth-apiserver 47m Normal Created pod/apiserver-8ddbf84fd-4jwnk Created container fix-audit-permissions openshift-oauth-apiserver 47m Normal Pulled pod/apiserver-8ddbf84fd-4jwnk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-oauth-apiserver 47m Normal Created pod/apiserver-8ddbf84fd-4jwnk Created container oauth-apiserver openshift-authentication-operator 47m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-8ddbf84fd-4jwnk pod)\nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in pending apiserver-8ddbf84fd-4jwnk pod)\nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check 
kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-oauth-apiserver 47m Normal Started pod/apiserver-8ddbf84fd-4jwnk Started container oauth-apiserver openshift-oauth-apiserver 47m Normal Pulled pod/apiserver-8ddbf84fd-4jwnk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-authentication-operator 47m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in pending apiserver-8ddbf84fd-4jwnk pod)\nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-kube-controller-manager-operator 47m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-6-ip-10-0-239-132.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-marketplace 47m Warning FailedCreatePodSandBox pod/redhat-marketplace-xhp6s Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_redhat-marketplace-xhp6s_openshift-marketplace_c871a9fb-0594-4c78-a0ed-e4aad6b152bc_0(db07dca6bc6bb80e0828d9880b07a780cc988671424cab85f6b6e27724a330da): error adding pod openshift-marketplace_redhat-marketplace-xhp6s to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-marketplace/redhat-marketplace-xhp6s/c871a9fb-0594-4c78-a0ed-e4aad6b152bc]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-xhp6s?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-etcd-operator 47m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "StaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd\" started at 2023-03-21 12:20:09 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "StaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd-metrics\" is terminated: Error: \"etcdmain/grpc_proxy.go:558\",\"msg\":\"gRPC proxy listening for metrics\",\"address\":\"https://0.0.0.0:9979\"}\nStaticPodsDegraded: 
{\"level\":\"info\",\"ts\":\"2023-03-21T12:20:09.595Z\",\"caller\":\"etcdmain/grpc_proxy.go:261\",\"msg\":\"started gRPC proxy\",\"address\":\"127.0.0.1:9977\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:09.595Z\",\"caller\":\"etcdmain/main.go:44\",\"msg\":\"notifying init daemon\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:09.595Z\",\"caller\":\"etcdmain/main.go:50\",\"msg\":\"successfully notified init daemon\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:09.596Z\",\"caller\":\"etcdmain/grpc_proxy.go:251\",\"msg\":\"gRPC proxy server metrics URL serving\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.595Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.595Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000502dc0, IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.595Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.595Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel picks a new address \\\"10.0.140.6:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.595Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000502dc0, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.601Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.601Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000502dc0, READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.601Z\",\"call\nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcdctl\" is terminated: Error: \nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" openshift-marketplace 47m Normal Pulling pod/redhat-marketplace-xhp6s Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" openshift-marketplace 47m Normal Created pod/redhat-marketplace-xhp6s Created container registry-server openshift-marketplace 47m Normal AddedInterface pod/redhat-marketplace-xhp6s Add eth0 [10.128.0.32/23] from ovn-kubernetes openshift-marketplace 47m Normal Pulled pod/redhat-marketplace-xhp6s Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" in 234.359133ms (234.367001ms including waiting) openshift-kube-controller-manager 47m Warning FailedCreatePodSandBox pod/installer-6-ip-10-0-239-132.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-ip-10-0-239-132.ec2.internal_openshift-kube-controller-manager_f8e187d2-8f12-4186-bd05-ed87428b6797_0(1eab5e7461bacd1c764401046ee559e858726ae06338bd39dd160957c33b6c8b): error adding pod 
openshift-kube-controller-manager_installer-6-ip-10-0-239-132.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-controller-manager/installer-6-ip-10-0-239-132.ec2.internal/f8e187d2-8f12-4186-bd05-ed87428b6797]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-6-ip-10-0-239-132.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-marketplace 47m Normal Started pod/redhat-marketplace-xhp6s Started container registry-server openshift-kube-controller-manager 47m Warning FailedCreatePodSandBox pod/installer-6-ip-10-0-239-132.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-ip-10-0-239-132.ec2.internal_openshift-kube-controller-manager_f8e187d2-8f12-4186-bd05-ed87428b6797_0(72d61fab4fb9c5438e1ca8d11f8e11014c8363c906ae765eda225bded876bda7): error adding pod openshift-kube-controller-manager_installer-6-ip-10-0-239-132.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-controller-manager/installer-6-ip-10-0-239-132.ec2.internal/f8e187d2-8f12-4186-bd05-ed87428b6797]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-6-ip-10-0-239-132.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-etcd 47m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container setup openshift-etcd-operator 47m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "StaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd-metrics\" is terminated: Error: \"etcdmain/grpc_proxy.go:558\",\"msg\":\"gRPC proxy listening for metrics\",\"address\":\"https://0.0.0.0:9979\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:09.595Z\",\"caller\":\"etcdmain/grpc_proxy.go:261\",\"msg\":\"started gRPC proxy\",\"address\":\"127.0.0.1:9977\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:09.595Z\",\"caller\":\"etcdmain/main.go:44\",\"msg\":\"notifying init daemon\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:09.595Z\",\"caller\":\"etcdmain/main.go:50\",\"msg\":\"successfully notified init daemon\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:09.596Z\",\"caller\":\"etcdmain/grpc_proxy.go:251\",\"msg\":\"gRPC proxy server metrics URL serving\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.595Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.595Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000502dc0, IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.595Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.595Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel 
picks a new address \\\"10.0.140.6:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.595Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000502dc0, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.601Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.601Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000502dc0, READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:20:10.601Z\",\"call\nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcdctl\" is terminated: Error: \nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" openshift-etcd 47m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 47m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container setup openshift-etcd 47m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 47m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-ensure-env-vars openshift-etcd 47m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-ensure-env-vars openshift-etcd 47m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-resources-copy openshift-etcd 47m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 47m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-resources-copy openshift-etcd 47m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd openshift-etcd 47m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 47m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcdctl openshift-etcd 47m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcdctl openshift-etcd 47m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 47m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-readyz openshift-etcd 47m Normal Pulled 
pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 47m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-metrics openshift-etcd 47m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd openshift-etcd 47m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 47m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-readyz openshift-etcd 47m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-metrics openshift-marketplace 47m Normal Killing pod/redhat-marketplace-xhp6s Stopping container registry-server openshift-kube-controller-manager 47m Warning FailedCreatePodSandBox pod/installer-6-ip-10-0-239-132.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-ip-10-0-239-132.ec2.internal_openshift-kube-controller-manager_f8e187d2-8f12-4186-bd05-ed87428b6797_0(339f4d9ab896187ac8e3e874a12be26fe79d7bde636adb413108728a2465fca3): error adding pod openshift-kube-controller-manager_installer-6-ip-10-0-239-132.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-controller-manager/installer-6-ip-10-0-239-132.ec2.internal/f8e187d2-8f12-4186-bd05-ed87428b6797]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-6-ip-10-0-239-132.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-marketplace 47m Normal Pulling pod/redhat-operators-lf7xl Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.12" openshift-marketplace 47m Normal AddedInterface pod/redhat-operators-lf7xl Add eth0 [10.128.0.33/23] from ovn-kubernetes openshift-marketplace 47m Normal Pulled pod/redhat-operators-lf7xl Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.12" in 656.832303ms (656.840011ms including waiting) openshift-marketplace 47m Normal Created pod/redhat-operators-lf7xl Created container registry-server openshift-marketplace 47m Normal Started pod/redhat-operators-lf7xl Started container registry-server openshift-kube-scheduler-operator 47m Normal NodeCurrentRevisionChanged deployment/openshift-kube-scheduler-operator Updated node "ip-10-0-140-6.ec2.internal" from revision 6 to 7 because static pod is ready openshift-kube-scheduler-operator 47m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7" kube-system 47m Normal LeaderElection lease/kube-controller-manager ip-10-0-197-197_13f5914d-c221-48cb-9f24-95bacf123a58 became leader kube-system 47m Normal LeaderElection configmap/kube-controller-manager ip-10-0-197-197_13f5914d-c221-48cb-9f24-95bacf123a58 became leader openshift-kube-controller-manager 47m Normal Pulled 
pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-controller-manager 47m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container kube-controller-manager openshift-kube-controller-manager 47m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container kube-controller-manager openshift-etcd 47m Warning ProbeError pod/etcd-ip-10-0-140-6.ec2.internal Startup probe error: Get "https://10.0.140.6:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)... openshift-etcd 47m Warning Unhealthy pod/etcd-ip-10-0-140-6.ec2.internal Startup probe failed: Get "https://10.0.140.6:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) openshift-marketplace 47m Normal AddedInterface pod/community-operators-p676f Add eth0 [10.128.0.35/23] from ovn-kubernetes openshift-marketplace 47m Normal Killing pod/redhat-operators-lf7xl Stopping container registry-server openshift-marketplace 47m Normal Pulling pod/community-operators-p676f Pulling image "registry.redhat.io/redhat/community-operator-index:v4.12" openshift-marketplace 47m Normal Started pod/community-operators-p676f Started container registry-server openshift-marketplace 47m Normal Pulled pod/community-operators-p676f Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.12" in 766.759721ms (766.76703ms including waiting) openshift-marketplace 47m Normal Created pod/community-operators-p676f Created container registry-server openshift-kube-controller-manager 47m Warning FailedCreatePodSandBox pod/installer-6-ip-10-0-239-132.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-ip-10-0-239-132.ec2.internal_openshift-kube-controller-manager_f8e187d2-8f12-4186-bd05-ed87428b6797_0(e6bb6f578a29a51873fcc04b8fca1c35dca3ad365ce8a81d851f793c1509052b): error adding pod openshift-kube-controller-manager_installer-6-ip-10-0-239-132.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-controller-manager/installer-6-ip-10-0-239-132.ec2.internal/f8e187d2-8f12-4186-bd05-ed87428b6797]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-6-ip-10-0-239-132.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-etcd-operator 47m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 47m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 nodes are at revision 6\nEtcdMembersAvailable: 2 of 3 
members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 nodes are at revision 6\nEtcdMembersAvailable: 3 members are available" openshift-etcd-operator 46m Normal NodeCurrentRevisionChanged deployment/etcd-operator Updated node "ip-10-0-140-6.ec2.internal" from revision 5 to 6 because static pod is ready openshift-etcd-operator 46m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 5; 1 nodes are at revision 6" to "NodeInstallerProgressing: 1 nodes are at revision 5; 2 nodes are at revision 6",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 nodes are at revision 6\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 5; 2 nodes are at revision 6\nEtcdMembersAvailable: 3 members are available" openshift-marketplace 46m Normal Killing pod/community-operators-p676f Stopping container registry-server openshift-kube-controller-manager 46m Warning FailedCreatePodSandBox pod/installer-6-ip-10-0-239-132.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-ip-10-0-239-132.ec2.internal_openshift-kube-controller-manager_f8e187d2-8f12-4186-bd05-ed87428b6797_0(f6cc80b286673bd49fd35b74319ee37ac7b0c84398af48253fb9a55e3a422a08): error adding pod openshift-kube-controller-manager_installer-6-ip-10-0-239-132.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-controller-manager/installer-6-ip-10-0-239-132.ec2.internal/f8e187d2-8f12-4186-bd05-ed87428b6797]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-6-ip-10-0-239-132.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-kube-apiserver 46m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container setup openshift-kube-apiserver 46m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 46m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container setup openshift-kube-apiserver 46m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-controller-manager-operator 46m Warning InstallerPodFailed deployment/kube-controller-manager-operator Failed to create installer pod for revision 6 count 0 on node "ip-10-0-239-132.ec2.internal": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-6-ip-10-0-239-132.ec2.internal": dial tcp 172.30.0.1:443: connect: connection refused openshift-kube-apiserver 46m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver openshift-kube-apiserver 46m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-syncer openshift-kube-apiserver 46m Normal Pulled 
pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 46m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver openshift-kube-apiserver 46m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 46m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-insecure-readyz openshift-kube-apiserver 46m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 46m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-syncer openshift-kube-apiserver 46m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 46m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 46m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-insecure-readyz openshift-kube-apiserver 46m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 46m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-check-endpoints openshift-kube-apiserver 46m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-check-endpoints openshift-kube-controller-manager-operator 46m Warning InstallerPodFailed deployment/kube-controller-manager-operator Failed to create installer pod for revision 6 count 0 on node "ip-10-0-239-132.ec2.internal": pods "installer-6-ip-10-0-239-132.ec2.internal" is forbidden: User "system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator" cannot get resource "pods" in API group "" in the namespace "openshift-kube-controller-manager": RBAC: [clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "cluster-admin" not found, clusterrole.rbac.authorization.k8s.io "system:oauth-token-deleter" not found, clusterrole.rbac.authorization.k8s.io "basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:scc:restricted-v2" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "console-extensions-reader" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:webhook" not found, clusterrole.rbac.authorization.k8s.io 
"self-access-reviewer" not found, clusterrole.rbac.authorization.k8s.io "helm-chartrepos-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-docker" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found] openshift-kube-apiserver 46m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver 46m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 46m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found]\nNodeControllerDegraded: All master nodes are ready" openshift-kube-controller-manager-operator 46m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" 
cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found]\nNodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not 
found]\nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io 
\"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" kube-system 46m Normal LeaderElection lease/kube-controller-manager ip-10-0-239-132_408c625f-f53e-4209-8dd9-2ec5cc19fddb became leader kube-system 46m Normal LeaderElection configmap/kube-controller-manager ip-10-0-239-132_408c625f-f53e-4209-8dd9-2ec5cc19fddb became leader openshift-kube-controller-manager-operator 46m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found]\nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: 
\"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io 
\"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" to "GuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): namespaces \"openshift-infra\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"openshift-infra\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, 
clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nKubeControllerManagerStaticResourcesDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User 
\"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" openshift-kube-controller-manager 46m Warning FailedCreatePodSandBox pod/installer-6-ip-10-0-239-132.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-ip-10-0-239-132.ec2.internal_openshift-kube-controller-manager_f8e187d2-8f12-4186-bd05-ed87428b6797_0(adc4f3c65de103ec835560eb92b697970f59c92db0f909aeb4e79a7fdae0bedc): error adding pod openshift-kube-controller-manager_installer-6-ip-10-0-239-132.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-controller-manager/installer-6-ip-10-0-239-132.ec2.internal/f8e187d2-8f12-4186-bd05-ed87428b6797]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-6-ip-10-0-239-132.ec2.internal?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-kube-controller-manager-operator 46m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io 
\"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): namespaces \"openshift-infra\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"openshift-infra\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nKubeControllerManagerStaticResourcesDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is 
forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" 
not found]" to "InstallerControllerDegraded: pods \"installer-6-ip-10-0-239-132.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nGuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, 
clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): namespaces \"openshift-infra\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"openshift-infra\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, 
clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nKubeControllerManagerStaticResourcesDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, 
clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" openshift-kube-controller-manager-operator 46m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: pods \"installer-6-ip-10-0-239-132.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, 
clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nGuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): namespaces \"openshift-infra\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"openshift-infra\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, 
clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nKubeControllerManagerStaticResourcesDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" to "InstallerControllerDegraded: pods \"installer-6-ip-10-0-239-132.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nGuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User 
\"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found]\nStaticPodsDegraded: pods \"kube-controller-manager-ip-10-0-239-132.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found]\nStaticPodsDegraded: pods 
\"kube-controller-manager-ip-10-0-140-6.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): namespaces \"openshift-infra\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"openshift-infra\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nKubeControllerManagerStaticResourcesDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, 
clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" default 46m Normal RegisteredNode node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal event: Registered Node ip-10-0-140-6.ec2.internal in Controller default 46m Normal RegisteredNode node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal event: Registered Node ip-10-0-160-152.ec2.internal in Controller default 46m Normal RegisteredNode node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal event: Registered Node ip-10-0-239-132.ec2.internal in Controller openshift-ingress 46m Normal EnsuringLoadBalancer service/router-default Ensuring load balancer default 46m Normal RegisteredNode node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal event: Registered Node ip-10-0-232-8.ec2.internal in Controller openshift-etcd-operator 46m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": configmaps \"etcd-scripts\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-etcd\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" default 46m Normal RegisteredNode node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal event: Registered Node ip-10-0-197-197.ec2.internal in Controller 
openshift-kube-controller-manager-operator 46m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "InstallerControllerDegraded: pods \"installer-6-ip-10-0-239-132.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nGuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io 
\"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found]\nStaticPodsDegraded: pods \"kube-controller-manager-ip-10-0-239-132.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found]\nStaticPodsDegraded: pods \"kube-controller-manager-ip-10-0-140-6.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, 
clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): namespaces \"openshift-infra\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace 
\"openshift-infra\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nKubeControllerManagerStaticResourcesDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not 
found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" to "GuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io 
\"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found]\nStaticPodsDegraded: pods \"kube-controller-manager-ip-10-0-239-132.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found]\nStaticPodsDegraded: pods \"kube-controller-manager-ip-10-0-140-6.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, 
clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): namespaces \"openshift-infra\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"openshift-infra\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nKubeControllerManagerStaticResourcesDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: 
[clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" openshift-apiserver 46m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-6977bc9f6b to 0 from 1 openshift-apiserver 46m Normal SuccessfulCreate replicaset/apiserver-7475f65d84 Created pod: 
apiserver-7475f65d84-lm7x6 openshift-apiserver 46m Normal SuccessfulDelete replicaset/apiserver-6977bc9f6b Deleted pod: apiserver-6977bc9f6b-wgtnw openshift-dns 46m Warning TopologyAwareHintsDisabled service/dns-default Insufficient Node information: allocatable CPU or zone not specified on one or more nodes, addressType: IPv4 openshift-apiserver 46m Normal Killing pod/apiserver-6977bc9f6b-wgtnw Stopping container openshift-apiserver openshift-apiserver 46m Normal Killing pod/apiserver-6977bc9f6b-wgtnw Stopping container openshift-apiserver-check-endpoints openshift-etcd-operator 46m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": configmaps \"etcd-scripts\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-etcd\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-ingress 46m Normal EnsuredLoadBalancer service/router-default Ensured load balancer openshift-apiserver 46m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-7475f65d84 to 3 from 2 openshift-console 46m Normal ScalingReplicaSet deployment/console Scaled down replica set console-64949fc89 to 0 from 1 openshift-console 46m Normal SuccessfulDelete replicaset/console-64949fc89 Deleted pod: console-64949fc89-nhxbj openshift-authentication-operator 46m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, 
and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 1" openshift-authentication-operator 46m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 1" openshift-apiserver-operator 46m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("APIServicesAvailable: [Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.apps.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.authorization.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.build.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.image.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.project.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused, apiservices.apiregistration.k8s.io \"v1.quota.openshift.io\" is forbidden: User \"system:serviceaccount:openshift-apiserver-operator:openshift-apiserver-operator\" cannot get resource \"apiservices\" in API group \"apiregistration.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io 
\"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]]") openshift-kube-controller-manager-operator 46m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found]\nStaticPodsDegraded: pods \"kube-controller-manager-ip-10-0-239-132.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, 
clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found]\nStaticPodsDegraded: pods \"kube-controller-manager-ip-10-0-140-6.ec2.internal\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): namespaces \"openshift-infra\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"openshift-infra\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not 
found]\nKubeControllerManagerStaticResourcesDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, 
clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" to "GuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 
172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): namespaces \"openshift-infra\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"openshift-infra\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io 
\"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nKubeControllerManagerStaticResourcesDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: 
User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" openshift-console-operator 46m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Progressing changed from True to False ("All is well"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org returns '503 Service Unavailable'" to "RouteHealthAvailable: route not yet available, https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org returns '503 Service Unavailable'" openshift-kube-controller-manager-operator 46m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Unable to apply PodDisruptionBudget changes: poddisruptionbudgets.policy \"kube-controller-manager-guard-pdb\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, 
clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get 
\"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): namespaces \"openshift-infra\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"openshift-infra\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nKubeControllerManagerStaticResourcesDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io 
\"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" to "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): namespaces \"openshift-infra\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"openshift-infra\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io 
\"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nKubeControllerManagerStaticResourcesDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io 
\"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" openshift-kube-controller-manager-operator 46m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): namespaces \"openshift-infra\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"openshift-infra\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nKubeControllerManagerStaticResourcesDegraded: \nNodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, 
clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, 
clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" openshift-apiserver-operator 46m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-6977bc9f6b-wgtnw pod)",Progressing changed from True to False ("All is well") openshift-authentication-operator 46m Normal OperatorVersionChanged deployment/authentication-operator clusteroperator/authentication version "oauth-openshift" changed from "" to "4.13.0-rc.0_openshift" openshift-authentication-operator 46m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.13.0-rc.0"} {"oauth-apiserver" "4.13.0-rc.0"}] to [{"operator" "4.13.0-rc.0"} {"oauth-apiserver" "4.13.0-rc.0"} {"oauth-openshift" "4.13.0-rc.0_openshift"}] openshift-etcd-operator 46m Normal NodeTargetRevisionChanged deployment/etcd-operator Updating node "ip-10-0-197-197.ec2.internal" from revision 5 to 6 because node ip-10-0-197-197.ec2.internal with revision 5 is the oldest openshift-authentication-operator 46m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: 
deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://10.0.197.197:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" openshift-authentication-operator 46m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 1" to "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 1" openshift-kube-controller-manager 46m Normal Created pod/installer-6-ip-10-0-239-132.ec2.internal Created container installer openshift-kube-controller-manager 46m Normal Pulled pod/installer-6-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 46m Normal AddedInterface pod/installer-6-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.23/23] from ovn-kubernetes openshift-etcd-operator 46m Normal PodCreated deployment/etcd-operator Created Pod/installer-6-ip-10-0-197-197.ec2.internal -n openshift-etcd because it was missing openshift-kube-controller-manager-operator 46m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap/recycler-config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps/recycler-config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-intermediate-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-signer-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/csr-controller-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager-operator/configmaps/csr-controller-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"secrets/csr-signer\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/secrets/csr-signer\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": configmaps \"serviceaccount-ca\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io 
\"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:scc:restricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"console-extensions-reader\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:webhook\" not found, clusterrole.rbac.authorization.k8s.io \"self-access-reviewer\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:oauth-token-deleter\" not found, clusterrole.rbac.authorization.k8s.io \"basic-user\" not found]" to "NodeControllerDegraded: All master nodes are ready" openshift-kube-controller-manager 46m Normal Started pod/installer-6-ip-10-0-239-132.ec2.internal Started container installer openshift-etcd 46m Normal Pulled pod/installer-6-ip-10-0-197-197.ec2.internal Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 46m Normal Created pod/installer-6-ip-10-0-197-197.ec2.internal Created container installer openshift-etcd 46m Normal Started pod/installer-6-ip-10-0-197-197.ec2.internal Started container installer openshift-etcd 46m Normal AddedInterface pod/installer-6-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.51/23] from ovn-kubernetes openshift-kube-apiserver-operator 46m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 0; 0 nodes have achieved new revision 9" to "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 9",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 9") openshift-apiserver 46m Warning ProbeError pod/apiserver-6977bc9f6b-wgtnw Readiness probe error: HTTP probe failed with statuscode: 500... openshift-apiserver 46m Warning Unhealthy pod/apiserver-6977bc9f6b-wgtnw Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-kube-apiserver-operator 46m Normal NodeCurrentRevisionChanged deployment/kube-apiserver-operator Updated node "ip-10-0-197-197.ec2.internal" from revision 0 to 9 because static pod is ready openshift-console-operator 46m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org returns '503 Service Unavailable'" to "All is well",Available changed from False to True ("All is well") openshift-kube-apiserver 46m Normal LeaderElection lease/cert-regeneration-controller-lock ip-10-0-197-197_43fcd0bc-52b0-49aa-96c9-7754e6235188 became leader openshift-apiserver 46m Warning Unhealthy pod/apiserver-6977bc9f6b-wgtnw Readiness probe failed: Get "https://10.128.0.40:8443/readyz": dial tcp 10.128.0.40:8443: connect: connection refused openshift-apiserver 46m Warning ProbeError pod/apiserver-6977bc9f6b-wgtnw Readiness probe error: Get "https://10.128.0.40:8443/readyz": dial tcp 10.128.0.40:8443: connect: connection refused... 
openshift-kube-apiserver-operator 46m Normal NodeTargetRevisionChanged deployment/kube-apiserver-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 0 to 9 because node ip-10-0-239-132.ec2.internal static pod not found openshift-kube-apiserver-operator 45m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-9-ip-10-0-239-132.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-controller-manager 45m Normal StaticPodInstallerCompleted pod/installer-6-ip-10-0-239-132.ec2.internal Successfully installed revision 6 openshift-etcd 45m Normal Killing pod/etcd-ip-10-0-197-197.ec2.internal Stopping container etcdctl openshift-kube-apiserver 45m Normal Pulled pod/installer-9-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-etcd 45m Normal Killing pod/etcd-ip-10-0-197-197.ec2.internal Stopping container etcd openshift-etcd 45m Normal StaticPodInstallerCompleted pod/installer-6-ip-10-0-197-197.ec2.internal Successfully installed revision 6 openshift-kube-apiserver 45m Normal Started pod/installer-9-ip-10-0-239-132.ec2.internal Started container installer openshift-etcd 45m Normal Killing pod/etcd-ip-10-0-197-197.ec2.internal Stopping container etcd-metrics openshift-kube-apiserver 45m Normal Created pod/installer-9-ip-10-0-239-132.ec2.internal Created container installer openshift-kube-apiserver 45m Normal AddedInterface pod/installer-9-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.24/23] from ovn-kubernetes openshift-etcd 45m Normal Killing pod/etcd-ip-10-0-197-197.ec2.internal Stopping container etcd-readyz openshift-kube-controller-manager 45m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container kube-controller-manager-recovery-controller openshift-kube-controller-manager 45m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container kube-controller-manager openshift-kube-controller-manager 45m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container cluster-policy-controller openshift-kube-controller-manager 45m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container kube-controller-manager-cert-syncer openshift-kube-controller-manager-operator 45m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: calhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:27:16.621329 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: 
connection refused\nStaticPodsDegraded: E0321 12:27:16.621589 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:27:30.351919 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:27:30.351983 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:28:03.280792 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:28:03.280858 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-kube-controller-manager 45m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 45m Warning ProbeError pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal Readiness probe error: Get "https://10.0.239.132:10257/healthz": dial tcp 10.0.239.132:10257: connect: connection refused... 
openshift-kube-controller-manager 45m Warning Unhealthy pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal Readiness probe failed: Get "https://10.0.239.132:10257/healthz": dial tcp 10.0.239.132:10257: connect: connection refused openshift-kube-controller-manager 45m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 45m Normal LeaderElection lease/cluster-policy-controller-lock ip-10-0-197-197_169587c4-f93a-4a78-b151-ffe07f64648c became leader openshift-kube-controller-manager 45m Normal LeaderElection configmap/cluster-policy-controller-lock ip-10-0-197-197_169587c4-f93a-4a78-b151-ffe07f64648c became leader openshift-kube-controller-manager 45m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-controller-manager 45m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager-recovery-controller openshift-kube-controller-manager 45m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 45m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager-cert-syncer openshift-kube-controller-manager 45m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager-cert-syncer openshift-kube-controller-manager 45m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 45m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager openshift-kube-controller-manager 45m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager-recovery-controller openshift-kube-controller-manager 45m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager openshift-kube-controller-manager 45m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 45m Warning ClusterInfrastructureStatus namespace/openshift-kube-controller-manager unable to get cluster infrastructure status, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused openshift-kube-controller-manager-operator 45m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: 
\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: calhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:27:16.621329 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:27:16.621589 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:27:30.351919 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:27:30.351983 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:28:03.280792 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:28:03.280858 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-authentication-operator 45m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded changed from False to True ("WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 1") openshift-kube-controller-manager-operator 45m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal\nNodeControllerDegraded: All master nodes are ready" kube-system 45m Normal LeaderElection lease/kube-controller-manager 
ip-10-0-140-6_728cfc9e-fa3e-4099-a103-c51475684d63 became leader kube-system 45m Normal LeaderElection configmap/kube-controller-manager ip-10-0-140-6_728cfc9e-fa3e-4099-a103-c51475684d63 became leader default 45m Normal RegisteredNode node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal event: Registered Node ip-10-0-197-197.ec2.internal in Controller default 45m Normal RegisteredNode node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal event: Registered Node ip-10-0-160-152.ec2.internal in Controller default 45m Normal RegisteredNode node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal event: Registered Node ip-10-0-232-8.ec2.internal in Controller openshift-ingress 45m Normal EnsuringLoadBalancer service/router-default Ensuring load balancer default 45m Normal RegisteredNode node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal event: Registered Node ip-10-0-140-6.ec2.internal in Controller default 45m Normal RegisteredNode node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal event: Registered Node ip-10-0-239-132.ec2.internal in Controller openshift-ingress 45m Normal EnsuredLoadBalancer service/router-default Ensured load balancer openshift-dns 45m Warning TopologyAwareHintsDisabled service/dns-default Insufficient Node information: allocatable CPU or zone not specified on one or more nodes, addressType: IPv4 openshift-kube-controller-manager-operator 45m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-etcd-operator 45m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "StaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd\" started at 2023-03-21 12:21:12 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 45m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "StaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd\" started at 2023-03-21 12:21:12 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "StaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd-metrics\" is terminated: Error: ain/grpc_proxy.go:558\",\"msg\":\"gRPC proxy listening for metrics\",\"address\":\"https://0.0.0.0:9979\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:12.653Z\",\"caller\":\"etcdmain/grpc_proxy.go:261\",\"msg\":\"started gRPC proxy\",\"address\":\"127.0.0.1:9977\"}\nStaticPodsDegraded: 
{\"level\":\"info\",\"ts\":\"2023-03-21T12:21:12.653Z\",\"caller\":\"etcdmain/main.go:44\",\"msg\":\"notifying init daemon\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:12.653Z\",\"caller\":\"etcdmain/main.go:50\",\"msg\":\"successfully notified init daemon\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:12.653Z\",\"caller\":\"etcdmain/grpc_proxy.go:251\",\"msg\":\"gRPC proxy server metrics URL serving\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.653Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.653Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00059c960, IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.653Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.653Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel picks a new address \\\"10.0.197.197:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.653Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00059c960, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.662Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.662Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00059c960, READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.662Z\",\"caller\":\nStaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcdctl\" is terminated: Error: \nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-apiserver 45m Normal Pulled pod/apiserver-7475f65d84-lm7x6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-apiserver 45m Normal AddedInterface pod/apiserver-7475f65d84-lm7x6 Add eth0 [10.128.0.36/23] from ovn-kubernetes openshift-apiserver 45m Normal Created pod/apiserver-7475f65d84-lm7x6 Created container openshift-apiserver openshift-apiserver 45m Normal Started pod/apiserver-7475f65d84-lm7x6 Started container openshift-apiserver openshift-apiserver 45m Normal Created pod/apiserver-7475f65d84-lm7x6 Created container openshift-apiserver-check-endpoints openshift-apiserver 45m Normal Pulled pod/apiserver-7475f65d84-lm7x6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-apiserver 45m Normal Pulled pod/apiserver-7475f65d84-lm7x6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-apiserver 45m Normal Created pod/apiserver-7475f65d84-lm7x6 Created container 
fix-audit-permissions openshift-apiserver-operator 45m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-6977bc9f6b-wgtnw pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7475f65d84-lm7x6 pod)" openshift-apiserver 45m Normal Started pod/apiserver-7475f65d84-lm7x6 Started container fix-audit-permissions openshift-apiserver 45m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-apiserver 45m Normal Started pod/apiserver-7475f65d84-lm7x6 Started container openshift-apiserver-check-endpoints openshift-apiserver 45m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 45m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-7475f65d84-lm7x6 pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7475f65d84-lm7x6 pod)" openshift-kube-apiserver 45m Normal StaticPodInstallerCompleted pod/installer-9-ip-10-0-239-132.ec2.internal Successfully installed revision 9 openshift-etcd 45m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container setup openshift-etcd 45m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-kube-apiserver 45m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container setup openshift-etcd 45m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container setup openshift-kube-apiserver-operator 45m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" openshift-kube-apiserver 45m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-etcd 45m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-kube-apiserver 45m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container setup openshift-apiserver-operator 45m 
Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-7475f65d84-lm7x6 pod)" to "All is well" openshift-kube-apiserver 45m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 45m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver openshift-etcd 45m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-ensure-env-vars openshift-kube-apiserver 45m Warning FastControllerResync pod/kube-apiserver-ip-10-0-239-132.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-etcd 45m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-ensure-env-vars openshift-kube-apiserver 45m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver-cert-syncer openshift-kube-apiserver 45m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver-cert-syncer openshift-etcd 45m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-kube-apiserver 45m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver openshift-kube-apiserver 45m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-etcd-operator 45m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "StaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd-metrics\" is terminated: Error: ain/grpc_proxy.go:558\",\"msg\":\"gRPC proxy listening for metrics\",\"address\":\"https://0.0.0.0:9979\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:12.653Z\",\"caller\":\"etcdmain/grpc_proxy.go:261\",\"msg\":\"started gRPC proxy\",\"address\":\"127.0.0.1:9977\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:12.653Z\",\"caller\":\"etcdmain/main.go:44\",\"msg\":\"notifying init daemon\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:12.653Z\",\"caller\":\"etcdmain/main.go:50\",\"msg\":\"successfully notified init daemon\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:12.653Z\",\"caller\":\"etcdmain/grpc_proxy.go:251\",\"msg\":\"gRPC proxy server metrics URL serving\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.653Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.653Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00059c960, 
IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.653Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.653Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel picks a new address \\\"10.0.197.197:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.653Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00059c960, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.662Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.662Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00059c960, READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:21:13.662Z\",\"caller\":\nStaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcdctl\" is terminated: Error: \nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-apiserver 45m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 45m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 45m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 45m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 45m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver-insecure-readyz openshift-kube-controller-manager 45m Warning ProbeError pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Startup probe error: Get "https://10.0.239.132:10357/healthz": dial tcp 10.0.239.132:10357: connect: connection refused... 
openshift-kube-controller-manager 45m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container cluster-policy-controller failed startup probe, will be restarted openshift-kube-apiserver 45m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver-check-endpoints openshift-etcd 45m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-kube-apiserver 45m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-controller-manager 45m Warning Unhealthy pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Startup probe failed: Get "https://10.0.239.132:10357/healthz": dial tcp 10.0.239.132:10357: connect: connection refused openshift-etcd 45m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-resources-copy openshift-kube-apiserver 45m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver-insecure-readyz openshift-kube-apiserver 45m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver-check-endpoints openshift-etcd 45m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-resources-copy openshift-etcd 45m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-metrics openshift-etcd 45m Warning Unhealthy pod/etcd-guard-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:9980/healthz": dial tcp 10.0.197.197:9980: connect: connection refused openshift-etcd 45m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcdctl openshift-etcd 45m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 45m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd openshift-etcd 45m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 45m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcdctl openshift-etcd 45m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-metrics openshift-etcd 45m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-readyz openshift-etcd 45m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd openshift-etcd 45m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 45m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-readyz openshift-kube-apiserver 45m Warning FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 45m Normal OperatorStatusChanged 
deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing PodIP in operand kube-apiserver-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal, Missing operand on node ip-10-0-140-6.ec2.internal]" to "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing PodIP in operand kube-apiserver-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal]" openshift-kube-apiserver 45m Warning FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-authentication-operator 45m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 1" to "WellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2" openshift-authentication-operator 45m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 1" to "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 2" openshift-kube-apiserver 45m Normal Created pod/kube-apiserver-guard-ip-10-0-239-132.ec2.internal Created container guard openshift-kube-apiserver 45m Normal AddedInterface pod/kube-apiserver-guard-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.26/23] from ovn-kubernetes openshift-kube-apiserver 45m Normal Pulled pod/kube-apiserver-guard-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 45m Normal Started pod/kube-apiserver-guard-ip-10-0-239-132.ec2.internal Started container guard openshift-kube-apiserver-operator 45m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: [Missing operand on node ip-10-0-140-6.ec2.internal, Missing PodIP in operand kube-apiserver-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal]" to "GuardControllerDegraded: Missing operand on node ip-10-0-140-6.ec2.internal" openshift-etcd-operator 45m Warning EtcdLeaderChangeMetrics deployment/etcd-operator Detected leader change increase of 2.2358190476190476 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-140-6.ec2.internal=0.006252,etcd-ip-10-0-197-197.ec2.internal=0.007628,etcd-ip-10-0-239-132.ec2.internal=0.004934. Most often this is as a result of inadequate storage or sometimes due to networking issues. 
openshift-etcd 45m Warning Unhealthy pod/etcd-guard-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:9980/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) openshift-kube-controller-manager 45m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-239-132.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-etcd 45m Warning ProbeError pod/etcd-guard-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:9980/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)... openshift-etcd 44m Warning ProbeError pod/etcd-ip-10-0-197-197.ec2.internal Startup probe error: Get "https://10.0.197.197:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)... openshift-etcd 44m Warning Unhealthy pod/etcd-ip-10-0-197-197.ec2.internal Startup probe failed: Get "https://10.0.197.197:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) openshift-etcd-operator 44m Normal NodeCurrentRevisionChanged deployment/etcd-operator Updated node "ip-10-0-197-197.ec2.internal" from revision 5 to 6 because static pod is ready openshift-etcd-operator 44m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 6\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 5; 2 nodes are at revision 6\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6\nEtcdMembersAvailable: 3 members are available" openshift-kube-controller-manager 44m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container cluster-policy-controller openshift-kube-controller-manager 44m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" already present on machine openshift-etcd-operator 44m Normal ConfigMapUpdated deployment/etcd-operator Updated ConfigMap/etcd-endpoints -n openshift-etcd: openshift-kube-controller-manager 44m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container cluster-policy-controller openshift-kube-controller-manager 44m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-239-132.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-etcd-operator 44m Normal PodCreated deployment/etcd-operator Created Pod/revision-pruner-6-ip-10-0-239-132.ec2.internal -n openshift-etcd because it was missing openshift-etcd 44m Normal Pulled pod/revision-pruner-6-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" 
already present on machine openshift-etcd 44m Normal AddedInterface pod/revision-pruner-6-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.27/23] from ovn-kubernetes openshift-etcd 44m Normal Started pod/revision-pruner-6-ip-10-0-239-132.ec2.internal Started container pruner openshift-etcd 44m Normal Created pod/revision-pruner-6-ip-10-0-239-132.ec2.internal Created container pruner openshift-etcd-operator 44m Normal PodCreated deployment/etcd-operator Created Pod/revision-pruner-6-ip-10-0-140-6.ec2.internal -n openshift-etcd because it was missing openshift-etcd 44m Normal Created pod/revision-pruner-6-ip-10-0-140-6.ec2.internal Created container pruner openshift-etcd 44m Normal Started pod/revision-pruner-6-ip-10-0-140-6.ec2.internal Started container pruner openshift-etcd 44m Normal Pulled pod/revision-pruner-6-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 44m Normal AddedInterface pod/revision-pruner-6-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.37/23] from ovn-kubernetes openshift-etcd 44m Normal Pulled pod/revision-pruner-6-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd-operator 44m Normal PodCreated deployment/etcd-operator Created Pod/revision-pruner-6-ip-10-0-197-197.ec2.internal -n openshift-etcd because it was missing openshift-etcd 44m Normal AddedInterface pod/revision-pruner-6-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.52/23] from ovn-kubernetes openshift-etcd 44m Normal Created pod/revision-pruner-6-ip-10-0-197-197.ec2.internal Created container pruner openshift-etcd 44m Normal Started pod/revision-pruner-6-ip-10-0-197-197.ec2.internal Started container pruner openshift-etcd-operator 44m Normal ConfigMapUpdated deployment/etcd-operator Updated ConfigMap/etcd-scripts -n openshift-etcd:... openshift-etcd-operator 44m Normal ConfigMapUpdated deployment/etcd-operator Updated ConfigMap/etcd-pod -n openshift-etcd:... openshift-etcd-operator 44m Normal RevisionTriggered deployment/etcd-operator new revision 7 triggered by "configmap/etcd-pod has changed" openshift-kube-controller-manager-operator 44m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 5; 1 nodes are at revision 6" to "NodeInstallerProgressing: 1 nodes are at revision 5; 2 nodes are at revision 6",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 5; 1 nodes are at revision 6" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 5; 2 nodes are at revision 6" openshift-etcd-operator 44m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/revision-status-7 -n openshift-etcd because it was missing openshift-kube-controller-manager-operator 44m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 5 to 6 because static pod is ready openshift-etcd-operator 44m Normal ConfigMapUpdated deployment/etcd-operator Updated ConfigMap/restore-etcd-pod -n openshift-etcd:... 
openshift-etcd-operator 44m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-pod-7 -n openshift-etcd because it was missing openshift-etcd-operator 44m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-serving-ca-7 -n openshift-etcd because it was missing openshift-etcd-operator 44m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-peer-client-ca-7 -n openshift-etcd because it was missing openshift-etcd-operator 44m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-serving-ca-7 -n openshift-etcd because it was missing openshift-kube-apiserver-operator 44m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 0; 1 nodes are at revision 9" to "NodeInstallerProgressing: 1 nodes are at revision 0; 2 nodes are at revision 9",Available message changed from "StaticPodsAvailable: 1 nodes are active; 2 nodes are at revision 0; 1 nodes are at revision 9" to "StaticPodsAvailable: 2 nodes are active; 1 nodes are at revision 0; 2 nodes are at revision 9" openshift-kube-apiserver-operator 44m Normal NodeCurrentRevisionChanged deployment/kube-apiserver-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 0 to 9 because static pod is ready openshift-etcd-operator 44m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-metrics-proxy-client-ca-7 -n openshift-etcd because it was missing openshift-etcd-operator 44m Normal SecretCreated deployment/etcd-operator Created Secret/etcd-all-certs-7 -n openshift-etcd because it was missing openshift-etcd-operator 44m Normal ConfigMapCreated deployment/etcd-operator Created ConfigMap/etcd-endpoints-7 -n openshift-etcd because it was missing openshift-etcd-operator 44m Normal RevisionCreate deployment/etcd-operator Revision 6 created because configmap/etcd-pod has changed openshift-etcd-operator 44m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 6; 0 nodes have achieved new revision 7"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6; 0 nodes have achieved new revision 7\nEtcdMembersAvailable: 3 members are available" openshift-kube-controller-manager-operator 44m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-140-6.ec2.internal" from revision 5 to 6 because node ip-10-0-140-6.ec2.internal with revision 5 is the oldest openshift-kube-apiserver-operator 44m Normal NodeTargetRevisionChanged deployment/kube-apiserver-operator Updating node "ip-10-0-140-6.ec2.internal" from revision 0 to 9 because node ip-10-0-140-6.ec2.internal static pod not found openshift-kube-controller-manager-operator 44m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-6-ip-10-0-140-6.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-etcd 44m Normal Pulled pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 44m Normal 
AddedInterface pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.28/23] from ovn-kubernetes openshift-kube-controller-manager 44m Normal AddedInterface pod/installer-6-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.40/23] from ovn-kubernetes openshift-etcd 44m Normal Started pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Started container pruner openshift-etcd 44m Normal Created pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Created container pruner openshift-kube-controller-manager 44m Normal Created pod/installer-6-ip-10-0-140-6.ec2.internal Created container installer openshift-kube-controller-manager 44m Normal Pulled pod/installer-6-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 44m Normal Started pod/installer-6-ip-10-0-140-6.ec2.internal Started container installer openshift-etcd 44m Normal Pulled pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 44m Normal AddedInterface pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.41/23] from ovn-kubernetes openshift-kube-apiserver 44m Normal Created pod/installer-9-ip-10-0-140-6.ec2.internal Created container installer openshift-kube-apiserver-operator 44m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-9-ip-10-0-140-6.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 44m Normal Pulled pod/installer-9-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 44m Normal Started pod/installer-9-ip-10-0-140-6.ec2.internal Started container installer openshift-etcd 44m Normal Created pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Created container pruner openshift-etcd 44m Normal Started pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Started container pruner openshift-kube-apiserver 44m Normal AddedInterface pod/installer-9-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.42/23] from ovn-kubernetes openshift-etcd-operator 44m Normal PodCreated deployment/etcd-operator Created Pod/revision-pruner-7-ip-10-0-197-197.ec2.internal -n openshift-etcd because it was missing openshift-etcd 44m Normal AddedInterface pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.53/23] from ovn-kubernetes openshift-etcd 44m Normal Pulled pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd-operator 44m Normal NodeTargetRevisionChanged deployment/etcd-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 6 to 7 because node ip-10-0-239-132.ec2.internal with revision 6 is the oldest openshift-etcd 44m Normal Started pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Started container pruner openshift-etcd 44m Normal Created pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Created container pruner openshift-etcd-operator 44m Normal PodCreated deployment/etcd-operator Created Pod/installer-7-ip-10-0-239-132.ec2.internal -n openshift-etcd because it 
was missing openshift-etcd 44m Normal Created pod/installer-7-ip-10-0-239-132.ec2.internal Created container installer openshift-etcd 44m Normal Pulled pod/installer-7-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 44m Normal Started pod/installer-7-ip-10-0-239-132.ec2.internal Started container installer openshift-etcd 44m Normal AddedInterface pod/installer-7-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.29/23] from ovn-kubernetes openshift-etcd-operator 44m Warning EtcdLeaderChangeMetrics deployment/etcd-operator Detected leader change increase of 2.2222304527053804 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-140-6.ec2.internal=0.006252,etcd-ip-10-0-197-197.ec2.internal=0.007628,etcd-ip-10-0-239-132.ec2.internal=0.004934. Most often this is as a result of inadequate storage or sometimes due to networking issues. openshift-operator-lifecycle-manager 44m Normal Pulled pod/collect-profiles-27990030-m4gbh Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" already present on machine openshift-operator-lifecycle-manager 44m Normal AddedInterface pod/collect-profiles-27990030-m4gbh Add eth0 [10.131.0.17/23] from ovn-kubernetes openshift-operator-lifecycle-manager 44m Normal Created pod/collect-profiles-27990030-m4gbh Created container collect-profiles openshift-operator-lifecycle-manager 44m Normal Started pod/collect-profiles-27990030-m4gbh Started container collect-profiles openshift-operator-lifecycle-manager 44m Normal SuccessfulCreate job/collect-profiles-27990030 Created pod: collect-profiles-27990030-m4gbh openshift-operator-lifecycle-manager 44m Normal SuccessfulCreate cronjob/collect-profiles Created job collect-profiles-27990030 openshift-operator-lifecycle-manager 43m Normal Completed job/collect-profiles-27990030 Job completed openshift-operator-lifecycle-manager 43m Normal SawCompletedJob cronjob/collect-profiles Saw completed job: collect-profiles-27990030, status: Complete openshift-kube-controller-manager 43m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container kube-controller-manager-recovery-controller openshift-kube-controller-manager 43m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container cluster-policy-controller openshift-kube-controller-manager 43m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container kube-controller-manager-cert-syncer openshift-kube-controller-manager-operator 43m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: 
W0321 12:29:24.669351 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:29:24.669412 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:29:28.614183 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:29:28.614250 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:30:03.545364 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:30:03.545405 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-kube-controller-manager 43m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container kube-controller-manager openshift-kube-controller-manager 43m Normal StaticPodInstallerCompleted pod/installer-6-ip-10-0-140-6.ec2.internal Successfully installed revision 6 openshift-etcd 43m Normal StaticPodInstallerCompleted pod/installer-7-ip-10-0-239-132.ec2.internal Successfully installed revision 7 openshift-etcd 43m Normal Killing pod/etcd-ip-10-0-239-132.ec2.internal Stopping container etcdctl openshift-kube-apiserver 43m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-etcd 43m Normal Killing pod/etcd-ip-10-0-239-132.ec2.internal Stopping container etcd-readyz openshift-kube-apiserver 43m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container setup openshift-kube-apiserver 43m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container setup openshift-etcd 43m Normal Killing pod/etcd-ip-10-0-239-132.ec2.internal Stopping container etcd-metrics openshift-kube-apiserver 43m Normal StaticPodInstallerCompleted pod/installer-9-ip-10-0-140-6.ec2.internal Successfully installed 
revision 9 openshift-kube-apiserver-operator 43m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "GuardControllerDegraded: Missing operand on node ip-10-0-140-6.ec2.internal" to "GuardControllerDegraded: Missing PodIP in operand kube-apiserver-ip-10-0-140-6.ec2.internal on node ip-10-0-140-6.ec2.internal" openshift-kube-apiserver 43m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 43m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver-cert-syncer openshift-kube-apiserver 43m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver-cert-syncer openshift-kube-apiserver 43m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 43m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver openshift-kube-apiserver 43m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 43m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver-check-endpoints openshift-kube-apiserver 43m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver-check-endpoints openshift-kube-apiserver 43m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver-insecure-readyz openshift-kube-apiserver 43m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 43m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 43m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver-insecure-readyz openshift-kube-apiserver 43m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 43m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 43m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver openshift-kube-controller-manager 43m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container cluster-policy-controller openshift-kube-apiserver 43m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 43m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-apiserver 43m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 43m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-controller-manager 43m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager openshift-kube-controller-manager 43m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager openshift-kube-controller-manager 43m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container cluster-policy-controller openshift-kube-controller-manager 43m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" already present on machine openshift-kube-controller-manager 43m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 43m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-140-6.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-authentication-operator 43m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") openshift-kube-controller-manager 43m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager-recovery-controller openshift-etcd 43m Warning ProbeError pod/etcd-guard-ip-10-0-239-132.ec2.internal Readiness probe error: Get "https://10.0.239.132:9980/healthz": dial tcp 10.0.239.132:9980: connect: connection refused... 
openshift-kube-controller-manager 43m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager-cert-syncer openshift-kube-controller-manager 43m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager-cert-syncer openshift-kube-controller-manager 43m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-authentication-operator 43m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded changed from True to False ("All is well") openshift-kube-controller-manager 43m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager-recovery-controller openshift-kube-controller-manager-operator 43m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:29:24.669351 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:29:24.669412 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:29:28.614183 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:29:28.614250 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/configmaps?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: W0321 12:30:03.545364 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: E0321 12:30:03.545405 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch 
*v1.Secret: failed to list *v1.Secret: Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/secrets?limit=500&resourceVersion=0\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-kube-apiserver 43m Normal AddedInterface pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.44/23] from ovn-kubernetes kube-system 43m Normal LeaderElection lease/kube-controller-manager ip-10-0-239-132_1370d47d-8644-4671-9b0e-1ba6c1dc0121 became leader kube-system 43m Normal LeaderElection configmap/kube-controller-manager ip-10-0-239-132_1370d47d-8644-4671-9b0e-1ba6c1dc0121 became leader openshift-kube-apiserver 43m Normal Pulled pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine default 43m Normal RegisteredNode node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal event: Registered Node ip-10-0-232-8.ec2.internal in Controller default 43m Normal RegisteredNode node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal event: Registered Node ip-10-0-197-197.ec2.internal in Controller default 43m Normal RegisteredNode node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal event: Registered Node ip-10-0-239-132.ec2.internal in Controller default 43m Normal RegisteredNode node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal event: Registered Node ip-10-0-160-152.ec2.internal in Controller default 43m Normal RegisteredNode node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal event: Registered Node ip-10-0-140-6.ec2.internal in Controller openshift-ingress 43m Normal EnsuringLoadBalancer service/router-default Ensuring load balancer openshift-kube-apiserver 43m Normal Started pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal Started container guard openshift-kube-apiserver 43m Normal Created pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal Created container guard openshift-ingress 43m Normal EnsuredLoadBalancer service/router-default Ensured load balancer openshift-dns 43m Warning TopologyAwareHintsDisabled service/dns-default Insufficient Node information: allocatable CPU or zone not specified on one or more nodes, addressType: IPv4 openshift-kube-apiserver-operator 43m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") openshift-kube-controller-manager-operator 43m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 6"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 5; 2 nodes are at revision 6" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6" openshift-kube-controller-manager-operator 43m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-140-6.ec2.internal" from revision 5 to 6 because static pod is 
ready openshift-etcd-operator 43m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "StaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd\" started at 2023-03-21 12:24:47 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 43m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "StaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd\" started at 2023-03-21 12:24:47 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "StaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd-metrics\" is terminated: Error: ,\"caller\":\"zapgrpc/zapgrpc.go:191\",\"msg\":\"[core] grpc: addrConn.createTransport failed to connect to {10.0.239.132:9978 10.0.239.132 0 }. Err: connection error: desc = \\\"transport: Error while dialing dial tcp 10.0.239.132:9978: connect: connection refused\\\"\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:04.107Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:04.107Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000522940, TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.860Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.860Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000522940, IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.860Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.860Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel picks a new address \\\"10.0.239.132:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.860Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000522940, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.866Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.867Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000522940, READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.867Z\",\"caller\":\nStaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal 
container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcdctl\" is terminated: Error: \nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-apiserver-operator 43m Normal CustomResourceDefinitionCreated deployment/kube-apiserver-operator Created CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io because it was missing openshift-apiserver-operator 43m Warning CustomResourceDefinitionCreateFailed deployment/openshift-apiserver-operator Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists openshift-etcd-operator 43m Warning EtcdLeaderChangeMetrics deployment/etcd-operator Detected leader change increase of 2.2222304527053804 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-140-6.ec2.internal=0.006231,etcd-ip-10-0-197-197.ec2.internal=0.006264,etcd-ip-10-0-239-132.ec2.internal=0.004448. Most often this is as a result of inadequate storage or sometimes due to networking issues. openshift-etcd 43m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 43m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 43m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container setup openshift-etcd 43m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container setup openshift-etcd 43m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-ensure-env-vars openshift-etcd 43m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 43m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-ensure-env-vars openshift-etcd-operator 43m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "StaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd-metrics\" is terminated: Error: ,\"caller\":\"zapgrpc/zapgrpc.go:191\",\"msg\":\"[core] grpc: addrConn.createTransport failed to connect to {10.0.239.132:9978 10.0.239.132 0 }. 
Err: connection error: desc = \\\"transport: Error while dialing dial tcp 10.0.239.132:9978: connect: connection refused\\\"\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:04.107Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:04.107Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000522940, TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.860Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.860Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000522940, IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.860Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.860Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel picks a new address \\\"10.0.239.132:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.860Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000522940, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.866Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.867Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000522940, READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:25:14.867Z\",\"caller\":\nStaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-239-132.ec2.internal container \"etcdctl\" is terminated: Error: \nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd 43m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-resources-copy openshift-etcd 43m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-resources-copy openshift-etcd 43m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 43m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcdctl openshift-etcd 43m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-readyz openshift-etcd 43m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-kube-controller-manager 43m Normal CreatedSCCRanges 
pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-addon-operator namespace openshift-etcd 43m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-metrics openshift-etcd 43m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-readyz openshift-etcd 43m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcdctl openshift-etcd 43m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 43m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd openshift-etcd 43m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd openshift-etcd 43m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 43m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-metrics openshift-kube-controller-manager 43m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-backplane-mobb namespace openshift-kube-controller-manager 43m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-backplane-cse namespace openshift-kube-controller-manager 43m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-backplane-tam namespace openshift-machine-api 43m Normal Create machine/qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2 Created Machine qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2 openshift-kube-controller-manager 43m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-backplane-srep namespace openshift-kube-controller-manager 43m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-cloud-ingress-operator namespace openshift-kube-controller-manager 43m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-backplane-cee namespace openshift-kube-controller-manager 43m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-backplane-csa namespace openshift-kube-controller-manager 43m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-backplane-csm namespace openshift-ingress 42m Normal Killing pod/router-default-699d8c97f-9xbcx Stopping container router openshift-monitoring 42m Normal ScalingReplicaSet deployment/prometheus-operator-admission-webhook Scaled down replica set prometheus-operator-admission-webhook-5c549f4449 to 1 from 2 openshift-monitoring 42m Normal ScalingReplicaSet deployment/prometheus-operator-admission-webhook Scaled up replica set prometheus-operator-admission-webhook-5c9b9d98cc to 2 from 1 openshift-monitoring 42m Normal SuccessfulDelete replicaset/prometheus-operator-admission-webhook-5c549f4449 Deleted pod: prometheus-operator-admission-webhook-5c549f4449-d5c7w openshift-monitoring 42m Normal SuccessfulCreate replicaset/prometheus-operator-admission-webhook-5c9b9d98cc Created pod: 
prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m openshift-monitoring 42m Normal SuccessfulCreate replicaset/prometheus-operator-admission-webhook-5c9b9d98cc Created pod: prometheus-operator-admission-webhook-5c9b9d98cc-nznt8 openshift-ingress 42m Normal SuccessfulCreate replicaset/router-default-7898b977d4 Created pod: router-default-7898b977d4-vhrfb openshift-ingress 42m Normal SuccessfulCreate replicaset/router-default-7898b977d4 Created pod: router-default-7898b977d4-l6kqr openshift-ingress 42m Normal ScalingReplicaSet deployment/router-default Scaled up replica set router-default-7898b977d4 to 1 openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-custom-domains-operator namespace openshift-ingress 42m Normal ScalingReplicaSet deployment/router-default Scaled up replica set router-default-7898b977d4 to 2 from 1 openshift-ingress 42m Normal ScalingReplicaSet deployment/router-default Scaled down replica set router-default-699d8c97f to 1 from 2 openshift-ingress 42m Normal SuccessfulDelete replicaset/router-default-699d8c97f Deleted pod: router-default-699d8c97f-9xbcx openshift-monitoring 42m Normal ScalingReplicaSet deployment/prometheus-operator-admission-webhook Scaled up replica set prometheus-operator-admission-webhook-5c9b9d98cc to 1 openshift-monitoring 42m Normal Killing pod/prometheus-operator-admission-webhook-5c549f4449-d5c7w Stopping container prometheus-operator-admission-webhook openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-deployment-validation-operator namespace openshift-ingress 42m Normal Pulled pod/router-default-7898b977d4-l6kqr Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd" already present on machine openshift-validation-webhook 42m Warning FailedMount pod/validation-webhook-dt8g2 MountVolume.SetUp failed for volume "service-certs" : secret "webhook-cert" not found openshift-monitoring 42m Normal AddedInterface pod/configure-alertmanager-operator-registry-ztskr Add eth0 [10.131.0.19/23] from ovn-kubernetes openshift-monitoring 42m Normal SuccessfulCreate daemonset/sre-dns-latency-exporter Created pod: sre-dns-latency-exporter-62rmk openshift-monitoring 42m Normal SuccessfulCreate daemonset/sre-dns-latency-exporter Created pod: sre-dns-latency-exporter-t9jjt openshift-validation-webhook 42m Normal SuccessfulCreate daemonset/validation-webhook Created pod: validation-webhook-p4gz5 openshift-ingress 42m Normal Started pod/router-default-7898b977d4-vhrfb Started container router openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-managed-node-metadata-operator namespace openshift-validation-webhook 42m Normal SuccessfulCreate daemonset/validation-webhook Created pod: validation-webhook-j7r6j openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-managed-upgrade-operator namespace openshift-machine-api 42m Normal Create machine/qeaisrhods-c13-28wr5-infra-us-east-1a-qww78 Created Machine qeaisrhods-c13-28wr5-infra-us-east-1a-qww78 openshift-ingress 42m Normal Created pod/router-default-7898b977d4-vhrfb Created container router openshift-validation-webhook 42m Warning FailedMount 
pod/validation-webhook-p4gz5 MountVolume.SetUp failed for volume "service-certs" : secret "webhook-cert" not found openshift-ingress 42m Normal Pulled pod/router-default-7898b977d4-vhrfb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd" already present on machine openshift-monitoring 42m Normal SuccessfulCreate daemonset/sre-dns-latency-exporter Created pod: sre-dns-latency-exporter-4j7vx openshift-ingress 42m Normal AddedInterface pod/router-default-7898b977d4-vhrfb Add eth0 [10.131.0.18/23] from ovn-kubernetes openshift-ingress 42m Normal Started pod/router-default-7898b977d4-l6kqr Started container router openshift-ingress 42m Normal Created pod/router-default-7898b977d4-l6kqr Created container router openshift-monitoring 42m Normal SuccessfulCreate daemonset/sre-dns-latency-exporter Created pod: sre-dns-latency-exporter-snmkd openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-validation-webhook namespace openshift-ingress 42m Normal AddedInterface pod/router-default-7898b977d4-l6kqr Add eth0 [10.128.2.19/23] from ovn-kubernetes openshift-monitoring 42m Normal SuccessfulCreate daemonset/sre-dns-latency-exporter Created pod: sre-dns-latency-exporter-fvnpq openshift-monitoring 42m Normal Pulling pod/configure-alertmanager-operator-registry-ztskr Pulling image "quay.io/app-sre/configure-alertmanager-operator-registry@sha256:4cd6cdcb961b519e306ff2ea3c276ef4037edb429e14df405bc3ccbed8531ac9" openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-velero namespace openshift-validation-webhook 42m Warning FailedMount pod/validation-webhook-j7r6j MountVolume.SetUp failed for volume "service-certs" : secret "webhook-cert" not found openshift-validation-webhook 42m Normal SuccessfulCreate daemonset/validation-webhook Created pod: validation-webhook-dt8g2 openshift-monitoring 42m Normal Pulling pod/sre-dns-latency-exporter-fvnpq Pulling image "quay.io/app-sre/managed-prometheus-exporter-base:latest" openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-ocm-agent-operator namespace openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-aqua namespace openshift-monitoring 42m Normal Pulling pod/sre-dns-latency-exporter-4j7vx Pulling image "quay.io/app-sre/managed-prometheus-exporter-base:latest" openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-backplane-managed-scripts namespace openshift-machine-api 42m Warning FailedUpdate machine/qeaisrhods-c13-28wr5-infra-us-east-1a-qww78 qeaisrhods-c13-28wr5-infra-us-east-1a-qww78: reconciler failed to Update machine: requeue in: 20s openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-observability-operator namespace openshift-validation-webhook 42m Normal AddedInterface pod/validation-webhook-j7r6j Add eth0 [10.128.0.46/23] from ovn-kubernetes openshift-monitoring 42m Normal Pulling pod/sre-dns-latency-exporter-t9jjt Pulling image "quay.io/app-sre/managed-prometheus-exporter-base:latest" 
openshift-monitoring 42m Normal AddedInterface pod/sre-dns-latency-exporter-62rmk Add eth0 [10.130.0.55/23] from ovn-kubernetes openshift-validation-webhook 42m Normal Pulling pod/validation-webhook-j7r6j Pulling image "quay.io/app-sre/managed-cluster-validating-webhooks@sha256:3b13c3a89da30c5fbfaf7529ec3175dd43053c508d4bd09c79ef369d53ecc023" openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-must-gather-operator namespace openshift-monitoring 42m Normal AddedInterface pod/sre-dns-latency-exporter-snmkd Add eth0 [10.128.2.20/23] from ovn-kubernetes openshift-monitoring 42m Normal AddedInterface pod/sre-dns-latency-exporter-4j7vx Add eth0 [10.131.0.20/23] from ovn-kubernetes openshift-validation-webhook 42m Normal Pulling pod/validation-webhook-dt8g2 Pulling image "quay.io/app-sre/managed-cluster-validating-webhooks@sha256:3b13c3a89da30c5fbfaf7529ec3175dd43053c508d4bd09c79ef369d53ecc023" openshift-validation-webhook 42m Normal AddedInterface pod/validation-webhook-dt8g2 Add eth0 [10.129.0.32/23] from ovn-kubernetes openshift-validation-webhook 42m Normal Pulling pod/validation-webhook-p4gz5 Pulling image "quay.io/app-sre/managed-cluster-validating-webhooks@sha256:3b13c3a89da30c5fbfaf7529ec3175dd43053c508d4bd09c79ef369d53ecc023" openshift-validation-webhook 42m Normal AddedInterface pod/validation-webhook-p4gz5 Add eth0 [10.130.0.54/23] from ovn-kubernetes openshift-monitoring 42m Normal AddedInterface pod/sre-dns-latency-exporter-t9jjt Add eth0 [10.128.0.47/23] from ovn-kubernetes openshift-monitoring 42m Normal Pulling pod/sre-dns-latency-exporter-62rmk Pulling image "quay.io/app-sre/managed-prometheus-exporter-base:latest" openshift-machine-api 42m Warning FailedUpdate machine/qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2 qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2: reconciler failed to Update machine: requeue in: 20s openshift-monitoring 42m Normal AddedInterface pod/sre-dns-latency-exporter-fvnpq Add eth0 [10.129.0.33/23] from ovn-kubernetes openshift-monitoring 42m Normal Pulling pod/sre-dns-latency-exporter-snmkd Pulling image "quay.io/app-sre/managed-prometheus-exporter-base:latest" openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-customer-monitoring namespace openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-codeready-workspaces namespace openshift-monitoring 42m Normal SuccessfulCreate job/osd-cluster-ready Created pod: osd-cluster-ready-thb5j openshift-monitoring 42m Normal Pulled pod/configure-alertmanager-operator-registry-ztskr Successfully pulled image "quay.io/app-sre/configure-alertmanager-operator-registry@sha256:4cd6cdcb961b519e306ff2ea3c276ef4037edb429e14df405bc3ccbed8531ac9" in 2.391977331s (2.391984406s including waiting) openshift-monitoring 42m Normal Created pod/configure-alertmanager-operator-registry-ztskr Created container registry-server openshift-ingress 42m Normal SuccessfulDelete replicaset/router-default-7898b977d4 Deleted pod: router-default-7898b977d4-vhrfb openshift-ingress 42m Normal SuccessfulCreate replicaset/router-default-75b548b966 Created pod: router-default-75b548b966-br22c openshift-monitoring 42m Normal DeploymentCreated deploymentconfig/sre-ebs-iops-reporter Created new replication controller "sre-ebs-iops-reporter-1" for version 1 
openshift-ingress 42m Normal SuccessfulCreate replicaset/router-default-75b548b966 Created pod: router-default-75b548b966-bd28g openshift-ingress 42m Normal ScalingReplicaSet deployment/router-default Scaled down replica set router-default-7898b977d4 to 0 from 2 openshift-etcd 42m Warning ProbeError pod/etcd-guard-ip-10-0-239-132.ec2.internal Readiness probe error: Get "https://10.0.239.132:9980/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)... openshift-ingress 42m Normal ScalingReplicaSet deployment/router-default Scaled up replica set router-default-75b548b966 to 2 from 0 openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-backplane namespace openshift-monitoring 42m Normal Started pod/configure-alertmanager-operator-registry-ztskr Started container registry-server openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-logging namespace openshift-ingress 42m Normal SuccessfulDelete replicaset/router-default-7898b977d4 Deleted pod: router-default-7898b977d4-l6kqr openshift-console 42m Normal SuccessfulCreate replicaset/console-569c4c4669 Created pod: console-569c4c4669-gdr7m openshift-console 42m Normal ScalingReplicaSet deployment/console Scaled down replica set console-7dc48fc574 to 1 from 2 openshift-monitoring 42m Normal DeploymentCreated deploymentconfig/sre-stuck-ebs-vols Created new replication controller "sre-stuck-ebs-vols-1" for version 1 openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-osd-metrics namespace openshift-ingress 42m Normal Killing pod/router-default-7898b977d4-l6kqr Stopping container router openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-operators-redhat namespace openshift-console 42m Normal SuccessfulDelete replicaset/console-7dc48fc574 Deleted pod: console-7dc48fc574-4kqrk openshift-authentication-operator 42m Normal ObservedConfigChanged deployment/authentication-operator Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n\u00a0\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.de\"...),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.qeaisrhods-c13.abmw.s1.devshift.org:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\n\u00a0\u00a0\t\t\t\"error\": strings.Join({\n\u00a0\u00a0\t\t\t\t\"/var/config/\",\n-\u00a0\t\t\t\t\"system/secrets/v4-0-config-system-ocp-branding-template\",\n+\u00a0\t\t\t\t\"user/template/secret/v4-0-config-user-template-error\",\n\u00a0\u00a0\t\t\t\t\"/errors.html\",\n\u00a0\u00a0\t\t\t}, \"\"),\n\u00a0\u00a0\t\t\t\"login\": strings.Join({\n\u00a0\u00a0\t\t\t\t\"/var/config/\",\n-\u00a0\t\t\t\t\"system/secrets/v4-0-config-system-ocp-branding-template\",\n+\u00a0\t\t\t\t\"user/template/secret/v4-0-config-user-template-login\",\n\u00a0\u00a0\t\t\t\t\"/login.html\",\n\u00a0\u00a0\t\t\t}, \"\"),\n\u00a0\u00a0\t\t\t\"providerSelection\": 
strings.Join({\n\u00a0\u00a0\t\t\t\t\"/var/config/\",\n-\u00a0\t\t\t\t\"system/secrets/v4-0-config-system-ocp-branding-template\",\n+\u00a0\t\t\t\t\"user/template/secret/v4-0-config-user-template-provider-selectio\",\n+\u00a0\t\t\t\t\"n\",\n\u00a0\u00a0\t\t\t\t\"/providers.html\",\n\u00a0\u00a0\t\t\t}, \"\"),\n\u00a0\u00a0\t\t},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.qeaisrhods-c13.abmw.s1.devshift.org\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" openshift-console 42m Normal ScalingReplicaSet deployment/console Scaled up replica set console-569c4c4669 to 2 openshift-authentication-operator 42m Normal ObserveTemplates deployment/authentication-operator templates changed to map["error":"/var/config/user/template/secret/v4-0-config-user-template-error/errors.html" "login":"/var/config/user/template/secret/v4-0-config-user-template-login/login.html" "providerSelection":"/var/config/user/template/secret/v4-0-config-user-template-provider-selection/providers.html"] openshift-console 42m Normal Killing pod/console-7dc48fc574-4kqrk Stopping container console openshift-console 42m Normal SuccessfulCreate replicaset/console-569c4c4669 Created pod: console-569c4c4669-p6rk8 openshift-monitoring 42m Normal AddedInterface pod/sre-stuck-ebs-vols-1-deploy Add eth0 [10.128.2.22/23] from ovn-kubernetes openshift-monitoring 42m Normal Pulled pod/sre-stuck-ebs-vols-1-deploy Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:60eaeef7848be3f86d22f5db7de382d1eac9e3d0cb24eca75b0cddc25f0baeda" in 678.096685ms (678.113636ms including waiting) default 42m Normal RenderedConfigGenerated machineconfigpool/worker rendered-worker-c37c7a9e551f049d382df8406f11fe9b successfully generated (release version: 4.13.0-rc.0, controller version: 40575b862f7bd42a2c40c8e6b7203cd4c29b0021) openshift-monitoring 42m Normal Pulling pod/osd-cluster-ready-thb5j Pulling image "quay.io/app-sre/osd-cluster-ready@sha256:f70aa8033565fc73c006acb9199845242b1f729cb5a407b5174cf22656b4e2d5" openshift-monitoring 42m Normal Pulling pod/sre-stuck-ebs-vols-1-deploy Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:60eaeef7848be3f86d22f5db7de382d1eac9e3d0cb24eca75b0cddc25f0baeda" openshift-monitoring 42m Normal Created pod/sre-dns-latency-exporter-snmkd Created container main openshift-monitoring 42m Normal Started pod/sre-dns-latency-exporter-snmkd Started container main openshift-monitoring 42m Normal Started pod/sre-dns-latency-exporter-4j7vx Started container 
main openshift-monitoring 42m Normal AddedInterface pod/osd-cluster-ready-thb5j Add eth0 [10.128.2.21/23] from ovn-kubernetes openshift-monitoring 42m Normal Pulling pod/sre-ebs-iops-reporter-1-deploy Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:60eaeef7848be3f86d22f5db7de382d1eac9e3d0cb24eca75b0cddc25f0baeda" openshift-monitoring 42m Normal AddedInterface pod/sre-ebs-iops-reporter-1-deploy Add eth0 [10.131.0.21/23] from ovn-kubernetes openshift-monitoring 42m Normal Created pod/sre-dns-latency-exporter-4j7vx Created container main openshift-ingress 42m Normal Killing pod/router-default-7898b977d4-vhrfb Stopping container router openshift-monitoring 42m Normal Pulled pod/sre-dns-latency-exporter-4j7vx Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-base:latest" in 4.591252919s (4.591266408s including waiting) openshift-console-operator 42m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: Changes made during sync updates, additional sync expected.") openshift-monitoring 42m Normal Pulled pod/sre-dns-latency-exporter-snmkd Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-base:latest" in 3.865429348s (3.865443254s including waiting) default 42m Normal RenderedConfigGenerated machineconfigpool/master rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 successfully generated (release version: 4.13.0-rc.0, controller version: 40575b862f7bd42a2c40c8e6b7203cd4c29b0021) openshift-validation-webhook 42m Normal Pulled pod/validation-webhook-dt8g2 Successfully pulled image "quay.io/app-sre/managed-cluster-validating-webhooks@sha256:3b13c3a89da30c5fbfaf7529ec3175dd43053c508d4bd09c79ef369d53ecc023" in 5.270930151s (5.270938896s including waiting) openshift-monitoring 42m Normal Pulled pod/sre-dns-latency-exporter-fvnpq Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-base:latest" in 5.577380045s (5.577389778s including waiting) openshift-monitoring 42m Normal SuccessfulCreate replicationcontroller/sre-ebs-iops-reporter-1 Created pod: sre-ebs-iops-reporter-1-x89c4 openshift-validation-webhook 42m Normal Created pod/validation-webhook-dt8g2 Created container webhooks openshift-monitoring 42m Normal Pulled pod/sre-ebs-iops-reporter-1-deploy Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:60eaeef7848be3f86d22f5db7de382d1eac9e3d0cb24eca75b0cddc25f0baeda" in 592.75188ms (592.771084ms including waiting) openshift-monitoring 42m Normal Created pod/sre-ebs-iops-reporter-1-deploy Created container deployment openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-rbac-permissions namespace openshift-ingress 42m Warning ProbeError pod/router-default-699d8c97f-9xbcx Readiness probe error: HTTP probe failed with statuscode: 500... 
openshift-image-registry 42m Normal ScalingReplicaSet deployment/image-registry Scaled down replica set image-registry-5588bdd7b4 to 1 from 2 openshift-monitoring 42m Normal Pulled pod/osd-cluster-ready-thb5j Successfully pulled image "quay.io/app-sre/osd-cluster-ready@sha256:f70aa8033565fc73c006acb9199845242b1f729cb5a407b5174cf22656b4e2d5" in 1.68295663s (1.682971048s including waiting) openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for dedicated-admin namespace openshift-image-registry 42m Normal ScalingReplicaSet deployment/image-registry Scaled up replica set image-registry-55b7d998b9 to 1 openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-strimzi namespace openshift-monitoring 42m Normal SuccessfulCreate replicationcontroller/sre-stuck-ebs-vols-1 Created pod: sre-stuck-ebs-vols-1-7pl6b openshift-console 42m Normal AddedInterface pod/console-569c4c4669-gdr7m Add eth0 [10.129.0.34/23] from ovn-kubernetes openshift-validation-webhook 42m Normal Started pod/validation-webhook-dt8g2 Started container webhooks openshift-image-registry 42m Normal SuccessfulCreate replicaset/image-registry-55b7d998b9 Created pod: image-registry-55b7d998b9-4mbwh openshift-monitoring 42m Normal Created pod/sre-dns-latency-exporter-fvnpq Created container main openshift-image-registry 42m Normal DeploymentUpdated deployment/cluster-image-registry-operator Updated Deployment.apps/image-registry -n openshift-image-registry because it changed openshift-monitoring 42m Normal Started pod/sre-dns-latency-exporter-fvnpq Started container main openshift-monitoring 42m Normal Started pod/sre-ebs-iops-reporter-1-deploy Started container deployment openshift-monitoring 42m Normal Created pod/sre-stuck-ebs-vols-1-deploy Created container deployment openshift-monitoring 42m Normal Pulling pod/sre-stuck-ebs-vols-1-7pl6b Pulling image "quay.io/app-sre/managed-prometheus-exporter-initcontainer:latest" openshift-monitoring 42m Normal Started pod/sre-stuck-ebs-vols-1-deploy Started container deployment openshift-monitoring 42m Normal SuccessfulCreate replicaset/token-refresher-5dbcf88876 Created pod: token-refresher-5dbcf88876-cbn8j openshift-monitoring 42m Normal AddedInterface pod/sre-stuck-ebs-vols-1-7pl6b Add eth0 [10.131.0.22/23] from ovn-kubernetes openshift-authentication-operator 42m Normal SecretCreated deployment/authentication-operator Created Secret/v4-0-config-user-template-provider-selection -n openshift-authentication because it was missing openshift-monitoring 42m Normal ScalingReplicaSet deployment/token-refresher Scaled up replica set token-refresher-5dbcf88876 to 1 openshift-console 42m Normal Created pod/console-569c4c4669-gdr7m Created container console openshift-console-operator 42m Normal ConfigMapUpdated deployment/console-operator Updated ConfigMap/console-config -n openshift-console:... openshift-kube-apiserver-operator 42m Normal ObservedConfigChanged deployment/kube-apiserver-operator Writing updated observed config:   map[string]any{... 
openshift-kube-apiserver-operator 42m Normal ObserveExternalRegistryHostnameChanged deployment/kube-apiserver-operator External registry hostname changed to [default-route-openshift-image-registry.apps.qeaisrhods-c13.abmw.s1.devshift.org] openshift-monitoring 42m Normal Pulling pod/sre-ebs-iops-reporter-1-x89c4 Pulling image "quay.io/app-sre/managed-prometheus-exporter-initcontainer:latest" openshift-image-registry 42m Normal ScalingReplicaSet deployment/image-registry Scaled up replica set image-registry-55b7d998b9 to 2 from 1 openshift-image-registry 42m Normal SuccessfulCreate replicaset/image-registry-55b7d998b9 Created pod: image-registry-55b7d998b9-479fl openshift-monitoring 42m Normal AddedInterface pod/sre-ebs-iops-reporter-1-x89c4 Add eth0 [10.131.0.23/23] from ovn-kubernetes openshift-console 42m Normal Pulled pod/console-569c4c4669-gdr7m Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" already present on machine openshift-image-registry 42m Normal SuccessfulDelete replicaset/image-registry-5588bdd7b4 Deleted pod: image-registry-5588bdd7b4-4mffb openshift-image-registry 42m Normal Killing pod/image-registry-5588bdd7b4-4mffb Stopping container registry openshift-monitoring 42m Normal Pulling pod/token-refresher-5dbcf88876-cbn8j Pulling image "quay.io/observatorium/token-refresher@sha256:6ce9b80cd1d907cb6c9ed2a18612f386f7503257772d1d88155a4a2e6773fd00" openshift-console 42m Normal Started pod/console-569c4c4669-gdr7m Started container console openshift-validation-webhook 42m Normal Pulled pod/validation-webhook-j7r6j Successfully pulled image "quay.io/app-sre/managed-cluster-validating-webhooks@sha256:3b13c3a89da30c5fbfaf7529ec3175dd43053c508d4bd09c79ef369d53ecc023" in 5.837281457s (5.837289722s including waiting) openshift-etcd-operator 42m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6; 0 nodes have achieved new revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6; 0 nodes have achieved new revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" openshift-monitoring 42m Normal Pulled pod/sre-dns-latency-exporter-t9jjt Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-base:latest" in 6.077398377s (6.077412429s including waiting) openshift-monitoring 42m Normal Created pod/sre-dns-latency-exporter-t9jjt Created container main openshift-validation-webhook 42m Normal Created pod/validation-webhook-j7r6j Created container webhooks openshift-validation-webhook 42m Normal Started pod/validation-webhook-j7r6j Started container webhooks openshift-monitoring 42m Normal Started pod/sre-dns-latency-exporter-t9jjt Started container main openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-route-monitor-operator namespace openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-splunk-forwarder-operator namespace openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-security namespace openshift-kube-controller-manager 42m Normal 
CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-build-test namespace openshift-kube-apiserver-operator 42m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 10 triggered by "required configmap/config has changed" openshift-monitoring 42m Normal AddedInterface pod/token-refresher-5dbcf88876-cbn8j Add eth0 [10.128.2.23/23] from ovn-kubernetes openshift-monitoring 42m Normal Pulled pod/token-refresher-5dbcf88876-cbn8j Successfully pulled image "quay.io/observatorium/token-refresher@sha256:6ce9b80cd1d907cb6c9ed2a18612f386f7503257772d1d88155a4a2e6773fd00" in 928.921029ms (928.933615ms including waiting) openshift-kube-controller-manager 42m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-197-197.ec2.internal created SCC ranges for openshift-sre-pruning namespace openshift-console-operator 42m Normal DeploymentUpdated deployment/console-operator Updated Deployment.apps/console -n openshift-console because it changed openshift-console 42m Normal SuccessfulDelete replicaset/console-569c4c4669 Deleted pod: console-569c4c4669-gdr7m openshift-console 42m Normal SuccessfulDelete replicaset/console-569c4c4669 Deleted pod: console-569c4c4669-p6rk8 openshift-console 42m Normal ScalingReplicaSet deployment/console Scaled down replica set console-569c4c4669 to 0 from 2 openshift-console 42m Normal ScalingReplicaSet deployment/console Scaled up replica set console-7db75d8d45 to 2 openshift-monitoring 42m Normal Started pod/token-refresher-5dbcf88876-cbn8j Started container token-refresher openshift-monitoring 42m Normal Created pod/token-refresher-5dbcf88876-cbn8j Created container token-refresher openshift-console 42m Normal SuccessfulCreate replicaset/console-7db75d8d45 Created pod: console-7db75d8d45-dzkhb openshift-console 42m Normal SuccessfulCreate replicaset/console-7db75d8d45 Created pod: console-7db75d8d45-7vkqx openshift-authentication-operator 42m Normal SecretCreated deployment/authentication-operator Created Secret/v4-0-config-user-template-error -n openshift-authentication because it was missing openshift-kube-apiserver-operator 42m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/revision-status-10 -n openshift-kube-apiserver because it was missing openshift-monitoring 42m Normal Started pod/sre-dns-latency-exporter-62rmk Started container main openshift-validation-webhook 42m Normal Pulled pod/validation-webhook-p4gz5 Successfully pulled image "quay.io/app-sre/managed-cluster-validating-webhooks@sha256:3b13c3a89da30c5fbfaf7529ec3175dd43053c508d4bd09c79ef369d53ecc023" in 7.757377241s (7.757391043s including waiting) openshift-validation-webhook 42m Normal Created pod/validation-webhook-p4gz5 Created container webhooks openshift-monitoring 42m Normal Pulled pod/sre-dns-latency-exporter-62rmk Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-base:latest" in 7.891972814s (7.891980894s including waiting) openshift-validation-webhook 42m Normal Started pod/validation-webhook-p4gz5 Started container webhooks openshift-console 42m Normal Killing pod/console-569c4c4669-gdr7m Stopping container console openshift-monitoring 42m Normal Created pod/sre-dns-latency-exporter-62rmk Created container main default 42m Normal SetDesiredConfig machineconfigpool/worker Targeted node ip-10-0-160-152.ec2.internal to config rendered-worker-c37c7a9e551f049d382df8406f11fe9b openshift-authentication-operator 42m Normal SecretCreated deployment/authentication-operator Created 
Secret/v4-0-config-user-template-login -n openshift-authentication because it was missing openshift-kube-apiserver-operator 42m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-pod-10 -n openshift-kube-apiserver because it was missing openshift-monitoring 42m Normal Started pod/sre-ebs-iops-reporter-1-x89c4 Started container setupcreds openshift-kube-apiserver-operator 42m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/config-10 -n openshift-kube-apiserver because it was missing openshift-monitoring 42m Normal Started pod/sre-stuck-ebs-vols-1-7pl6b Started container setupcreds openshift-security 42m Normal SuccessfulCreate daemonset/audit-exporter Created pod: audit-exporter-th592 openshift-security 42m Normal SuccessfulCreate daemonset/audit-exporter Created pod: audit-exporter-vscxm openshift-security 42m Normal SuccessfulCreate daemonset/audit-exporter Created pod: audit-exporter-7bwkj default 42m Normal SetDesiredConfig machineconfigpool/master Targeted node ip-10-0-239-132.ec2.internal to config rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 openshift-monitoring 42m Normal Created pod/sre-stuck-ebs-vols-1-7pl6b Created container setupcreds default 42m Normal ConfigDriftMonitorStopped node/ip-10-0-160-152.ec2.internal Config Drift Monitor stopped openshift-security 42m Warning FailedMount pod/audit-exporter-vscxm MountVolume.SetUp failed for volume "tls-certs-secret" : secret "audit-exporter-certs" not found openshift-monitoring 42m Normal Created pod/sre-ebs-iops-reporter-1-x89c4 Created container setupcreds openshift-monitoring 42m Normal Pulled pod/sre-ebs-iops-reporter-1-x89c4 Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-initcontainer:latest" in 4.048853284s (4.048862598s including waiting) openshift-security 42m Warning FailedMount pod/audit-exporter-th592 MountVolume.SetUp failed for volume "tls-certs-secret" : secret "audit-exporter-certs" not found default 42m Normal Cordon node/ip-10-0-160-152.ec2.internal Cordoned node to apply update openshift-monitoring 42m Normal Pulled pod/sre-stuck-ebs-vols-1-7pl6b Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-initcontainer:latest" in 4.578465695s (4.578473518s including waiting) default 42m Normal Drain node/ip-10-0-160-152.ec2.internal Draining node to update config. openshift-security 42m Warning FailedMount pod/audit-exporter-7bwkj MountVolume.SetUp failed for volume "tls-certs-secret" : secret "audit-exporter-certs" not found openshift-security 42m Normal AddedInterface pod/audit-exporter-th592 Add eth0 [10.128.0.48/23] from ovn-kubernetes openshift-security 42m Normal AddedInterface pod/audit-exporter-7bwkj Add eth0 [10.129.0.35/23] from ovn-kubernetes openshift-security 42m Normal Pulling pod/audit-exporter-7bwkj Pulling image "quay.io/app-sre/splunk-audit-exporter@sha256:bbca8dfd77d15c6dde3495985c1a75354ad79339ecba6820e7ceef2282422964" openshift-apiserver-operator 42m Normal RevisionTriggered deployment/openshift-apiserver-operator new revision 2 triggered by "configmap/audit has changed" openshift-console-operator 42m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: Changes made during sync updates, additional sync expected." 
to "SyncLoopRefreshProgressing: Working toward version 4.13.0-rc.0, 1 replicas available" default 42m Normal AnnotationChange machineconfigpool/master (combined from similar events): Node ip-10-0-239-132.ec2.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 openshift-apiserver-operator 42m Normal ConfigMapUpdated deployment/openshift-apiserver-operator Updated ConfigMap/audit -n openshift-apiserver:... openshift-security 42m Normal Pulling pod/audit-exporter-th592 Pulling image "quay.io/app-sre/splunk-audit-exporter@sha256:bbca8dfd77d15c6dde3495985c1a75354ad79339ecba6820e7ceef2282422964" openshift-authentication-operator 42m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.") openshift-kube-apiserver-operator 42m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-10 -n openshift-kube-apiserver because it was missing openshift-authentication 42m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-6cd75d67b9 to 1 from 0 openshift-authentication 42m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-86966797f8 to 2 from 3 default 42m Normal ConfigDriftMonitorStopped node/ip-10-0-239-132.ec2.internal Config Drift Monitor stopped openshift-authentication 42m Normal SuccessfulDelete replicaset/oauth-openshift-86966797f8 Deleted pod: oauth-openshift-86966797f8-g5rm7 default 42m Normal Cordon node/ip-10-0-239-132.ec2.internal Cordoned node to apply update default 42m Normal Drain node/ip-10-0-239-132.ec2.internal Draining node to update config. openshift-security 42m Normal Pulling pod/audit-exporter-vscxm Pulling image "quay.io/app-sre/splunk-audit-exporter@sha256:bbca8dfd77d15c6dde3495985c1a75354ad79339ecba6820e7ceef2282422964" openshift-security 42m Normal AddedInterface pod/audit-exporter-vscxm Add eth0 [10.130.0.56/23] from ovn-kubernetes openshift-apiserver-operator 42m Normal ConfigMapCreated deployment/openshift-apiserver-operator Created ConfigMap/revision-status-2 -n openshift-apiserver because it was missing openshift-apiserver-operator 42m Normal ConfigMapCreated deployment/openshift-apiserver-operator Created ConfigMap/audit-2 -n openshift-apiserver because it was missing default 42m Normal AnnotationChange machineconfigpool/master Node ip-10-0-239-132.ec2.internal now has machineconfiguration.openshift.io/state=Working openshift-authentication 42m Normal Killing pod/oauth-openshift-86966797f8-g5rm7 Stopping container oauth-openshift openshift-authentication 42m Normal SuccessfulCreate replicaset/oauth-openshift-6cd75d67b9 Created pod: oauth-openshift-6cd75d67b9-btb4m openshift-apiserver-operator 42m Normal RevisionCreate deployment/openshift-apiserver-operator Revision 1 created because configmap/audit has changed openshift-authentication-operator 42m Normal ConfigMapUpdated deployment/authentication-operator Updated ConfigMap/audit -n openshift-oauth-apiserver:... 
openshift-authentication-operator 42m Normal RevisionTriggered deployment/authentication-operator new revision 2 triggered by "configmap/audit has changed" openshift-security 42m Normal Pulled pod/audit-exporter-th592 Successfully pulled image "quay.io/app-sre/splunk-audit-exporter@sha256:bbca8dfd77d15c6dde3495985c1a75354ad79339ecba6820e7ceef2282422964" in 1.47573789s (1.475749226s including waiting) openshift-etcd 42m Warning Unhealthy pod/etcd-ip-10-0-239-132.ec2.internal Startup probe failed: Get "https://10.0.239.132:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) openshift-etcd 42m Warning ProbeError pod/etcd-ip-10-0-239-132.ec2.internal Startup probe error: Get "https://10.0.239.132:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)... openshift-monitoring 42m Normal Pulled pod/sre-stuck-ebs-vols-1-7pl6b Container image "quay.io/app-sre/managed-prometheus-exporter-base:latest" already present on machine openshift-security 42m Normal Started pod/audit-exporter-th592 Started container audit-exporter openshift-security 42m Normal Created pod/audit-exporter-th592 Created container audit-exporter openshift-security 42m Normal Pulled pod/audit-exporter-vscxm Successfully pulled image "quay.io/app-sre/splunk-audit-exporter@sha256:bbca8dfd77d15c6dde3495985c1a75354ad79339ecba6820e7ceef2282422964" in 1.548780971s (1.548793674s including waiting) openshift-monitoring 42m Normal Started pod/sre-ebs-iops-reporter-1-x89c4 Started container main openshift-kube-apiserver-operator 42m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/oauth-metadata-10 -n openshift-kube-apiserver because it was missing openshift-monitoring 42m Normal Created pod/sre-ebs-iops-reporter-1-x89c4 Created container main openshift-security 42m Normal Created pod/audit-exporter-7bwkj Created container audit-exporter openshift-security 42m Normal Pulled pod/audit-exporter-7bwkj Successfully pulled image "quay.io/app-sre/splunk-audit-exporter@sha256:bbca8dfd77d15c6dde3495985c1a75354ad79339ecba6820e7ceef2282422964" in 1.399076425s (1.399088947s including waiting) openshift-monitoring 42m Normal Pulled pod/sre-ebs-iops-reporter-1-x89c4 Container image "quay.io/app-sre/managed-prometheus-exporter-base:latest" already present on machine openshift-monitoring 42m Normal Created pod/sre-stuck-ebs-vols-1-7pl6b Created container main openshift-monitoring 42m Normal Started pod/sre-stuck-ebs-vols-1-7pl6b Started container main openshift-security 42m Normal Started pod/audit-exporter-7bwkj Started container audit-exporter openshift-apiserver-operator 42m Normal ObservedConfigChanged deployment/openshift-apiserver-operator Writing updated observed config:   map[string]any{... openshift-security 42m Normal Created pod/audit-exporter-vscxm Created container audit-exporter openshift-kube-apiserver-operator 42m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/bound-sa-token-signing-certs-10 -n openshift-kube-apiserver because it was missing openshift-apiserver-operator 42m Normal ConfigMapUpdated deployment/openshift-apiserver-operator Updated ConfigMap/config -n openshift-apiserver:... openshift-kube-apiserver-operator 42m Normal ConfigMapUpdated deployment/kube-apiserver-operator Updated ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver:... 
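The RevisionTriggered and ConfigMapCreated events above are the kube-apiserver-operator staging static-pod revision 10 (revision-status-10, kube-apiserver-pod-10, config-10, ...) after the required configmap/config changed. One way to follow such a rollout, assuming the usual kubeapiserver/cluster operator resource and its nodeStatuses fields, is:

  # Current vs. target revision per control-plane node, as reported by the operator
  oc get kubeapiserver cluster -o jsonpath='{range .status.nodeStatuses[*]}{.nodeName}{"\t"}{.currentRevision}{" -> "}{.targetRevision}{"\n"}{end}'
  # The staged revision payloads live as numbered configmaps in the operand namespace
  oc get configmaps -n openshift-kube-apiserver | grep revision-status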
openshift-security 42m Normal Started pod/audit-exporter-vscxm Started container audit-exporter openshift-monitoring 42m Normal Killing pod/prometheus-k8s-1 Stopping container kube-rbac-proxy-thanos openshift-monitoring 42m Normal Killing pod/alertmanager-main-1 Stopping container prom-label-proxy openshift-network-diagnostics 42m Normal Killing pod/network-check-source-677bdb7d9-4sw4t Stopping container check-endpoints openshift-monitoring 42m Normal SuccessfulCreate replicationcontroller/sre-ebs-iops-reporter-1 Created pod: sre-ebs-iops-reporter-1-5p7mx openshift-monitoring 42m Normal Killing pod/sre-ebs-iops-reporter-1-deploy Stopping container deployment openshift-monitoring 42m Normal Killing pod/alertmanager-main-1 Stopping container kube-rbac-proxy openshift-network-diagnostics 42m Normal SuccessfulCreate replicaset/network-check-source-677bdb7d9 Created pod: network-check-source-677bdb7d9-m9sqk openshift-monitoring 42m Normal Killing pod/alertmanager-main-1 Stopping container kube-rbac-proxy-metric openshift-monitoring 42m Normal Killing pod/prometheus-k8s-1 Stopping container config-reloader openshift-monitoring 42m Normal Killing pod/alertmanager-main-1 Stopping container config-reloader openshift-monitoring 42m Normal Killing pod/prometheus-k8s-1 Stopping container prometheus-proxy openshift-monitoring 42m Normal Killing pod/prometheus-k8s-1 Stopping container kube-rbac-proxy openshift-monitoring 42m Normal Killing pod/prometheus-k8s-1 Stopping container prometheus default 42m Normal NodeNotSchedulable node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal status is now: NodeNotSchedulable openshift-authentication-operator 42m Normal ConfigMapCreated deployment/authentication-operator Created ConfigMap/revision-status-2 -n openshift-oauth-apiserver because it was missing openshift-authentication-operator 42m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4." 
to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" openshift-kube-apiserver-operator 42m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 9"),Available message changed from "StaticPodsAvailable: 2 nodes are active; 1 nodes are at revision 0; 2 nodes are at revision 9" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 9" openshift-console 42m Normal SuccessfulCreate replicaset/downloads-fcdb597fd Created pod: downloads-fcdb597fd-grdr7 openshift-monitoring 42m Normal Killing pod/prometheus-adapter-5b77f96bd4-lkn8s Stopping container prometheus-adapter openshift-console 42m Normal Killing pod/downloads-fcdb597fd-24zcn Stopping container download-server openshift-kube-apiserver-operator 42m Normal NodeCurrentRevisionChanged deployment/kube-apiserver-operator Updated node "ip-10-0-140-6.ec2.internal" from revision 0 to 9 because static pod is ready openshift-monitoring 42m Normal SuccessfulCreate replicationcontroller/sre-stuck-ebs-vols-1 Created pod: sre-stuck-ebs-vols-1-ws5wv openshift-monitoring 42m Normal SuccessfulCreate replicaset/prometheus-adapter-5b77f96bd4 Created pod: prometheus-adapter-5b77f96bd4-7lwwj openshift-monitoring 42m Normal Killing pod/alertmanager-main-1 Stopping container alertmanager-proxy openshift-kube-apiserver-operator 42m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/etcd-serving-ca-10 -n openshift-kube-apiserver because it was missing openshift-console 42m Normal Pulling pod/downloads-fcdb597fd-grdr7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec5351e220112a5b70451310b563175ae713c4d2864765c861b969730515a21b" openshift-monitoring 42m Normal Pulling pod/sre-stuck-ebs-vols-1-ws5wv Pulling image "quay.io/app-sre/managed-prometheus-exporter-initcontainer:latest" openshift-monitoring 42m Normal Killing pod/sre-stuck-ebs-vols-1-7pl6b Stopping container main openshift-console 42m Normal AddedInterface pod/downloads-fcdb597fd-grdr7 Add eth0 [10.128.2.27/23] from ovn-kubernetes openshift-monitoring 42m Normal Killing pod/configure-alertmanager-operator-registry-ztskr Stopping container registry-server openshift-monitoring 42m Normal Killing pod/thanos-querier-7bbf5b5dcd-nrjft Stopping container thanos-query openshift-network-diagnostics 42m Normal AddedInterface pod/network-check-source-677bdb7d9-m9sqk Add eth0 [10.128.2.25/23] from ovn-kubernetes openshift-monitoring 42m Normal ReplicationControllerScaled deploymentconfig/sre-ebs-iops-reporter Scaled replication controller "sre-ebs-iops-reporter-1" from 1 to 0 openshift-monitoring 42m Normal Killing pod/thanos-querier-7bbf5b5dcd-nrjft Stopping container kube-rbac-proxy-rules openshift-network-diagnostics 42m Normal Started pod/network-check-source-677bdb7d9-m9sqk Started container check-endpoints openshift-network-diagnostics 42m Normal Created pod/network-check-source-677bdb7d9-m9sqk Created container check-endpoints openshift-network-diagnostics 42m Normal Pulled pod/network-check-source-677bdb7d9-m9sqk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" already present on machine openshift-monitoring 42m Normal SuccessfulDelete replicationcontroller/sre-ebs-iops-reporter-1 Deleted pod: sre-ebs-iops-reporter-1-5p7mx 
openshift-monitoring 42m Normal SuccessfulCreate replicaset/thanos-querier-7bbf5b5dcd Created pod: thanos-querier-7bbf5b5dcd-fvmbq openshift-monitoring 42m Normal Killing pod/sre-ebs-iops-reporter-1-x89c4 Stopping container main openshift-monitoring 42m Normal AddedInterface pod/sre-stuck-ebs-vols-1-ws5wv Add eth0 [10.128.2.26/23] from ovn-kubernetes openshift-network-diagnostics 42m Warning FastControllerResync node/ip-10-0-232-8.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-monitoring 42m Normal AddedInterface pod/sre-ebs-iops-reporter-1-5p7mx Add eth0 [10.128.2.24/23] from ovn-kubernetes openshift-monitoring 42m Normal Pulling pod/sre-ebs-iops-reporter-1-5p7mx Pulling image "quay.io/app-sre/managed-prometheus-exporter-initcontainer:latest" openshift-network-diagnostics 42m Warning FastControllerResync node/ip-10-0-232-8.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-apiserver 42m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-5f568869f to 1 from 0 openshift-apiserver 42m Normal Killing pod/apiserver-7475f65d84-lm7x6 Stopping container openshift-apiserver-check-endpoints openshift-apiserver-operator 42m Normal DeploymentUpdated deployment/openshift-apiserver-operator Updated Deployment.apps/apiserver -n openshift-apiserver because it changed openshift-apiserver-operator 42m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7." 
openshift-apiserver 42m Normal SuccessfulCreate replicaset/apiserver-5f568869f Created pod: apiserver-5f568869f-mpswm openshift-monitoring 42m Normal SuccessfulCreate statefulset/alertmanager-main create Pod alertmanager-main-1 in StatefulSet alertmanager-main successful openshift-apiserver 42m Normal Killing pod/apiserver-7475f65d84-lm7x6 Stopping container openshift-apiserver openshift-apiserver 42m Normal SuccessfulDelete replicaset/apiserver-7475f65d84 Deleted pod: apiserver-7475f65d84-lm7x6 openshift-apiserver-operator 42m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7.") openshift-apiserver 42m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-7475f65d84 to 2 from 3 openshift-authentication-operator 42m Normal ConfigMapCreated deployment/authentication-operator Created ConfigMap/audit-2 -n openshift-oauth-apiserver because it was missing openshift-authentication 42m Normal SuccessfulCreate replicaset/oauth-openshift-86966797f8 Created pod: oauth-openshift-86966797f8-vtzkz openshift-route-controller-manager 42m Normal SuccessfulCreate replicaset/route-controller-manager-9b45479c5 Created pod: route-controller-manager-9b45479c5-nfwk9 openshift-cluster-storage-operator 42m Normal SuccessfulCreate replicaset/csi-snapshot-controller-f58c44499 Created pod: csi-snapshot-controller-f58c44499-rnqw9 openshift-console-operator 42m Normal SuccessfulCreate replicaset/console-operator-57cbc6b88f Created pod: console-operator-57cbc6b88f-tbq55 openshift-console-operator 42m Normal Killing pod/console-operator-57cbc6b88f-snwcj Stopping container conversion-webhook-server openshift-kube-apiserver-operator 42m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-server-ca-10 -n openshift-kube-apiserver because it was missing openshift-cluster-storage-operator 42m Normal SuccessfulCreate replicaset/csi-snapshot-webhook-75476bf784 Created pod: csi-snapshot-webhook-75476bf784-sfhhx openshift-cloud-credential-operator 42m Normal SuccessfulCreate replicaset/pod-identity-webhook-b645775d7 Created pod: pod-identity-webhook-b645775d7-bhp9j openshift-cloud-credential-operator 42m Normal Killing pod/pod-identity-webhook-b645775d7-js8hv Stopping container pod-identity-webhook openshift-authentication 42m Normal Killing pod/oauth-openshift-86966797f8-sbdp5 Stopping container oauth-openshift openshift-monitoring 42m Normal SuccessfulCreate statefulset/prometheus-k8s create Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful openshift-cloud-controller-manager-operator 42m Normal SuccessfulCreate replicaset/cluster-cloud-controller-manager-operator-5dcbbcf757 Created pod: cluster-cloud-controller-manager-operator-5dcbbcf757-wggmm openshift-cloud-controller-manager-operator 42m Normal Killing pod/cluster-cloud-controller-manager-operator-5dcbbcf757-fqvtw Stopping container cluster-cloud-controller-manager openshift-cluster-version 42m Normal SuccessfulCreate replicaset/cluster-version-operator-5d74b9d6f5 Created pod: cluster-version-operator-5d74b9d6f5-689xc openshift-cloud-credential-operator 42m Normal AddedInterface pod/pod-identity-webhook-b645775d7-bhp9j Add eth0 [10.130.0.61/23] from ovn-kubernetes openshift-cluster-storage-operator 42m Normal AddedInterface pod/csi-snapshot-webhook-75476bf784-sfhhx 
Add eth0 [10.130.0.60/23] from ovn-kubernetes openshift-console-operator 42m Normal AddedInterface pod/console-operator-57cbc6b88f-tbq55 Add eth0 [10.128.0.49/23] from ovn-kubernetes openshift-cluster-storage-operator 42m Normal Pulling pod/csi-snapshot-controller-f58c44499-rnqw9 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f6985210e2dec2b96cd8cd1dc6965ce2710b23b2c515d9ae67a694245bd41082" openshift-cluster-storage-operator 42m Normal AddedInterface pod/csi-snapshot-controller-f58c44499-rnqw9 Add eth0 [10.130.0.59/23] from ovn-kubernetes openshift-console-operator 42m Normal Pulling pod/console-operator-57cbc6b88f-tbq55 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6dd6ba37d430e9e8e248b4c5911ef0903f8bd8d05451ed65eeb1d9d2b3c42e4" openshift-authentication-operator 42m Normal RevisionCreate deployment/authentication-operator Revision 1 created because configmap/audit has changed openshift-cluster-version 42m Normal Pulling pod/cluster-version-operator-5d74b9d6f5-689xc Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" openshift-cluster-storage-operator 42m Normal Pulling pod/csi-snapshot-webhook-75476bf784-sfhhx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7a32310238d69d56d35be8f7de426bdbedf96ff73edcd198698ac174c6d3c34" openshift-cloud-controller-manager-operator 42m Normal Pulling pod/cluster-cloud-controller-manager-operator-5dcbbcf757-wggmm Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77345c48a82b167f67364ffd41788160b5d06e746946d9ea67191fa18cf34806" openshift-cloud-credential-operator 42m Normal Pulling pod/pod-identity-webhook-b645775d7-bhp9j Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e248571068c87bc5b2f69bd4fc2bc3934d8bcd2b2a7fecadc754a30e06ac592" openshift-controller-manager 42m Normal SuccessfulCreate replicaset/controller-manager-c5c84d6f9 Created pod: controller-manager-c5c84d6f9-vpk76 openshift-controller-manager 42m Normal Killing pod/controller-manager-c5c84d6f9-x72pp Stopping container controller-manager openshift-route-controller-manager 42m Normal Killing pod/route-controller-manager-9b45479c5-kkjqb Stopping container route-controller-manager openshift-cluster-storage-operator 42m Normal Killing pod/csi-snapshot-controller-f58c44499-qvgsh Stopping container snapshot-controller openshift-cluster-version 42m Normal Killing pod/cluster-version-operator-5d74b9d6f5-qzcfb Stopping container cluster-version-operator openshift-kube-controller-manager 42m Normal Killing pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal Stopping container guard openshift-cloud-controller-manager-operator 42m Normal Killing pod/cluster-cloud-controller-manager-operator-5dcbbcf757-fqvtw Stopping container config-sync-controllers openshift-cluster-storage-operator 42m Normal Killing pod/csi-snapshot-webhook-75476bf784-7vh6f Stopping container webhook openshift-etcd-operator 42m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" 
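The etcd-operator messages above report 2 of 3 members available while etcd on ip-10-0-239-132 restarts into revision 7 (its startup probe on :9980/readyz is still timing out at this point). A quick membership/health check, assuming the etcd pods follow the usual etcd-<node> naming and carry an etcdctl container with its client environment pre-set, could be:

  oc rsh -n openshift-etcd -c etcdctl etcd-ip-10-0-140-6.ec2.internal etcdctl member list -w table
  oc rsh -n openshift-etcd -c etcdctl etcd-ip-10-0-140-6.ec2.internal etcdctl endpoint health --cluster

Later in this log the operator reports "3 members are available" again, so the degradation here is the expected blip while that member restarts.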
openshift-kube-apiserver 42m Normal Killing pod/kube-apiserver-guard-ip-10-0-239-132.ec2.internal Stopping container guard openshift-console-operator 42m Normal Killing pod/console-operator-57cbc6b88f-snwcj Stopping container console-operator openshift-kube-apiserver-operator 42m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kubelet-serving-ca-10 -n openshift-kube-apiserver because it was missing openshift-monitoring 42m Normal Pulled pod/sre-ebs-iops-reporter-1-5p7mx Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-initcontainer:latest" in 5.23431352s (5.234321591s including waiting) openshift-monitoring 42m Normal Started pod/sre-stuck-ebs-vols-1-ws5wv Started container setupcreds openshift-machine-config-operator 42m Normal SuccessfulCreate replicaset/machine-config-controller-7f488c778d Created pod: machine-config-controller-7f488c778d-c8svb openshift-machine-config-operator 42m Normal Killing pod/machine-config-controller-7f488c778d-fvfx4 Stopping container oauth-proxy openshift-monitoring 42m Normal Created pod/sre-stuck-ebs-vols-1-ws5wv Created container setupcreds openshift-apiserver-operator 42m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" openshift-monitoring 42m Normal Pulled pod/sre-stuck-ebs-vols-1-ws5wv Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-initcontainer:latest" in 5.216817092s (5.216832467s including waiting) openshift-monitoring 42m Normal AddedInterface pod/configure-alertmanager-operator-registry-w7zdk Add eth0 [10.128.2.28/23] from ovn-kubernetes openshift-monitoring 42m Normal Created pod/sre-ebs-iops-reporter-1-5p7mx Created container setupcreds openshift-apiserver-operator 42m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." 
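The "observed generation is 5, desired generation is 6" wording above compares .metadata.generation (bumped when the Deployment spec changes) with .status.observedGeneration (bumped once the controller has processed it); after they match, the operator switches to counting how many pods run the latest generation. The same comparison taken directly from the Deployment:

  oc get deployment apiserver -n openshift-apiserver \
    -o jsonpath='{.metadata.generation}{" observed="}{.status.observedGeneration}{" updated="}{.status.updatedReplicas}{"/"}{.status.replicas}{"\n"}'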
openshift-monitoring 42m Normal Started pod/sre-ebs-iops-reporter-1-5p7mx Started container setupcreds openshift-monitoring 42m Normal Killing pod/sre-ebs-iops-reporter-1-5p7mx Stopping container setupcreds openshift-monitoring 42m Normal Pulling pod/configure-alertmanager-operator-registry-w7zdk Pulling image "quay.io/app-sre/configure-alertmanager-operator-registry@sha256:4cd6cdcb961b519e306ff2ea3c276ef4037edb429e14df405bc3ccbed8531ac9" openshift-machine-config-operator 42m Normal Killing pod/machine-config-controller-7f488c778d-fvfx4 Stopping container machine-config-controller openshift-cluster-version 42m Normal Pulled pod/cluster-version-operator-5d74b9d6f5-689xc Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" in 3.114646106s (3.114653372s including waiting) openshift-cluster-storage-operator 42m Normal Pulled pod/csi-snapshot-controller-f58c44499-rnqw9 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f6985210e2dec2b96cd8cd1dc6965ce2710b23b2c515d9ae67a694245bd41082" in 2.390943036s (2.390952921s including waiting) openshift-cloud-credential-operator 42m Normal Pulled pod/pod-identity-webhook-b645775d7-bhp9j Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e248571068c87bc5b2f69bd4fc2bc3934d8bcd2b2a7fecadc754a30e06ac592" in 2.251926317s (2.251932869s including waiting) openshift-cloud-credential-operator 42m Normal Created pod/pod-identity-webhook-b645775d7-bhp9j Created container pod-identity-webhook openshift-cluster-storage-operator 42m Normal Created pod/csi-snapshot-controller-f58c44499-rnqw9 Created container snapshot-controller openshift-cloud-credential-operator 42m Normal Started pod/pod-identity-webhook-b645775d7-bhp9j Started container pod-identity-webhook openshift-cluster-storage-operator 42m Normal Started pod/csi-snapshot-controller-f58c44499-rnqw9 Started container snapshot-controller openshift-console-operator 42m Normal Created pod/console-operator-57cbc6b88f-tbq55 Created container console-operator openshift-cluster-storage-operator 42m Normal Pulled pod/csi-snapshot-webhook-75476bf784-sfhhx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7a32310238d69d56d35be8f7de426bdbedf96ff73edcd198698ac174c6d3c34" in 2.310914307s (2.310929064s including waiting) openshift-cluster-storage-operator 42m Normal Created pod/csi-snapshot-webhook-75476bf784-sfhhx Created container webhook openshift-console-operator 42m Normal Started pod/console-operator-57cbc6b88f-tbq55 Started container console-operator openshift-console-operator 42m Normal Pulled pod/console-operator-57cbc6b88f-tbq55 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6dd6ba37d430e9e8e248b4c5911ef0903f8bd8d05451ed65eeb1d9d2b3c42e4" already present on machine openshift-cluster-storage-operator 42m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" to "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" openshift-cluster-version 42m Normal Started pod/cluster-version-operator-5d74b9d6f5-689xc Started container cluster-version-operator openshift-cluster-storage-operator 42m Normal Started 
pod/csi-snapshot-webhook-75476bf784-sfhhx Started container webhook openshift-cluster-version 42m Normal Created pod/cluster-version-operator-5d74b9d6f5-689xc Created container cluster-version-operator openshift-console-operator 42m Normal Started pod/console-operator-57cbc6b88f-tbq55 Started container conversion-webhook-server openshift-console-operator 42m Normal Pulled pod/console-operator-57cbc6b88f-tbq55 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6dd6ba37d430e9e8e248b4c5911ef0903f8bd8d05451ed65eeb1d9d2b3c42e4" in 2.692262921s (2.692268738s including waiting) openshift-kube-apiserver-operator 42m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/sa-token-signing-certs-10 -n openshift-kube-apiserver because it was missing openshift-cluster-version 42m Normal LeaderElection configmap/version ip-10-0-140-6_6f0b4ad2-b0fe-42b9-9245-7ec1d816bb3a became leader openshift-console-operator 42m Normal Created pod/console-operator-57cbc6b88f-tbq55 Created container conversion-webhook-server openshift-cloud-controller-manager-operator 42m Normal Started pod/cluster-cloud-controller-manager-operator-5dcbbcf757-wggmm Started container cluster-cloud-controller-manager openshift-cloud-controller-manager-operator 42m Normal Created pod/cluster-cloud-controller-manager-operator-5dcbbcf757-wggmm Created container cluster-cloud-controller-manager openshift-cloud-controller-manager-operator 42m Normal Pulled pod/cluster-cloud-controller-manager-operator-5dcbbcf757-wggmm Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77345c48a82b167f67364ffd41788160b5d06e746946d9ea67191fa18cf34806" in 3.484405049s (3.484410781s including waiting) openshift-cluster-version 42m Normal LeaderElection lease/version ip-10-0-140-6_6f0b4ad2-b0fe-42b9-9245-7ec1d816bb3a became leader openshift-cloud-controller-manager-operator 42m Normal Pulled pod/cluster-cloud-controller-manager-operator-5dcbbcf757-wggmm Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77345c48a82b167f67364ffd41788160b5d06e746946d9ea67191fa18cf34806" already present on machine openshift-machine-config-operator 42m Normal Created pod/machine-config-controller-7f488c778d-c8svb Created container machine-config-controller openshift-oauth-apiserver 42m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-74455c7c5 to 1 from 0 openshift-oauth-apiserver 42m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-8ddbf84fd to 2 from 3 openshift-authentication-operator 42m Normal DeploymentUpdated deployment/authentication-operator Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed openshift-cloud-controller-manager-operator 42m Normal Started pod/cluster-cloud-controller-manager-operator-5dcbbcf757-wggmm Started container config-sync-controllers openshift-authentication-operator 42m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" openshift-oauth-apiserver 42m Normal 
Killing pod/apiserver-8ddbf84fd-g8ssl Stopping container oauth-apiserver openshift-cloud-controller-manager-operator 42m Normal Created pod/cluster-cloud-controller-manager-operator-5dcbbcf757-wggmm Created container config-sync-controllers openshift-etcd-operator 42m Normal NodeCurrentRevisionChanged deployment/etcd-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 6 to 7 because static pod is ready openshift-etcd-operator 42m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 6; 0 nodes have achieved new revision 7" to "NodeInstallerProgressing: 2 nodes are at revision 6; 1 nodes are at revision 7",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6; 0 nodes have achieved new revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 6; 1 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" openshift-monitoring 42m Normal Pulled pod/configure-alertmanager-operator-registry-w7zdk Successfully pulled image "quay.io/app-sre/configure-alertmanager-operator-registry@sha256:4cd6cdcb961b519e306ff2ea3c276ef4037edb429e14df405bc3ccbed8531ac9" in 2.282541266s (2.282555691s including waiting) openshift-oauth-apiserver 42m Normal SuccessfulCreate replicaset/apiserver-74455c7c5 Created pod: apiserver-74455c7c5-rpzl9 openshift-machine-config-operator 42m Normal Started pod/machine-config-controller-7f488c778d-c8svb Started container oauth-proxy openshift-machine-config-operator 42m Normal AddedInterface pod/machine-config-controller-7f488c778d-c8svb Add eth0 [10.128.0.50/23] from ovn-kubernetes openshift-machine-config-operator 42m Normal Pulled pod/machine-config-controller-7f488c778d-c8svb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-monitoring 42m Normal Started pod/configure-alertmanager-operator-registry-w7zdk Started container registry-server openshift-machine-config-operator 42m Normal Created pod/machine-config-controller-7f488c778d-c8svb Created container oauth-proxy openshift-machine-config-operator 42m Normal Pulled pod/machine-config-controller-7f488c778d-c8svb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 42m Normal Created pod/configure-alertmanager-operator-registry-w7zdk Created container registry-server openshift-machine-config-operator 42m Normal Started pod/machine-config-controller-7f488c778d-c8svb Started container machine-config-controller openshift-oauth-apiserver 42m Normal SuccessfulDelete replicaset/apiserver-8ddbf84fd Deleted pod: apiserver-8ddbf84fd-g8ssl openshift-kube-apiserver-operator 42m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-audit-policies-10 -n openshift-kube-apiserver because it was missing openshift-monitoring 42m Normal Created pod/sre-stuck-ebs-vols-1-ws5wv Created container main openshift-etcd 42m Normal AddedInterface pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.36/23] from ovn-kubernetes openshift-monitoring 42m Normal Started 
pod/sre-stuck-ebs-vols-1-ws5wv Started container main openshift-monitoring 42m Normal Pulled pod/sre-stuck-ebs-vols-1-ws5wv Container image "quay.io/app-sre/managed-prometheus-exporter-base:latest" already present on machine openshift-etcd 42m Normal Started pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Started container pruner openshift-cluster-version 42m Normal LoadPayload clusterversion/version Loading payload version="4.13.0-rc.0" image="quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" openshift-cluster-version 42m Normal RetrievePayload clusterversion/version Retrieving and verifying payload version="4.13.0-rc.0" image="quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" openshift-etcd 42m Normal Pulled pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 42m Normal Created pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Created container pruner default 42m Normal NodeNotSchedulable node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal status is now: NodeNotSchedulable openshift-cluster-version 42m Normal PayloadLoaded clusterversion/version Payload loaded version="4.13.0-rc.0" image="quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" architecture="amd64" openshift-monitoring 42m Normal Killing pod/sre-ebs-iops-reporter-1-5p7mx Stopping container setupcreds openshift-kube-apiserver-operator 42m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/etcd-client-10 -n openshift-kube-apiserver because it was missing openshift-authentication-operator 42m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 3, desired generation is 4.\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" openshift-apiserver 42m Warning ProbeError pod/apiserver-7475f65d84-lm7x6 Readiness probe error: HTTP probe failed with statuscode: 500... 
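Probe failures such as the readiness 500 above are expected to be transient while the apiserver pods roll, but they are easier to audit when separated from the Normal events. Standard field selectors work for that:

  # Only warnings, across all namespaces, oldest first
  oc get events -A --field-selector type=Warning --sort-by=.lastTimestamp
  # Just the probe errors for the rolling openshift-apiserver pods
  oc get events -n openshift-apiserver --field-selector reason=ProbeError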
openshift-machine-api 42m Normal Update machine/qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2 Updated Machine qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2 openshift-apiserver 42m Warning Unhealthy pod/apiserver-7475f65d84-lm7x6 Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-kube-apiserver-operator 42m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-serving-certkey-10 -n openshift-kube-apiserver because it was missing openshift-console 42m Normal Created pod/downloads-fcdb597fd-grdr7 Created container download-server openshift-machine-api 42m Normal Update machine/qeaisrhods-c13-28wr5-infra-us-east-1a-qww78 Updated Machine qeaisrhods-c13-28wr5-infra-us-east-1a-qww78 openshift-console 42m Normal Pulled pod/downloads-fcdb597fd-grdr7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec5351e220112a5b70451310b563175ae713c4d2864765c861b969730515a21b" in 14.189427797s (14.189442642s including waiting) openshift-kube-apiserver-operator 42m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-client-token-10 -n openshift-kube-apiserver because it was missing openshift-machine-api 42m Normal DetectedUnhealthy machine/qeaisrhods-c13-28wr5-infra-us-east-1a-qww78 Machine openshift-machine-api/srep-infra-healthcheck/qeaisrhods-c13-28wr5-infra-us-east-1a-qww78/ has unhealthy node openshift-console 42m Normal Started pod/downloads-fcdb597fd-grdr7 Started container download-server openshift-kube-apiserver-operator 42m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/webhook-authenticator-10 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 42m Normal RevisionCreate deployment/kube-apiserver-operator Revision 9 created because required configmap/config has changed openshift-network-operator 42m Normal SuccessfulCreate replicaset/network-operator-6c9d58d76b Created pod: network-operator-6c9d58d76b-m2fjb openshift-console 42m Warning Unhealthy pod/downloads-fcdb597fd-grdr7 Readiness probe failed: Get "http://10.128.2.27:8080/": dial tcp 10.128.2.27:8080: connect: connection refused openshift-operator-lifecycle-manager 42m Normal Killing pod/packageserver-7c998868c6-mxs6q Stopping container packageserver openshift-console 42m Warning ProbeError pod/downloads-fcdb597fd-grdr7 Readiness probe error: Get "http://10.128.2.27:8080/": dial tcp 10.128.2.27:8080: connect: connection refused... openshift-operator-lifecycle-manager 42m Normal SuccessfulCreate replicaset/packageserver-7c998868c6 Created pod: packageserver-7c998868c6-wnqfz openshift-operator-lifecycle-manager 42m Normal Pulled pod/packageserver-7c998868c6-wnqfz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" already present on machine openshift-operator-lifecycle-manager 42m Normal Created pod/packageserver-7c998868c6-wnqfz Created container packageserver openshift-operator-lifecycle-manager 42m Warning ProbeError pod/packageserver-7c998868c6-mxs6q Readiness probe error: Get "https://10.129.0.11:5443/healthz": dial tcp 10.129.0.11:5443: connect: connection refused... 
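The DetectedUnhealthy event above is the srep-infra-healthcheck MachineHealthCheck flagging an infra machine whose node went unready during the rollout (names taken from the event itself). To see what that health check and its machines currently report:

  oc get machinehealthcheck srep-infra-healthcheck -n openshift-machine-api
  oc get machines -n openshift-machine-api -o wide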
openshift-operator-lifecycle-manager 42m Normal Started pod/packageserver-7c998868c6-wnqfz Started container packageserver openshift-operator-lifecycle-manager 42m Warning Unhealthy pod/packageserver-7c998868c6-mxs6q Readiness probe failed: Get "https://10.129.0.11:5443/healthz": dial tcp 10.129.0.11:5443: connect: connection refused openshift-network-operator 42m Normal Started pod/network-operator-6c9d58d76b-m2fjb Started container network-operator openshift-kube-storage-version-migrator 42m Normal SuccessfulCreate replicaset/migrator-579f5cd9c5 Created pod: migrator-579f5cd9c5-flz72 openshift-network-operator 42m Normal Created pod/network-operator-6c9d58d76b-m2fjb Created container network-operator openshift-operator-lifecycle-manager 42m Normal AddedInterface pod/packageserver-7c998868c6-wnqfz Add eth0 [10.130.0.62/23] from ovn-kubernetes openshift-kube-scheduler 42m Normal Killing pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal Stopping container guard openshift-network-operator 42m Normal Pulled pod/network-operator-6c9d58d76b-m2fjb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" already present on machine openshift-network-operator 42m Normal LeaderElection lease/network-operator-lock ip-10-0-140-6_659156a2-1f4f-456d-9edc-81fe8e981f5a became leader openshift-network-operator 42m Normal LeaderElection configmap/network-operator-lock ip-10-0-140-6_659156a2-1f4f-456d-9edc-81fe8e981f5a became leader openshift-kube-storage-version-migrator 42m Normal Killing pod/migrator-579f5cd9c5-sk4xj Stopping container migrator openshift-etcd-operator 42m Normal NodeTargetRevisionChanged deployment/etcd-operator Updating node "ip-10-0-140-6.ec2.internal" from revision 6 to 7 because node ip-10-0-140-6.ec2.internal with revision 6 is the oldest openshift-kube-storage-version-migrator 42m Normal AddedInterface pod/migrator-579f5cd9c5-flz72 Add eth0 [10.128.2.29/23] from ovn-kubernetes openshift-kube-storage-version-migrator 42m Normal Pulling pod/migrator-579f5cd9c5-flz72 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39ef66439265e28941d847694107b349dff04d9cc64f0b713882e1895ea2acb9" openshift-network-operator 42m Warning FastControllerResync deployment/network-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-network-operator 42m Normal Killing pod/network-operator-6c9d58d76b-pl9td Stopping container network-operator openshift-kube-storage-version-migrator 42m Normal Pulled pod/migrator-579f5cd9c5-flz72 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39ef66439265e28941d847694107b349dff04d9cc64f0b713882e1895ea2acb9" in 1.659909968s (1.659924188s including waiting) openshift-console 42m Normal Started pod/console-7db75d8d45-7vkqx Started container console openshift-kube-storage-version-migrator 42m Normal Created pod/migrator-579f5cd9c5-flz72 Created container migrator openshift-console 42m Normal Created pod/console-7db75d8d45-7vkqx Created container console openshift-kube-storage-version-migrator 42m Normal Started pod/migrator-579f5cd9c5-flz72 Started container migrator openshift-console 42m Normal Pulled pod/console-7db75d8d45-7vkqx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" already present on machine openshift-console 42m Normal AddedInterface pod/console-7db75d8d45-7vkqx Add 
eth0 [10.130.0.63/23] from ovn-kubernetes openshift-kube-apiserver 42m Normal Pulled pod/revision-pruner-10-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver-operator 42m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-10-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 42m Normal AddedInterface pod/revision-pruner-10-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.64/23] from ovn-kubernetes openshift-console 42m Normal Killing pod/console-7dc48fc574-fvlls Stopping container console openshift-console 42m Normal ScalingReplicaSet deployment/console Scaled down replica set console-7dc48fc574 to 0 from 1 openshift-kube-apiserver 42m Normal Started pod/revision-pruner-10-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-apiserver 42m Normal Created pod/revision-pruner-10-ip-10-0-197-197.ec2.internal Created container pruner openshift-console 42m Normal SuccessfulDelete replicaset/console-7dc48fc574 Deleted pod: console-7dc48fc574-fvlls openshift-monitoring 42m Normal RequirementsNotMet clusterserviceversion/configure-alertmanager-operator.v0.1.516-bdea4ea one or more requirements couldn't be found openshift-console 42m Normal Killing pod/downloads-fcdb597fd-qhkwv Stopping container download-server openshift-monitoring 42m Normal RequirementsUnknown clusterserviceversion/configure-alertmanager-operator.v0.1.516-bdea4ea requirements not yet checked openshift-console 42m Normal Pulling pod/downloads-fcdb597fd-tr9zh Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec5351e220112a5b70451310b563175ae713c4d2864765c861b969730515a21b" openshift-console 42m Normal AddedInterface pod/downloads-fcdb597fd-tr9zh Add eth0 [10.128.0.52/23] from ovn-kubernetes openshift-etcd 42m Normal Started pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Started container pruner openshift-etcd 42m Normal AddedInterface pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.37/23] from ovn-kubernetes openshift-controller-manager-operator 42m Normal ObservedConfigChanged deployment/openshift-controller-manager-operator Writing updated observed config:   map[string]any{... openshift-console 42m Normal SuccessfulCreate replicaset/downloads-fcdb597fd Created pod: downloads-fcdb597fd-tr9zh openshift-etcd-operator 42m Normal PodCreated deployment/etcd-operator Created Pod/installer-7-ip-10-0-140-6.ec2.internal -n openshift-etcd because it was missing openshift-etcd 42m Normal Created pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Created container pruner openshift-etcd 42m Normal Pulled pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-kube-apiserver-operator 42m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-10-ip-10-0-239-132.ec2.internal -n openshift-kube-apiserver because it was missing openshift-ingress 42m Warning ProbeError pod/router-default-7898b977d4-l6kqr Readiness probe error: HTTP probe failed with statuscode: 500... 
openshift-kube-apiserver 42m Normal Pulled pod/revision-pruner-10-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 42m Normal Created pod/revision-pruner-10-ip-10-0-239-132.ec2.internal Created container pruner openshift-kube-apiserver 42m Normal Started pod/revision-pruner-10-ip-10-0-239-132.ec2.internal Started container pruner openshift-kube-apiserver 42m Normal AddedInterface pod/revision-pruner-10-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.38/23] from ovn-kubernetes openshift-etcd 42m Normal Started pod/installer-7-ip-10-0-140-6.ec2.internal Started container installer openshift-etcd 42m Normal Created pod/installer-7-ip-10-0-140-6.ec2.internal Created container installer openshift-etcd 42m Normal Pulled pod/installer-7-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 42m Normal AddedInterface pod/installer-7-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.51/23] from ovn-kubernetes openshift-authentication 42m Normal Created pod/oauth-openshift-6cd75d67b9-btb4m Created container oauth-openshift openshift-authentication 42m Normal AddedInterface pod/oauth-openshift-6cd75d67b9-btb4m Add eth0 [10.128.0.53/23] from ovn-kubernetes openshift-authentication 42m Normal Pulled pod/oauth-openshift-6cd75d67b9-btb4m Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" already present on machine openshift-authentication 42m Normal Started pod/oauth-openshift-6cd75d67b9-btb4m Started container oauth-openshift openshift-monitoring 42m Normal ScalingReplicaSet deployment/configure-alertmanager-operator Scaled up replica set configure-alertmanager-operator-7b9b57dbdd to 1 openshift-monitoring 42m Normal AddedInterface pod/configure-alertmanager-operator-7b9b57dbdd-xgqtw Add eth0 [10.128.2.30/23] from ovn-kubernetes openshift-kube-apiserver-operator 42m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-10-ip-10-0-140-6.ec2.internal -n openshift-kube-apiserver because it was missing openshift-monitoring 42m Normal SuccessfulCreate replicaset/configure-alertmanager-operator-7b9b57dbdd Created pod: configure-alertmanager-operator-7b9b57dbdd-xgqtw openshift-monitoring 42m Normal Pulling pod/configure-alertmanager-operator-7b9b57dbdd-xgqtw Pulling image "quay.io/app-sre/configure-alertmanager-operator@sha256:6ecbda84a8bf59a69d77329a32bf63939018d4ea4899a6c9fe4bde1adbace56e" openshift-kube-apiserver 42m Normal Started pod/revision-pruner-10-ip-10-0-140-6.ec2.internal Started container pruner openshift-kube-apiserver 42m Normal Pulled pod/revision-pruner-10-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-authentication 42m Normal SuccessfulDelete replicaset/oauth-openshift-86966797f8 Deleted pod: oauth-openshift-86966797f8-vtzkz openshift-authentication 42m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-6cd75d67b9 to 2 from 1 openshift-kube-apiserver 42m Normal Created pod/revision-pruner-10-ip-10-0-140-6.ec2.internal Created container pruner openshift-kube-apiserver 42m Normal 
AddedInterface pod/revision-pruner-10-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.54/23] from ovn-kubernetes openshift-authentication 42m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-86966797f8 to 1 from 2 openshift-authentication 42m Normal SuccessfulCreate replicaset/oauth-openshift-6cd75d67b9 Created pod: oauth-openshift-6cd75d67b9-hnvl6 openshift-authentication-operator 42m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation" openshift-authentication-operator 42m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused" openshift-authentication-operator 42m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available changed from True to False ("OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused") openshift-multus 42m Normal ScalingReplicaSet deployment/multus-admission-controller Scaled up replica set multus-admission-controller-757b6fbf74 to 1 openshift-multus 42m Normal SuccessfulCreate replicaset/multus-admission-controller-757b6fbf74 Created pod: multus-admission-controller-757b6fbf74-mz54v openshift-multus 42m Normal Started pod/multus-admission-controller-757b6fbf74-mz54v Started container multus-admission-controller openshift-multus 42m Normal Created pod/multus-admission-controller-757b6fbf74-mz54v Created container kube-rbac-proxy openshift-multus 42m Normal Created pod/multus-admission-controller-757b6fbf74-mz54v Created container multus-admission-controller openshift-multus 42m Normal AddedInterface pod/multus-admission-controller-757b6fbf74-mz54v Add eth0 [10.128.0.55/23] from ovn-kubernetes openshift-multus 42m Normal Started pod/multus-admission-controller-757b6fbf74-mz54v Started container kube-rbac-proxy openshift-multus 42m Normal Pulled pod/multus-admission-controller-757b6fbf74-mz54v Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c3cca6e2da92a6cd38e7f20f77bffc675895bd800157fdb50261b7f7ea9fc90" already present on machine openshift-authentication-operator 42m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": dial tcp 172.30.33.148:443: connect: connection refused" to "All is well" openshift-multus 42m Normal Pulled pod/multus-admission-controller-757b6fbf74-mz54v Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 42m Normal Started pod/configure-alertmanager-operator-7b9b57dbdd-xgqtw Started container configure-alertmanager-operator openshift-multus 42m Normal SuccessfulDelete replicaset/multus-admission-controller-6896747cbb Deleted pod: multus-admission-controller-6896747cbb-ljc49 openshift-kube-apiserver-operator 42m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 9; 0 nodes have achieved new revision 10"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 9" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 9; 0 nodes have achieved new revision 10" openshift-multus 42m Normal Killing pod/multus-admission-controller-6896747cbb-ljc49 Stopping container multus-admission-controller openshift-monitoring 42m Normal Pulled pod/configure-alertmanager-operator-7b9b57dbdd-xgqtw Successfully pulled image "quay.io/app-sre/configure-alertmanager-operator@sha256:6ecbda84a8bf59a69d77329a32bf63939018d4ea4899a6c9fe4bde1adbace56e" in 3.577046203s (3.577058452s including waiting) openshift-monitoring 42m Normal Created pod/configure-alertmanager-operator-7b9b57dbdd-xgqtw Created container configure-alertmanager-operator openshift-multus 42m Normal ScalingReplicaSet deployment/multus-admission-controller Scaled up replica set multus-admission-controller-757b6fbf74 to 2 from 1 openshift-multus 42m Normal ScalingReplicaSet deployment/multus-admission-controller Scaled down replica set multus-admission-controller-6896747cbb to 1 from 2 openshift-multus 42m Normal SuccessfulCreate replicaset/multus-admission-controller-757b6fbf74 Created pod: multus-admission-controller-757b6fbf74-5hdn7 openshift-multus 42m Normal Killing pod/multus-admission-controller-6896747cbb-ljc49 Stopping container kube-rbac-proxy openshift-multus 42m Normal Killing pod/multus-admission-controller-6896747cbb-rlm9s Stopping container multus-admission-controller openshift-multus 42m Normal ScalingReplicaSet deployment/multus-admission-controller Scaled down replica set multus-admission-controller-6896747cbb to 0 from 1 openshift-multus 42m Normal Started pod/multus-admission-controller-757b6fbf74-5hdn7 Started container kube-rbac-proxy openshift-multus 42m Normal Pulled pod/multus-admission-controller-757b6fbf74-5hdn7 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-multus 42m Normal Started pod/multus-admission-controller-757b6fbf74-5hdn7 Started container multus-admission-controller openshift-multus 42m Normal Created pod/multus-admission-controller-757b6fbf74-5hdn7 Created container multus-admission-controller openshift-multus 42m Normal AddedInterface pod/multus-admission-controller-757b6fbf74-5hdn7 Add eth0 [10.130.0.65/23] from ovn-kubernetes openshift-multus 42m Normal SuccessfulDelete replicaset/multus-admission-controller-6896747cbb Deleted pod: multus-admission-controller-6896747cbb-rlm9s openshift-multus 42m Normal Pulled pod/multus-admission-controller-757b6fbf74-5hdn7 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c3cca6e2da92a6cd38e7f20f77bffc675895bd800157fdb50261b7f7ea9fc90" already present on 
machine openshift-multus 42m Normal Killing pod/multus-admission-controller-6896747cbb-rlm9s Stopping container kube-rbac-proxy openshift-multus 42m Normal Created pod/multus-admission-controller-757b6fbf74-5hdn7 Created container kube-rbac-proxy openshift-apiserver 42m Warning ProbeError pod/apiserver-7475f65d84-lm7x6 Readiness probe error: Get "https://10.128.0.36:8443/readyz": dial tcp 10.128.0.36:8443: connect: connection refused... openshift-apiserver 42m Warning Unhealthy pod/apiserver-7475f65d84-lm7x6 Readiness probe failed: Get "https://10.128.0.36:8443/readyz": dial tcp 10.128.0.36:8443: connect: connection refused openshift-authentication-operator 42m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "All is well" openshift-authentication-operator 42m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available changed from True to False ("OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.33.148:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)") openshift-authentication-operator 42m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.33.148:443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" openshift-ingress 42m Warning ProbeError pod/router-default-7898b977d4-vhrfb Readiness probe error: HTTP probe failed with statuscode: 500... openshift-etcd-operator 42m Warning EtcdLeaderChangeMetrics deployment/etcd-operator Detected leader change increase of 2.22223045270538 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-140-6.ec2.internal=0.006000,etcd-ip-10-0-197-197.ec2.internal=0.007487,etcd-ip-10-0-239-132.ec2.internal=0.003836. Most often this is as a result of inadequate storage or sometimes due to networking issues. openshift-etcd-operator 42m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 6; 1 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 6; 1 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" openshift-console 42m Normal Pulled pod/downloads-fcdb597fd-tr9zh Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec5351e220112a5b70451310b563175ae713c4d2864765c861b969730515a21b" in 13.410702021s (13.410709441s including waiting) openshift-console 42m Normal Created pod/downloads-fcdb597fd-tr9zh Created container download-server openshift-console 42m Normal Started pod/downloads-fcdb597fd-tr9zh Started container download-server openshift-console 42m Warning ProbeError pod/downloads-fcdb597fd-tr9zh Readiness probe error: Get "http://10.128.0.52:8080/": dial tcp 10.128.0.52:8080: connect: connection refused... 
openshift-console 42m Warning Unhealthy pod/downloads-fcdb597fd-tr9zh Readiness probe failed: Get "http://10.128.0.52:8080/": dial tcp 10.128.0.52:8080: connect: connection refused openshift-kube-apiserver-operator 42m Normal NodeTargetRevisionChanged deployment/kube-apiserver-operator Updating node "ip-10-0-197-197.ec2.internal" from revision 9 to 10 because node ip-10-0-197-197.ec2.internal with revision 9 is the oldest openshift-controller-manager-operator 41m Normal ConfigMapUpdated deployment/openshift-controller-manager-operator Updated ConfigMap/config -n openshift-controller-manager:... openshift-controller-manager-operator 41m Normal ConfigMapUpdated deployment/openshift-controller-manager-operator Updated ConfigMap/config -n openshift-route-controller-manager:... openshift-controller-manager-operator 41m Normal DeploymentUpdated deployment/openshift-controller-manager-operator Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed openshift-controller-manager-operator 41m Normal DeploymentUpdated deployment/openshift-controller-manager-operator Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed openshift-controller-manager 41m Normal ScalingReplicaSet deployment/controller-manager Scaled up replica set controller-manager-66b447958d to 1 from 0 openshift-controller-manager 41m Normal ScalingReplicaSet deployment/controller-manager Scaled down replica set controller-manager-c5c84d6f9 to 2 from 3 openshift-controller-manager-operator 41m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing changed from False to True ("Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8.\nProgressing: deployment/route-controller-manager: observed generation is 6, desired generation is 7.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4.") openshift-controller-manager 41m Normal SuccessfulCreate replicaset/controller-manager-66b447958d Created pod: controller-manager-66b447958d-6mqfl openshift-route-controller-manager 41m Normal SuccessfulCreate replicaset/route-controller-manager-6594987c6f Created pod: route-controller-manager-6594987c6f-dcrpz openshift-route-controller-manager 41m Normal SuccessfulDelete replicaset/route-controller-manager-9b45479c5 Deleted pod: route-controller-manager-9b45479c5-nfwk9 openshift-route-controller-manager 41m Normal ScalingReplicaSet deployment/route-controller-manager Scaled down replica set route-controller-manager-9b45479c5 to 2 from 3 openshift-controller-manager 41m Normal SuccessfulDelete replicaset/controller-manager-c5c84d6f9 Deleted pod: controller-manager-c5c84d6f9-vpk76 openshift-route-controller-manager 41m Normal ScalingReplicaSet deployment/route-controller-manager Scaled up replica set route-controller-manager-6594987c6f to 1 from 0 openshift-kube-apiserver-operator 41m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-10-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 41m Normal Pulled pod/installer-10-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 41m Normal Started 
pod/installer-10-ip-10-0-197-197.ec2.internal Started container installer openshift-kube-apiserver 41m Normal AddedInterface pod/installer-10-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.66/23] from ovn-kubernetes openshift-kube-apiserver 41m Normal Created pod/installer-10-ip-10-0-197-197.ec2.internal Created container installer openshift-console 41m Normal Pulled pod/console-7db75d8d45-dzkhb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" already present on machine openshift-console 41m Normal Started pod/console-7db75d8d45-dzkhb Started container console openshift-console 41m Normal Created pod/console-7db75d8d45-dzkhb Created container console openshift-console 41m Normal AddedInterface pod/console-7db75d8d45-dzkhb Add eth0 [10.128.0.56/23] from ovn-kubernetes openshift-oauth-apiserver 41m Warning ProbeError pod/apiserver-8ddbf84fd-g8ssl Readiness probe error: Get "https://10.130.0.20:8443/readyz": dial tcp 10.130.0.20:8443: connect: connection refused... openshift-oauth-apiserver 41m Warning Unhealthy pod/apiserver-8ddbf84fd-g8ssl Readiness probe failed: Get "https://10.130.0.20:8443/readyz": dial tcp 10.130.0.20:8443: connect: connection refused openshift-etcd 41m Normal Killing pod/etcd-ip-10-0-140-6.ec2.internal Stopping container etcdctl openshift-etcd 41m Normal Killing pod/etcd-ip-10-0-140-6.ec2.internal Stopping container etcd-readyz openshift-etcd 41m Normal StaticPodInstallerCompleted pod/installer-7-ip-10-0-140-6.ec2.internal Successfully installed revision 7 openshift-etcd 41m Normal Killing pod/etcd-ip-10-0-140-6.ec2.internal Stopping container etcd-metrics openshift-etcd 41m Warning ProbeError pod/etcd-guard-ip-10-0-140-6.ec2.internal Readiness probe error: Get "https://10.0.140.6:9980/healthz": dial tcp 10.0.140.6:9980: connect: connection refused... openshift-controller-manager-operator 41m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 7, desired generation is 8.\nProgressing: deployment/route-controller-manager: observed generation is 6, desired generation is 7.\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 3, desired generation is 4." to "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" openshift-ingress 41m Warning Unhealthy pod/router-default-7898b977d4-l6kqr Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-ingress 41m Warning ProbeError pod/router-default-7898b977d4-l6kqr Readiness probe error: HTTP probe failed with statuscode: 500... openshift-ingress 41m Warning Unhealthy pod/router-default-7898b977d4-vhrfb Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-ingress 41m Warning ProbeError pod/router-default-7898b977d4-vhrfb Readiness probe error: HTTP probe failed with statuscode: 500... 
openshift-ingress-operator 41m Normal Admitted ingresscontroller/default ingresscontroller passed validation openshift-authentication-operator 41m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available changed from True to False ("OAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": x509: certificate signed by unknown authority") openshift-authentication-operator 41m Normal ConfigMapUpdated deployment/authentication-operator Updated ConfigMap/oauth-serving-cert -n openshift-config-managed:... openshift-authentication-operator 41m Normal SecretUpdated deployment/authentication-operator Updated Secret/v4-0-config-system-router-certs -n openshift-authentication because it changed openshift-config-managed 41m Normal UpdatedPublishedRouterCertificates secret/router-certs Updated the published router certificates openshift-authentication-operator 41m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": x509: certificate signed by unknown authority" openshift-config-managed 41m Normal UpdatedPublishedRouterCA configmap/default-ingress-cert Updated the published "default-ingress-cert" in "openshift-config-managed" openshift-ingress-operator 41m Normal DeletedDefaultCertificate ingresscontroller/default Deleted default wildcard certificate "router-certs-default" openshift-ingress 41m Normal SuccessfulCreate replicaset/router-default-7cf4c94d4 Created pod: router-default-7cf4c94d4-zs7xj openshift-monitoring 41m Normal Started pod/osd-cluster-ready-thb5j Started container osd-cluster-ready openshift-monitoring 41m Normal Created pod/osd-cluster-ready-thb5j Created container osd-cluster-ready openshift-ingress 41m Normal ScalingReplicaSet deployment/router-default Scaled up replica set router-default-7cf4c94d4 to 2 from 0 openshift-ingress 41m Normal SuccessfulDelete replicaset/router-default-75b548b966 Deleted pod: router-default-75b548b966-bd28g openshift-ingress 41m Normal ScalingReplicaSet deployment/router-default Scaled down replica set router-default-75b548b966 to 0 from 2 openshift-ingress 41m Normal SuccessfulDelete replicaset/router-default-75b548b966 Deleted pod: router-default-75b548b966-br22c openshift-monitoring 41m Normal Pulled pod/osd-cluster-ready-thb5j Container image "quay.io/app-sre/osd-cluster-ready@sha256:f70aa8033565fc73c006acb9199845242b1f729cb5a407b5174cf22656b4e2d5" already present on machine openshift-ingress 41m Normal SuccessfulCreate replicaset/router-default-7cf4c94d4 Created pod: router-default-7cf4c94d4-s4mh5 openshift-kube-scheduler-operator 41m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/revision-status-8 -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 41m Normal ConfigMapUpdated deployment/openshift-kube-scheduler-operator Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler:... 
openshift-oauth-apiserver 41m Normal Started pod/apiserver-74455c7c5-rpzl9 Started container fix-audit-permissions openshift-apiserver 41m Normal Started pod/apiserver-5f568869f-mpswm Started container openshift-apiserver openshift-apiserver 41m Normal Pulled pod/apiserver-5f568869f-mpswm Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-apiserver 41m Normal Created pod/apiserver-5f568869f-mpswm Created container openshift-apiserver openshift-apiserver 41m Normal Created pod/apiserver-5f568869f-mpswm Created container openshift-apiserver-check-endpoints openshift-authentication-operator 41m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5." openshift-apiserver 41m Normal Pulled pod/apiserver-5f568869f-mpswm Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-apiserver 41m Normal Started pod/apiserver-5f568869f-mpswm Started container fix-audit-permissions openshift-apiserver 41m Normal AddedInterface pod/apiserver-5f568869f-mpswm Add eth0 [10.128.0.57/23] from ovn-kubernetes openshift-apiserver 41m Normal Started pod/apiserver-5f568869f-mpswm Started container openshift-apiserver-check-endpoints openshift-apiserver 41m Normal Pulled pod/apiserver-5f568869f-mpswm Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-authentication 41m Normal SuccessfulCreate replicaset/oauth-openshift-58cb97bf44 Created pod: oauth-openshift-58cb97bf44-dtw8g openshift-authentication 41m Normal SuccessfulDelete replicaset/oauth-openshift-6cd75d67b9 Deleted pod: oauth-openshift-6cd75d67b9-hnvl6 openshift-authentication 41m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-58cb97bf44 to 1 from 0 openshift-oauth-apiserver 41m Normal AddedInterface pod/apiserver-74455c7c5-rpzl9 Add eth0 [10.130.0.67/23] from ovn-kubernetes openshift-kube-scheduler-operator 41m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-pod-8 -n openshift-kube-scheduler because it was missing openshift-authentication 41m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-6cd75d67b9 to 1 from 2 openshift-oauth-apiserver 41m Normal Pulled pod/apiserver-74455c7c5-rpzl9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-apiserver 41m Normal Created pod/apiserver-5f568869f-mpswm Created container fix-audit-permissions openshift-oauth-apiserver 41m Normal Created pod/apiserver-74455c7c5-rpzl9 Created 
container fix-audit-permissions openshift-apiserver 41m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-oauth-apiserver 41m Normal Pulled pod/apiserver-74455c7c5-rpzl9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-oauth-apiserver 41m Normal Created pod/apiserver-74455c7c5-rpzl9 Created container oauth-apiserver openshift-oauth-apiserver 41m Normal Started pod/apiserver-74455c7c5-rpzl9 Started container oauth-apiserver openshift-apiserver 41m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 41m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/config-8 -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 41m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/serviceaccount-ca-8 -n openshift-kube-scheduler because it was missing openshift-etcd-operator 41m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "StaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd\" started at 2023-03-21 12:26:37 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-scheduler-operator 41m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-8 -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 41m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/scheduler-kubeconfig-8 -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 41m Normal ConfigMapUpdated deployment/kube-controller-manager-operator Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager:... 
openshift-kube-scheduler-operator 41m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/serving-cert-8 -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 41m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/revision-status-7 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 41m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/localhost-recovery-client-token-8 -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 41m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 8" openshift-kube-scheduler-operator 41m Normal RevisionTriggered deployment/openshift-kube-scheduler-operator new revision 8 triggered by "configmap/serviceaccount-ca has changed" openshift-kube-scheduler-operator 41m Normal RevisionCreate deployment/openshift-kube-scheduler-operator Revision 7 created because configmap/serviceaccount-ca has changed openshift-kube-controller-manager-operator 41m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-manager-pod-7 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 41m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 8" to "NodeControllerDegraded: All master nodes are ready" openshift-apiserver 41m Normal Killing pod/apiserver-7475f65d84-4ncn2 Stopping container openshift-apiserver-check-endpoints openshift-apiserver 41m Normal SuccessfulDelete replicaset/apiserver-7475f65d84 Deleted pod: apiserver-7475f65d84-4ncn2 openshift-apiserver 41m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-7475f65d84 to 1 from 2 openshift-apiserver 41m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-5f568869f to 2 from 1 openshift-apiserver 41m Normal Killing pod/apiserver-7475f65d84-4ncn2 Stopping container openshift-apiserver openshift-apiserver 41m Normal SuccessfulCreate replicaset/apiserver-5f568869f Created pod: apiserver-5f568869f-8zhkc openshift-kube-controller-manager-operator 41m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/cluster-policy-controller-config-7 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 41m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/config-7 -n openshift-kube-controller-manager because it was missing openshift-oauth-apiserver 41m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-74455c7c5 to 2 from 1 openshift-oauth-apiserver 41m Normal Killing pod/apiserver-8ddbf84fd-4jwnk Stopping container oauth-apiserver openshift-kube-apiserver 41m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-cert-syncer openshift-kube-scheduler 41m Normal Pulled pod/revision-pruner-8-ip-10-0-239-132.ec2.internal Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-oauth-apiserver 41m Normal SuccessfulDelete replicaset/apiserver-8ddbf84fd Deleted pod: apiserver-8ddbf84fd-4jwnk openshift-kube-scheduler 41m Normal AddedInterface pod/revision-pruner-8-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.39/23] from ovn-kubernetes openshift-kube-controller-manager-operator 41m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/controller-manager-kubeconfig-7 -n openshift-kube-controller-manager because it was missing openshift-oauth-apiserver 41m Normal SuccessfulCreate replicaset/apiserver-74455c7c5 Created pod: apiserver-74455c7c5-h9ck5 openshift-kube-apiserver 41m Normal StaticPodInstallerCompleted pod/installer-10-ip-10-0-197-197.ec2.internal Successfully installed revision 10 openshift-kube-controller-manager-operator 41m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-cert-syncer-kubeconfig-7 -n openshift-kube-controller-manager because it was missing openshift-oauth-apiserver 41m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-8ddbf84fd to 1 from 2 openshift-apiserver-operator 41m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation" openshift-kube-apiserver 41m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-insecure-readyz openshift-kube-apiserver 41m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-check-endpoints openshift-kube-apiserver 41m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver openshift-kube-apiserver 41m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-cert-regeneration-controller openshift-kube-scheduler 41m Normal Created pod/revision-pruner-8-ip-10-0-239-132.ec2.internal Created container pruner openshift-kube-apiserver 41m Warning ProbeError pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Readiness probe error: HTTP probe failed with statuscode: 500... openshift-kube-controller-manager-operator 41m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/serviceaccount-ca-7 -n openshift-kube-controller-manager because it was missing openshift-authentication-operator 41m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 1/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5." 
openshift-kube-scheduler 41m Normal Started pod/revision-pruner-8-ip-10-0-239-132.ec2.internal Started container pruner openshift-etcd-operator 41m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "StaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd\" started at 2023-03-21 12:26:37 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "StaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd-metrics\" is terminated: Error: :26:55.304Z\",\"caller\":\"zapgrpc/zapgrpc.go:191\",\"msg\":\"[core] grpc: addrConn.createTransport failed to connect to {10.0.140.6:9978 10.0.140.6 0 }. Err: connection error: desc = \\\"transport: Error while dialing dial tcp 10.0.140.6:9978: connect: connection refused\\\"\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:26:55.304Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:26:55.304Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00007d240, TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.468Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.468Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00007d240, IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.468Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.468Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel picks a new address \\\"10.0.140.6:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.468Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00007d240, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.473Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.473Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00007d240, READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.473Z\",\"call\nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcdctl\" is terminated: Error: \nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-scheduler-operator 41m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/revision-pruner-8-ip-10-0-140-6.ec2.internal -n openshift-kube-scheduler because it was missing 
openshift-kube-controller-manager-operator 41m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/service-ca-7 -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager-operator 41m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/recycler-config-7 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler 41m Normal Started pod/revision-pruner-8-ip-10-0-140-6.ec2.internal Started container pruner openshift-kube-scheduler 41m Normal Created pod/revision-pruner-8-ip-10-0-140-6.ec2.internal Created container pruner openshift-kube-controller-manager-operator 41m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/service-account-private-key-7 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler 41m Normal AddedInterface pod/revision-pruner-8-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.58/23] from ovn-kubernetes openshift-kube-scheduler 41m Normal Pulled pod/revision-pruner-8-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-controller-manager-operator 41m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/serving-cert-7 -n openshift-kube-controller-manager because it was missing openshift-kube-apiserver 41m Warning Unhealthy pod/kube-apiserver-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:17697/healthz": dial tcp 10.0.197.197:17697: connect: connection refused openshift-kube-apiserver 41m Warning ProbeError pod/kube-apiserver-ip-10-0-197-197.ec2.internal Readiness probe error: HTTP probe failed with statuscode: 500... openshift-kube-apiserver 41m Warning ProbeError pod/kube-apiserver-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:17697/healthz": dial tcp 10.0.197.197:17697: connect: connection refused... 
openshift-kube-controller-manager-operator 41m Normal RevisionCreate deployment/kube-controller-manager-operator Revision 6 created because configmap/serviceaccount-ca has changed openshift-kube-scheduler-operator 41m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 7; 0 nodes have achieved new revision 8"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7; 0 nodes have achieved new revision 8" openshift-kube-scheduler-operator 41m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/revision-pruner-8-ip-10-0-197-197.ec2.internal -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 41m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "RevisionControllerDegraded: conflicting latestAvailableRevision 7\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-kube-controller-manager-operator 41m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "RevisionControllerDegraded: conflicting latestAvailableRevision 7\nNodeControllerDegraded: All master nodes are ready" openshift-kube-controller-manager-operator 41m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/localhost-recovery-client-token-7 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 41m Normal NodeTargetRevisionChanged deployment/openshift-kube-scheduler-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 7 to 8 because node ip-10-0-239-132.ec2.internal with revision 7 is the oldest openshift-kube-controller-manager-operator 41m Normal RevisionTriggered deployment/kube-controller-manager-operator new revision 7 triggered by "configmap/serviceaccount-ca has changed" openshift-kube-scheduler 41m Normal AddedInterface pod/revision-pruner-8-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.69/23] from ovn-kubernetes openshift-kube-scheduler 41m Normal Created pod/revision-pruner-8-ip-10-0-197-197.ec2.internal Created container pruner openshift-kube-scheduler 41m Normal Started pod/revision-pruner-8-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-scheduler 41m Normal Pulled pod/revision-pruner-8-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 41m Normal AddedInterface pod/installer-8-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.40/23] from ovn-kubernetes openshift-kube-scheduler 41m Normal Pulled pod/installer-8-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 41m Normal Created pod/installer-8-ip-10-0-239-132.ec2.internal Created container installer openshift-kube-scheduler 41m Normal Started 
pod/installer-8-ip-10-0-239-132.ec2.internal Started container installer openshift-etcd-operator 41m Warning EtcdLeaderChangeMetrics deployment/etcd-operator Detected leader change increase of 2.2222222222222223 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-140-6.ec2.internal=0.006040,etcd-ip-10-0-197-197.ec2.internal=0.012130,etcd-ip-10-0-239-132.ec2.internal=0.003889. Most often this is as a result of inadequate storage or sometimes due to networking issues. openshift-kube-controller-manager-operator 41m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 6 to 7 because node ip-10-0-239-132.ec2.internal with revision 6 is the oldest openshift-apiserver 41m Warning ProbeError pod/apiserver-7475f65d84-4ncn2 Readiness probe error: HTTP probe failed with statuscode: 500... openshift-apiserver 41m Warning Unhealthy pod/apiserver-7475f65d84-4ncn2 Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-kube-controller-manager-operator 41m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 6; 0 nodes have achieved new revision 7"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6; 0 nodes have achieved new revision 7" openshift-kube-controller-manager-operator 41m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-7-ip-10-0-239-132.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-etcd 41m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container setup openshift-kube-controller-manager 41m Normal Started pod/installer-7-ip-10-0-239-132.ec2.internal Started container installer openshift-etcd 41m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-kube-controller-manager 41m Normal Created pod/installer-7-ip-10-0-239-132.ec2.internal Created container installer openshift-etcd 41m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-ensure-env-vars openshift-etcd 41m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-ensure-env-vars openshift-etcd 41m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-kube-controller-manager 41m Normal Pulled pod/installer-7-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-etcd 41m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container setup openshift-kube-controller-manager 41m Normal AddedInterface pod/installer-7-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.41/23] from ovn-kubernetes openshift-etcd 41m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-resources-copy openshift-etcd 41m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-resources-copy openshift-etcd-operator 41m Normal 
OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "StaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd-metrics\" is terminated: Error: :26:55.304Z\",\"caller\":\"zapgrpc/zapgrpc.go:191\",\"msg\":\"[core] grpc: addrConn.createTransport failed to connect to {10.0.140.6:9978 10.0.140.6 0 }. Err: connection error: desc = \\\"transport: Error while dialing dial tcp 10.0.140.6:9978: connect: connection refused\\\"\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:26:55.304Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:26:55.304Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00007d240, TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.468Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.468Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00007d240, IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.468Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.468Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel picks a new address \\\"10.0.140.6:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.468Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00007d240, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.473Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.473Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc00007d240, READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:27:06.473Z\",\"call\nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-140-6.ec2.internal container \"etcdctl\" is terminated: Error: \nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd 41m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 41m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd openshift-etcd 41m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcdctl openshift-etcd 41m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container 
etcdctl openshift-etcd 41m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-kube-apiserver-operator 41m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/user-serving-cert-000 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 41m Normal CertificateUpdated pod/kube-apiserver-ip-10-0-140-6.ec2.internal Wrote updated secret: openshift-kube-apiserver/user-serving-cert-000 openshift-etcd 41m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 41m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd openshift-kube-apiserver 41m Normal CertificateUpdated pod/kube-apiserver-ip-10-0-239-132.ec2.internal Wrote updated secret: openshift-kube-apiserver/user-serving-cert-000 openshift-etcd 41m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 41m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-metrics openshift-etcd 41m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-readyz openshift-etcd 41m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 41m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-readyz openshift-etcd 41m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-metrics openshift-kube-apiserver 41m Normal CertificateUpdated pod/kube-apiserver-ip-10-0-140-6.ec2.internal Wrote updated secret: openshift-kube-apiserver/user-serving-cert-001 openshift-kube-apiserver-operator 41m Normal ObservedConfigChanged deployment/kube-apiserver-operator Writing updated observed config:   map[string]any{... openshift-kube-apiserver 41m Normal CertificateUpdated pod/kube-apiserver-ip-10-0-239-132.ec2.internal Wrote updated secret: openshift-kube-apiserver/user-serving-cert-001 openshift-etcd-operator 41m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-apiserver-operator 41m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/user-serving-cert-001 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 40m Normal ConfigMapUpdated deployment/kube-apiserver-operator Updated ConfigMap/config -n openshift-kube-apiserver:... 
openshift-kube-apiserver-operator 40m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 11 triggered by "required configmap/config has changed" openshift-kube-apiserver 40m Normal LeaderElection lease/cert-regeneration-controller-lock ip-10-0-239-132_762fe916-542c-41bc-8c1b-e1ab64cad71b became leader openshift-kube-apiserver-operator 40m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/revision-status-11 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 40m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-pod-11 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 40m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/config-11 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 40m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-11 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 40m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/oauth-metadata-11 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 40m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/bound-sa-token-signing-certs-11 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 40m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/etcd-serving-ca-11 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 40m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:27:14 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:27:15 +0000 UTC is still not ready" openshift-apiserver 40m Warning Unhealthy pod/apiserver-7475f65d84-4ncn2 Readiness probe failed: Get "https://10.129.0.21:8443/readyz": dial tcp 10.129.0.21:8443: connect: connection refused openshift-kube-apiserver-operator 40m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-server-ca-11 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 40m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kubelet-serving-ca-11 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 40m Normal ConfigMapUpdated deployment/kube-apiserver-operator Updated ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver:... 
openshift-kube-apiserver-operator 40m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/sa-token-signing-certs-11 -n openshift-kube-apiserver because it was missing openshift-etcd-operator 40m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 6; 1 nodes are at revision 7" to "NodeInstallerProgressing: 1 nodes are at revision 6; 2 nodes are at revision 7",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 6; 1 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" openshift-etcd-operator 40m Normal NodeCurrentRevisionChanged deployment/etcd-operator Updated node "ip-10-0-140-6.ec2.internal" from revision 6 to 7 because static pod is ready openshift-apiserver 40m Warning ProbeError pod/apiserver-7475f65d84-4ncn2 Readiness probe error: Get "https://10.129.0.21:8443/readyz": dial tcp 10.129.0.21:8443: connect: connection refused... openshift-oauth-apiserver 40m Warning Unhealthy pod/apiserver-8ddbf84fd-4jwnk Readiness probe failed: Get "https://10.128.0.29:8443/readyz": dial tcp 10.128.0.29:8443: connect: connection refused openshift-kube-apiserver-operator 40m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-audit-policies-11 -n openshift-kube-apiserver because it was missing openshift-oauth-apiserver 40m Warning ProbeError pod/apiserver-8ddbf84fd-4jwnk Readiness probe error: Get "https://10.128.0.29:8443/readyz": dial tcp 10.128.0.29:8443: connect: connection refused... 
openshift-authentication-operator 40m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.qeaisrhods-c13.abmw.s1.devshift.org/healthz\": x509: certificate signed by unknown authority" to "All is well" openshift-kube-apiserver-operator 40m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/etcd-client-11 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler 40m Normal StaticPodInstallerCompleted pod/installer-8-ip-10-0-239-132.ec2.internal Successfully installed revision 8 openshift-kube-apiserver-operator 40m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-serving-certkey-11 -n openshift-kube-apiserver because it was missing openshift-etcd-operator 40m Normal NodeTargetRevisionChanged deployment/etcd-operator Updating node "ip-10-0-197-197.ec2.internal" from revision 6 to 7 because node ip-10-0-197-197.ec2.internal with revision 6 is the oldest openshift-kube-apiserver-operator 40m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-client-token-11 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler 40m Normal Killing pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Stopping container kube-scheduler openshift-kube-scheduler 40m Normal Killing pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Stopping container kube-scheduler-cert-syncer openshift-kube-scheduler 40m Normal Killing pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Stopping container kube-scheduler-recovery-controller openshift-etcd-operator 40m Normal PodCreated deployment/etcd-operator Created Pod/installer-7-ip-10-0-197-197.ec2.internal -n openshift-etcd because it was missing openshift-kube-scheduler-operator 40m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:44.003352 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:44.003371 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:44.596800 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:44.596814 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:45.196664 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:45.196683 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:45.796463 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:45.796478 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:46.395921 1 certsync_controller.go:66] Syncing configmaps: 
[]\nStaticPodsDegraded: I0321 12:32:46.395938 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:47.004042 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:47.004059 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:51.843143 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:51.843157 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:55.443308 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:55.443326 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:33:17.861785 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:33:17.861803 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:33:21.465240 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:33:21.465257 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-kube-apiserver-operator 40m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/webhook-authenticator-11 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 40m Normal RevisionCreate deployment/kube-apiserver-operator Revision 10 created because required configmap/config has changed openshift-etcd 40m Normal AddedInterface pod/installer-7-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.70/23] from ovn-kubernetes openshift-kube-apiserver-operator 40m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 11 triggered by "required configmap/kube-apiserver-pod has changed,required configmap/config has changed" openshift-etcd 40m Normal Pulled pod/installer-7-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 40m Normal Started pod/installer-7-ip-10-0-197-197.ec2.internal Started container installer openshift-etcd 40m Normal Created pod/installer-7-ip-10-0-197-197.ec2.internal Created container installer openshift-kube-apiserver-operator 40m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 9; 0 nodes have achieved new revision 10" to "NodeInstallerProgressing: 3 nodes are at revision 9; 0 nodes have achieved new revision 11",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 9; 0 nodes have achieved new revision 10" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 9; 0 nodes have achieved new revision 11" openshift-kube-apiserver-operator 40m Normal ConfigMapUpdated deployment/kube-apiserver-operator Updated ConfigMap/revision-status-11 -n openshift-kube-apiserver:... 
openshift-kube-controller-manager 40m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container kube-controller-manager openshift-kube-apiserver-operator 40m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-11-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 40m Normal ConfigMapUpdated deployment/kube-apiserver-operator Updated ConfigMap/kube-apiserver-pod-11 -n openshift-kube-apiserver:... openshift-kube-controller-manager 40m Normal StaticPodInstallerCompleted pod/installer-7-ip-10-0-239-132.ec2.internal Successfully installed revision 7 openshift-authentication-operator 40m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 4, desired generation is 5." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" openshift-kube-controller-manager 40m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container cluster-policy-controller openshift-kube-controller-manager 40m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container kube-controller-manager-recovery-controller openshift-kube-controller-manager 40m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container kube-controller-manager-cert-syncer openshift-kube-apiserver 40m Normal Pulled pod/revision-pruner-11-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 40m Normal Created pod/revision-pruner-11-ip-10-0-197-197.ec2.internal Created container pruner openshift-kube-apiserver 40m Normal Started pod/revision-pruner-11-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-apiserver 40m Normal AddedInterface pod/revision-pruner-11-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.71/23] from ovn-kubernetes openshift-kube-apiserver-operator 40m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 11\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:27:14 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:27:15 +0000 UTC is still not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:27:14 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:27:15 +0000 
UTC is still not ready" openshift-kube-apiserver-operator 40m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:27:14 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:27:15 +0000 UTC is still not ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 11\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:27:14 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:27:15 +0000 UTC is still not ready" openshift-kube-controller-manager-operator 40m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: 5363 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:32:52.205609 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:32:53.404182 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:32:53.405559 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:32:54.205614 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:32:54.205821 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:32:56.611673 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:32:56.611919 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:33:05.110819 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:05.111093 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:33:22.643151 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:22.643415 1 
certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:33:31.130324 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:31.130569 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-kube-apiserver 40m Normal Created pod/revision-pruner-11-ip-10-0-239-132.ec2.internal Created container pruner openshift-kube-apiserver 40m Normal AddedInterface pod/revision-pruner-11-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.42/23] from ovn-kubernetes openshift-kube-apiserver 40m Normal Pulled pod/revision-pruner-11-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 40m Normal Started pod/revision-pruner-11-ip-10-0-239-132.ec2.internal Started container pruner openshift-kube-scheduler-operator 40m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:44.003352 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:44.003371 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:44.596800 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:44.596814 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:45.196664 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:45.196683 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:45.796463 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:45.796478 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:46.395921 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:46.395938 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:47.004042 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:47.004059 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:51.843143 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:32:51.843157 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:32:55.443308 1 certsync_controller.go:66] Syncing 
configmaps: []\nStaticPodsDegraded: I0321 12:32:55.443326 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:33:17.861785 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:33:17.861803 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:33:21.465240 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:33:21.465257 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-kube-apiserver 40m Normal Started pod/revision-pruner-11-ip-10-0-140-6.ec2.internal Started container pruner openshift-kube-scheduler 40m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler-recovery-controller openshift-kube-scheduler 40m Warning FastControllerResync pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler 40m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-apiserver 40m Normal Pulled pod/revision-pruner-11-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 40m Normal AddedInterface pod/revision-pruner-11-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.59/23] from ovn-kubernetes openshift-kube-apiserver 40m Normal Created pod/revision-pruner-11-ip-10-0-140-6.ec2.internal Created container pruner openshift-kube-scheduler 40m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler-recovery-controller openshift-kube-controller-manager 40m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-controller-manager 40m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager-cert-syncer openshift-kube-controller-manager 40m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container cluster-policy-controller openshift-kube-controller-manager 40m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager-recovery-controller openshift-kube-controller-manager 40m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container cluster-policy-controller openshift-kube-controller-manager 40m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" already present on machine openshift-kube-controller-manager 40m 
Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 40m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager-cert-syncer openshift-kube-controller-manager 40m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager-recovery-controller openshift-kube-controller-manager 40m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager openshift-kube-controller-manager 40m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 40m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager openshift-kube-controller-manager 40m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-239-132.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-kube-controller-manager 40m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 40m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: 5363 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:32:52.205609 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:32:53.404182 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:32:53.405559 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:32:54.205614 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:32:54.205821 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:32:56.611673 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle 
true}]\nStaticPodsDegraded: I0321 12:32:56.611919 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:33:05.110819 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:05.111093 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:33:22.643151 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:22.643415 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:33:31.130324 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:31.130569 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" kube-system 40m Normal LeaderElection lease/kube-controller-manager ip-10-0-197-197_c9a49014-5ddc-42fd-98fd-cf04f176f053 became leader kube-system 40m Normal LeaderElection configmap/kube-controller-manager ip-10-0-197-197_c9a49014-5ddc-42fd-98fd-cf04f176f053 became leader default 40m Normal RegisteredNode node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal event: Registered Node ip-10-0-239-132.ec2.internal in Controller default 40m Normal RegisteredNode node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal event: Registered Node ip-10-0-140-6.ec2.internal in Controller default 40m Normal RegisteredNode node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal event: Registered Node ip-10-0-160-152.ec2.internal in Controller default 40m Normal RegisteredNode node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal event: Registered Node ip-10-0-232-8.ec2.internal in Controller openshift-ingress 40m Normal EnsuringLoadBalancer service/router-default Ensuring load balancer default 40m Normal RegisteredNode node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal event: Registered Node ip-10-0-197-197.ec2.internal in Controller openshift-oauth-apiserver 40m Normal Created pod/apiserver-74455c7c5-h9ck5 Created container fix-audit-permissions openshift-oauth-apiserver 40m Normal Started pod/apiserver-74455c7c5-h9ck5 Started container fix-audit-permissions openshift-oauth-apiserver 40m Normal AddedInterface pod/apiserver-74455c7c5-h9ck5 Add eth0 [10.128.0.60/23] from ovn-kubernetes openshift-oauth-apiserver 40m Normal Pulled pod/apiserver-74455c7c5-h9ck5 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-oauth-apiserver 40m Normal Pulled pod/apiserver-74455c7c5-h9ck5 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-oauth-apiserver 40m Normal Started pod/apiserver-74455c7c5-h9ck5 Started 
container oauth-apiserver openshift-oauth-apiserver 40m Normal Created pod/apiserver-74455c7c5-h9ck5 Created container oauth-apiserver openshift-ingress 40m Normal EnsuredLoadBalancer service/router-default Ensured load balancer openshift-kube-apiserver 40m Normal Pulled pod/installer-11-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 40m Normal AddedInterface pod/installer-11-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.72/23] from ovn-kubernetes openshift-kube-apiserver-operator 40m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-11-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 40m Normal Created pod/installer-11-ip-10-0-197-197.ec2.internal Created container installer openshift-kube-apiserver 40m Normal Started pod/installer-11-ip-10-0-197-197.ec2.internal Started container installer openshift-cluster-storage-operator 40m Normal LeaderElection lease/snapshot-controller-leader csi-snapshot-controller-f58c44499-rnqw9 became leader default 40m Normal NodeHasSufficientPID node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal status is now: NodeHasSufficientPID default 40m Normal NodeHasSufficientMemory node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal status is now: NodeHasSufficientMemory openshift-oauth-apiserver 40m Normal SuccessfulCreate replicaset/apiserver-74455c7c5 Created pod: apiserver-74455c7c5-tqs7k openshift-oauth-apiserver 40m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-8ddbf84fd to 0 from 1 default 40m Normal NodeHasNoDiskPressure node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal status is now: NodeHasNoDiskPressure openshift-oauth-apiserver 40m Normal SuccessfulDelete replicaset/apiserver-8ddbf84fd Deleted pod: apiserver-8ddbf84fd-7qf7p openshift-oauth-apiserver 40m Normal Killing pod/apiserver-8ddbf84fd-7qf7p Stopping container oauth-apiserver default 40m Normal Starting node/ip-10-0-187-75.ec2.internal Starting kubelet. 
openshift-oauth-apiserver 40m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-74455c7c5 to 3 from 2 openshift-authentication-operator 40m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: 2/3 pods have been updated to the latest generation\nOAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" openshift-ovn-kubernetes 40m Normal SuccessfulCreate daemonset/ovnkube-node Created pod: ovnkube-node-zzdfn openshift-machine-config-operator 40m Normal SuccessfulCreate daemonset/machine-config-daemon Created pod: machine-config-daemon-vlfmm openshift-image-registry 40m Normal SuccessfulCreate daemonset/node-ca Created pod: node-ca-5ldj8 openshift-cluster-node-tuning-operator 40m Normal SuccessfulCreate daemonset/tuned Created pod: tuned-9gtgt openshift-dns 40m Normal SuccessfulCreate daemonset/node-resolver Created pod: node-resolver-qqhl6 openshift-multus 40m Normal SuccessfulCreate daemonset/multus Created pod: multus-xqcfd openshift-multus 40m Normal SuccessfulCreate daemonset/multus-additional-cni-plugins Created pod: multus-additional-cni-plugins-4qmk6 default 40m Warning ErrorReconcilingNode node/ip-10-0-187-75.ec2.internal nodeAdd: error adding node "ip-10-0-187-75.ec2.internal": could not find "k8s.ovn.org/node-subnets" annotation default 40m Normal NodeAllocatableEnforced node/ip-10-0-187-75.ec2.internal Updated Node Allocatable limit across pods openshift-multus 40m Normal SuccessfulCreate daemonset/network-metrics-daemon Created pod: network-metrics-daemon-lbxjr openshift-network-diagnostics 40m Normal SuccessfulCreate daemonset/network-check-target Created pod: network-check-target-v468t openshift-etcd-operator 40m Warning EtcdLeaderChangeMetrics deployment/etcd-operator Detected leader change increase of 2.2222222222222223 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-140-6.ec2.internal=0.004660,etcd-ip-10-0-197-197.ec2.internal=0.012130,etcd-ip-10-0-239-132.ec2.internal=0.003952. Most often this is as a result of inadequate storage or sometimes due to networking issues. 
openshift-monitoring 40m Normal SuccessfulCreate daemonset/node-exporter Created pod: node-exporter-4g9rl openshift-cluster-csi-drivers 40m Normal SuccessfulCreate daemonset/aws-ebs-csi-driver-node Created pod: aws-ebs-csi-driver-node-s4chb openshift-monitoring 40m Normal SuccessfulCreate daemonset/sre-dns-latency-exporter Created pod: sre-dns-latency-exporter-hm6bk openshift-machine-config-operator 40m Normal Pulled pod/machine-config-daemon-vlfmm Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-multus 40m Normal Pulling pod/multus-xqcfd Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" openshift-network-diagnostics 40m Warning ErrorUpdatingResource pod/network-check-target-v468t addLogicalPort failed for openshift-network-diagnostics/network-check-target-v468t: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-187-75.ec2.internal" openshift-monitoring 40m Warning ErrorUpdatingResource pod/sre-dns-latency-exporter-hm6bk addLogicalPort failed for openshift-monitoring/sre-dns-latency-exporter-hm6bk: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-187-75.ec2.internal" openshift-cluster-csi-drivers 40m Normal Pulling pod/aws-ebs-csi-driver-node-s4chb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" openshift-monitoring 40m Normal Pulling pod/node-exporter-4g9rl Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" openshift-multus 40m Normal Pulling pod/multus-additional-cni-plugins-4qmk6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" openshift-multus 40m Warning ErrorUpdatingResource pod/network-metrics-daemon-lbxjr addLogicalPort failed for openshift-multus/network-metrics-daemon-lbxjr: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-187-75.ec2.internal" openshift-dns 40m Normal Pulling pod/node-resolver-qqhl6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" openshift-image-registry 40m Normal Pulling pod/node-ca-5ldj8 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-ovn-kubernetes 40m Normal Pulling pod/ovnkube-node-zzdfn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-cluster-node-tuning-operator 40m Normal Pulling pod/tuned-9gtgt Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" openshift-kube-controller-manager-operator 40m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 6; 0 nodes have achieved new revision 7" to "NodeInstallerProgressing: 2 nodes are at revision 6; 1 nodes are at revision 7",Available message changed from 
"StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 6; 0 nodes have achieved new revision 7" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 6; 1 nodes are at revision 7" openshift-machine-config-operator 40m Normal Pulling pod/machine-config-daemon-vlfmm Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" openshift-machine-config-operator 40m Normal Created pod/machine-config-daemon-vlfmm Created container machine-config-daemon openshift-machine-config-operator 40m Normal Started pod/machine-config-daemon-vlfmm Started container machine-config-daemon default 40m Normal RegisteredNode node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal event: Registered Node ip-10-0-187-75.ec2.internal in Controller openshift-kube-controller-manager-operator 40m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 6 to 7 because static pod is ready openshift-etcd 40m Normal Killing pod/etcd-ip-10-0-197-197.ec2.internal Stopping container etcd-readyz openshift-machine-api 40m Normal DetectedUnhealthy machine/qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2 Machine openshift-machine-api/srep-infra-healthcheck/qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2/ has unhealthy node openshift-etcd 40m Normal Killing pod/etcd-ip-10-0-197-197.ec2.internal Stopping container etcdctl openshift-etcd 40m Normal Killing pod/etcd-ip-10-0-197-197.ec2.internal Stopping container etcd openshift-monitoring 40m Normal Pulled pod/node-exporter-4g9rl Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" in 3.554047157s (3.554059149s including waiting) openshift-etcd 40m Normal StaticPodInstallerCompleted pod/installer-7-ip-10-0-197-197.ec2.internal Successfully installed revision 7 openshift-etcd 40m Warning ProbeError pod/etcd-guard-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:9980/healthz": dial tcp 10.0.197.197:9980: connect: connection refused... 
openshift-monitoring 39m Normal Started pod/node-exporter-4g9rl Started container init-textfile openshift-monitoring 39m Normal Created pod/node-exporter-4g9rl Created container init-textfile openshift-cloud-controller-manager-operator 39m Normal LeaderElection lease/cluster-cloud-config-sync-leader ip-10-0-140-6_d787524c-a97d-4b9d-8c4c-8e60daaa5739 became leader openshift-monitoring 39m Normal SuccessfulCreate daemonset/sre-dns-latency-exporter Created pod: sre-dns-latency-exporter-v8kzl openshift-machine-config-operator 39m Normal SuccessfulCreate daemonset/machine-config-daemon Created pod: machine-config-daemon-tpglq openshift-cluster-node-tuning-operator 39m Normal SuccessfulCreate daemonset/tuned Created pod: tuned-nhvkp openshift-image-registry 39m Normal SuccessfulCreate daemonset/node-ca Created pod: node-ca-fg6h6 openshift-network-diagnostics 39m Warning ErrorUpdatingResource pod/network-check-target-trrh7 addLogicalPort failed for openshift-network-diagnostics/network-check-target-trrh7: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-195-121.ec2.internal" openshift-ovn-kubernetes 39m Normal SuccessfulCreate daemonset/ovnkube-node Created pod: ovnkube-node-6jsx2 openshift-multus 39m Normal SuccessfulCreate daemonset/network-metrics-daemon Created pod: network-metrics-daemon-qfgm8 openshift-machine-api 39m Normal DetectedUnhealthy machine/qeaisrhods-c13-28wr5-infra-us-east-1a-qww78 Machine openshift-machine-api/srep-infra-healthcheck/qeaisrhods-c13-28wr5-infra-us-east-1a-qww78/ip-10-0-187-75.ec2.internal has unhealthy node ip-10-0-187-75.ec2.internal openshift-dns 39m Normal SuccessfulCreate daemonset/node-resolver Created pod: node-resolver-njmd5 openshift-multus 39m Normal SuccessfulCreate daemonset/multus Created pod: multus-db5qv openshift-monitoring 39m Warning ErrorUpdatingResource pod/sre-dns-latency-exporter-v8kzl addLogicalPort failed for openshift-monitoring/sre-dns-latency-exporter-v8kzl: timed out waiting for logical switch in logical switch cache "ip-10-0-195-121.ec2.internal" subnet: error getting logical switch ip-10-0-195-121.ec2.internal: switch not in logical switch cache default 39m Normal NodeHasSufficientMemory node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal status is now: NodeHasSufficientMemory openshift-multus 39m Warning ErrorUpdatingResource pod/network-metrics-daemon-qfgm8 addLogicalPort failed for openshift-multus/network-metrics-daemon-qfgm8: timed out waiting for logical switch in logical switch cache "ip-10-0-195-121.ec2.internal" subnet: error getting logical switch ip-10-0-195-121.ec2.internal: switch not in logical switch cache openshift-monitoring 39m Normal SuccessfulCreate daemonset/node-exporter Created pod: node-exporter-sn6ks default 39m Warning ErrorReconcilingNode node/ip-10-0-195-121.ec2.internal nodeAdd: error adding node "ip-10-0-195-121.ec2.internal": could not find "k8s.ovn.org/node-subnets" annotation openshift-multus 39m Warning ErrorUpdatingResource pod/network-metrics-daemon-qfgm8 addLogicalPort failed for openshift-multus/network-metrics-daemon-qfgm8: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-195-121.ec2.internal" openshift-monitoring 39m Warning ErrorUpdatingResource pod/sre-dns-latency-exporter-v8kzl addLogicalPort failed for openshift-monitoring/sre-dns-latency-exporter-v8kzl: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node 
"ip-10-0-195-121.ec2.internal" openshift-cluster-csi-drivers 39m Normal SuccessfulCreate daemonset/aws-ebs-csi-driver-node Created pod: aws-ebs-csi-driver-node-r2n4w openshift-multus 39m Normal SuccessfulCreate daemonset/multus-additional-cni-plugins Created pod: multus-additional-cni-plugins-x8r6f openshift-network-diagnostics 39m Normal SuccessfulCreate daemonset/network-check-target Created pod: network-check-target-trrh7 default 39m Normal NodeHasNoDiskPressure node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal status is now: NodeHasNoDiskPressure openshift-machine-api 39m Normal DetectedUnhealthy machine/qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2 Machine openshift-machine-api/srep-infra-healthcheck/qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2/ip-10-0-195-121.ec2.internal has unhealthy node ip-10-0-195-121.ec2.internal openshift-image-registry 39m Normal Pulling pod/node-ca-fg6h6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-cluster-node-tuning-operator 39m Normal Pulling pod/tuned-nhvkp Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" openshift-multus 39m Normal Pulling pod/multus-db5qv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" openshift-machine-config-operator 39m Normal Pulling pod/machine-config-daemon-tpglq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" openshift-machine-config-operator 39m Warning Failed pod/machine-config-daemon-tpglq Error: services have not yet been read at least once, cannot construct envvars openshift-monitoring 39m Normal Pulling pod/node-exporter-sn6ks Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" openshift-kube-controller-manager-operator 39m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-140-6.ec2.internal" from revision 6 to 7 because node ip-10-0-140-6.ec2.internal with revision 6 is the oldest openshift-ovn-kubernetes 39m Normal Pulling pod/ovnkube-node-6jsx2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-multus 39m Normal Pulling pod/multus-additional-cni-plugins-x8r6f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" openshift-dns 39m Normal Pulling pod/node-resolver-njmd5 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" openshift-cluster-csi-drivers 39m Normal Pulling pod/aws-ebs-csi-driver-node-r2n4w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" openshift-ingress 39m Normal UpdatedLoadBalancer service/router-default Updated load balancer with new hosts openshift-cloud-controller-manager-operator 39m Normal LeaderElection lease/cluster-cloud-controller-manager-leader ip-10-0-140-6_812cbf07-3423-437e-a1d9-431a8e116003 became leader openshift-monitoring 39m Normal Pulled pod/node-exporter-4g9rl Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine default 39m Normal RegisteredNode node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal event: Registered Node ip-10-0-195-121.ec2.internal in Controller openshift-kube-controller-manager-operator 39m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-7-ip-10-0-140-6.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager 39m Normal Created pod/installer-7-ip-10-0-140-6.ec2.internal Created container installer openshift-kube-controller-manager 39m Normal Pulled pod/installer-7-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 39m Normal Started pod/installer-7-ip-10-0-140-6.ec2.internal Started container installer openshift-kube-controller-manager 39m Normal AddedInterface pod/installer-7-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.61/23] from ovn-kubernetes openshift-console-operator 39m Warning FastControllerResync deployment/console-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-console-operator 39m Normal ConfigMapUpdated deployment/console-operator Updated ConfigMap/default-ingress-cert -n openshift-console:... openshift-console-operator 39m Warning FastControllerResync deployment/console-operator Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling openshift-console-operator 39m Warning FastControllerResync deployment/console-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-console-operator 39m Normal LeaderElection configmap/console-operator-lock console-operator-57cbc6b88f-tbq55_a617610f-7623-439a-b447-593fa5e34e19 became leader openshift-console-operator 39m Normal LeaderElection lease/console-operator-lock console-operator-57cbc6b88f-tbq55_a617610f-7623-439a-b447-593fa5e34e19 became leader openshift-console-operator 39m Normal ConfigMapUpdated deployment/console-operator Updated ConfigMap/oauth-serving-cert -n openshift-console:... 
openshift-monitoring 39m Warning FailedMount pod/sre-dns-latency-exporter-hm6bk MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-monitoring"/"sre-dns-latency-exporter-trusted-ca-bundle" not registered openshift-kube-scheduler-operator 39m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-8-ip-10-0-239-132.ec2.internal -n openshift-kube-scheduler because it was missing openshift-monitoring 39m Warning FailedMount pod/sre-dns-latency-exporter-hm6bk MountVolume.SetUp failed for volume "monitor-volume" : object "openshift-monitoring"/"sre-dns-latency-exporter-code" not registered openshift-kube-scheduler 39m Normal Pulled pod/installer-8-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 39m Normal Started pod/installer-8-ip-10-0-239-132.ec2.internal Started container installer openshift-kube-scheduler 39m Normal AddedInterface pod/installer-8-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.43/23] from ovn-kubernetes openshift-kube-scheduler 39m Normal Created pod/installer-8-ip-10-0-239-132.ec2.internal Created container installer openshift-console 39m Normal Killing pod/console-7db75d8d45-dzkhb Stopping container console openshift-console 39m Normal SuccessfulDelete replicaset/console-7db75d8d45 Deleted pod: console-7db75d8d45-dzkhb openshift-console 39m Normal SuccessfulCreate replicaset/console-65cc7f8b45 Created pod: console-65cc7f8b45-drq2q openshift-console 39m Normal SuccessfulCreate replicaset/console-65cc7f8b45 Created pod: console-65cc7f8b45-md5n8 openshift-console 39m Normal ScalingReplicaSet deployment/console Scaled down replica set console-7db75d8d45 to 1 from 2 openshift-console 39m Normal ScalingReplicaSet deployment/console Scaled up replica set console-65cc7f8b45 to 2 openshift-console-operator 39m Normal DeploymentUpdated deployment/console-operator Updated Deployment.apps/console -n openshift-console because it changed openshift-console-operator 39m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: Working toward version 4.13.0-rc.0, 1 replicas available" to "SyncLoopRefreshProgressing: Changes made during sync updates, additional sync expected." 
openshift-machine-config-operator 39m Normal Pulled pod/machine-config-daemon-vlfmm Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" in 20.007743583s (20.00775207s including waiting) openshift-cluster-node-tuning-operator 39m Normal Pulled pod/tuned-9gtgt Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" in 21.022879028s (21.02288648s including waiting) openshift-multus 39m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" in 21.040551664s (21.04055757s including waiting) openshift-cluster-csi-drivers 39m Normal Pulled pod/aws-ebs-csi-driver-node-s4chb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" in 21.015068448s (21.015073928s including waiting) openshift-image-registry 39m Normal Pulled pod/node-ca-5ldj8 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 20.78106445s (20.781082218s including waiting) openshift-etcd 39m Normal AddedInterface pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.44/23] from ovn-kubernetes openshift-etcd 39m Normal Pulled pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd-operator 39m Normal PodCreated deployment/etcd-operator Created Pod/revision-pruner-7-ip-10-0-239-132.ec2.internal -n openshift-etcd because it was missing openshift-monitoring 39m Warning NetworkNotReady pod/sre-dns-latency-exporter-hm6bk network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
openshift-network-diagnostics 39m Warning FailedMount pod/network-check-target-trrh7 MountVolume.SetUp failed for volume "kube-api-access-vfblt" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] openshift-etcd 39m Normal Created pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Created container pruner openshift-etcd 39m Normal Started pod/revision-pruner-7-ip-10-0-239-132.ec2.internal Started container pruner openshift-multus 39m Warning FailedMount pod/network-metrics-daemon-qfgm8 MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered openshift-monitoring 39m Warning FailedMount pod/sre-dns-latency-exporter-v8kzl MountVolume.SetUp failed for volume "monitor-volume" : object "openshift-monitoring"/"sre-dns-latency-exporter-code" not registered openshift-monitoring 39m Warning FailedMount pod/sre-dns-latency-exporter-v8kzl MountVolume.SetUp failed for volume "trusted-ca-bundle" : object "openshift-monitoring"/"sre-dns-latency-exporter-trusted-ca-bundle" not registered openshift-cluster-node-tuning-operator 39m Normal Started pod/tuned-9gtgt Started container tuned openshift-monitoring 39m Normal Started pod/node-exporter-4g9rl Started container node-exporter openshift-dns 39m Normal Pulled pod/node-resolver-qqhl6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" in 24.807360175s (24.807366054s including waiting) openshift-monitoring 39m Normal Pulling pod/node-exporter-4g9rl Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-cluster-node-tuning-operator 39m Normal Created pod/tuned-9gtgt Created container tuned openshift-machine-config-operator 39m Normal Started pod/machine-config-daemon-vlfmm Started container oauth-proxy openshift-machine-config-operator 39m Normal Created pod/machine-config-daemon-vlfmm Created container oauth-proxy openshift-ovn-kubernetes 39m Normal Pulled pod/ovnkube-node-zzdfn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 24.983512619s (24.98352904s including waiting) openshift-ovn-kubernetes 39m Normal Pulled pod/ovnkube-node-zzdfn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-multus 39m Normal Pulled pod/multus-xqcfd Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" in 24.817281874s (24.817288157s including waiting) openshift-monitoring 39m Normal Created pod/node-exporter-4g9rl Created container node-exporter openshift-multus 39m Normal Pulling pod/multus-additional-cni-plugins-4qmk6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" openshift-image-registry 39m Normal Created pod/node-ca-5ldj8 Created container node-ca openshift-multus 39m Normal Started pod/multus-additional-cni-plugins-4qmk6 Started container egress-router-binary-copy openshift-console-operator 39m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Progressing message changed from 
"SyncLoopRefreshProgressing: Changes made during sync updates, additional sync expected." to "SyncLoopRefreshProgressing: Working toward version 4.13.0-rc.0, 1 replicas available" openshift-ovn-kubernetes 39m Normal Created pod/ovnkube-node-zzdfn Created container ovn-acl-logging openshift-ovn-kubernetes 39m Normal Started pod/ovnkube-node-zzdfn Started container ovn-acl-logging openshift-multus 39m Normal Created pod/multus-xqcfd Created container kube-multus openshift-multus 39m Normal Started pod/multus-xqcfd Started container kube-multus openshift-ovn-kubernetes 39m Normal Pulling pod/ovnkube-node-zzdfn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-cluster-csi-drivers 39m Normal Created pod/aws-ebs-csi-driver-node-s4chb Created container csi-driver openshift-cluster-csi-drivers 39m Normal Pulling pod/aws-ebs-csi-driver-node-s4chb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" openshift-cluster-csi-drivers 39m Normal Started pod/aws-ebs-csi-driver-node-s4chb Started container csi-driver openshift-multus 39m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container egress-router-binary-copy openshift-dns 39m Normal Created pod/node-resolver-qqhl6 Created container dns-node-resolver openshift-image-registry 39m Normal Started pod/node-ca-5ldj8 Started container node-ca openshift-dns 39m Normal Started pod/node-resolver-qqhl6 Started container dns-node-resolver openshift-monitoring 39m Normal Pulled pod/node-exporter-4g9rl Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 1.595876789s (1.595890109s including waiting) openshift-machine-config-operator 39m Normal Pulled pod/machine-config-daemon-tpglq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" in 17.515359003s (17.515365056s including waiting) openshift-cluster-csi-drivers 39m Normal Pulled pod/aws-ebs-csi-driver-node-s4chb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" in 1.501753884s (1.501765415s including waiting) openshift-ovn-kubernetes 39m Normal Started pod/ovnkube-node-zzdfn Started container kube-rbac-proxy openshift-ovn-kubernetes 39m Normal Pulled pod/ovnkube-node-zzdfn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 39m Normal Created pod/node-exporter-4g9rl Created container kube-rbac-proxy openshift-monitoring 39m Normal Started pod/node-exporter-4g9rl Started container kube-rbac-proxy openshift-ovn-kubernetes 39m Normal Pulled pod/ovnkube-node-zzdfn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 1.376443755s (1.376463663s including waiting) openshift-ovn-kubernetes 39m Normal Started pod/ovnkube-node-zzdfn Started container kube-rbac-proxy-ovn-metrics openshift-monitoring 39m Normal Pulled pod/node-exporter-sn6ks Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" in 17.475516302s (17.475523164s 
including waiting) openshift-ovn-kubernetes 39m Normal Pulled pod/ovnkube-node-zzdfn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-etcd-operator 39m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "StaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd\" started at 2023-03-21 12:28:50 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-ovn-kubernetes 39m Normal Created pod/ovnkube-node-zzdfn Created container kube-rbac-proxy openshift-ovn-kubernetes 39m Normal Created pod/ovnkube-node-zzdfn Created container kube-rbac-proxy-ovn-metrics openshift-cluster-csi-drivers 39m Normal Created pod/aws-ebs-csi-driver-node-s4chb Created container csi-node-driver-registrar openshift-cluster-csi-drivers 39m Normal Pulling pod/aws-ebs-csi-driver-node-s4chb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" openshift-multus 39m Normal Pulled pod/multus-additional-cni-plugins-x8r6f Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" in 18.448632968s (18.44863968s including waiting) openshift-cluster-csi-drivers 39m Normal Pulled pod/aws-ebs-csi-driver-node-r2n4w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" in 18.43782873s (18.437838669s including waiting) openshift-image-registry 39m Normal Pulled pod/node-ca-fg6h6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 18.42988561s (18.429891968s including waiting) default 39m Warning ErrorReconcilingNode node/ip-10-0-187-75.ec2.internal [k8s.ovn.org/node-chassis-id annotation not found for node ip-10-0-187-75.ec2.internal, macAddress annotation not found for node "ip-10-0-187-75.ec2.internal" , k8s.ovn.org/l3-gateway-config annotation not found for node "ip-10-0-187-75.ec2.internal"] openshift-cluster-csi-drivers 39m Normal Started pod/aws-ebs-csi-driver-node-s4chb Started container csi-node-driver-registrar openshift-ovn-kubernetes 39m Normal Created pod/ovnkube-node-zzdfn Created container ovnkube-node openshift-ovn-kubernetes 39m Normal Started pod/ovnkube-node-zzdfn Started container ovnkube-node openshift-cluster-csi-drivers 39m Normal Pulled pod/aws-ebs-csi-driver-node-s4chb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" in 970.678465ms (970.686659ms including waiting) openshift-cluster-csi-drivers 39m Normal Created pod/aws-ebs-csi-driver-node-s4chb Created container csi-liveness-probe openshift-ovn-kubernetes 39m Normal Started pod/ovnkube-node-zzdfn Started container ovn-controller openshift-ovn-kubernetes 39m Normal Created pod/ovnkube-node-zzdfn Created container ovn-controller 
openshift-ovn-kubernetes 39m Normal Pulled pod/ovnkube-node-zzdfn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-cluster-csi-drivers 39m Normal Started pod/aws-ebs-csi-driver-node-s4chb Started container csi-liveness-probe default 39m Warning ResolutionFailed namespace/openshift-custom-domains-operator constraints not satisfiable: subscription custom-domains-operator exists, no operators found from catalog custom-domains-operator-registry in namespace openshift-custom-domains-operator referenced by subscription custom-domains-operator default 39m Normal ConfigDriftMonitorStarted node/ip-10-0-187-75.ec2.internal Config Drift Monitor started, watching against rendered-worker-65a660c5b4cafef14c5770efedbee76c default 39m Normal NodeDone node/ip-10-0-187-75.ec2.internal Setting node ip-10-0-187-75.ec2.internal, currentConfig rendered-worker-65a660c5b4cafef14c5770efedbee76c to Done openshift-network-diagnostics 39m Warning NetworkNotReady pod/network-check-target-v468t network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? openshift-multus 39m Warning NetworkNotReady pod/network-metrics-daemon-lbxjr network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? default 39m Normal Uncordon node/ip-10-0-187-75.ec2.internal Update completed for config rendered-worker-65a660c5b4cafef14c5770efedbee76c and node has been uncordoned openshift-cluster-node-tuning-operator 39m Normal Pulled pod/tuned-nhvkp Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" in 21.441810489s (21.441836234s including waiting) openshift-dns 39m Normal Pulled pod/node-resolver-njmd5 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" in 21.475607782s (21.475614046s including waiting) openshift-monitoring 39m Normal TaintManagerEviction pod/prometheus-operator-admission-webhook-5c9b9d98cc-nznt8 Cancelling deletion of Pod openshift-monitoring/prometheus-operator-admission-webhook-5c9b9d98cc-nznt8 openshift-multus 39m Normal Pulled pod/multus-db5qv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" in 21.458012512s (21.458018214s including waiting) openshift-monitoring 39m Warning NetworkNotReady pod/sre-dns-latency-exporter-v8kzl network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? 
openshift-ovn-kubernetes 39m Normal Pulled pod/ovnkube-node-6jsx2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 21.579715109s (21.579736441s including waiting) openshift-multus 39m Warning FailedMount pod/network-metrics-daemon-lbxjr MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered openshift-ingress-canary 39m Normal SuccessfulCreate daemonset/ingress-canary Created pod: ingress-canary-zwpz2 openshift-network-diagnostics 39m Warning FailedMount pod/network-check-target-v468t MountVolume.SetUp failed for volume "kube-api-access-xch4r" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] openshift-ingress 39m Normal TaintManagerEviction pod/router-default-7cf4c94d4-s4mh5 Cancelling deletion of Pod openshift-ingress/router-default-7cf4c94d4-s4mh5 default 39m Normal NodeReady node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal status is now: NodeReady openshift-monitoring 39m Normal AddedInterface pod/prometheus-operator-admission-webhook-5c9b9d98cc-nznt8 Add eth0 [10.129.2.7/23] from ovn-kubernetes openshift-monitoring 39m Normal Started pod/node-exporter-sn6ks Started container init-textfile openshift-ingress-canary 39m Normal AddedInterface pod/ingress-canary-zwpz2 Add eth0 [10.129.2.9/23] from ovn-kubernetes openshift-cluster-csi-drivers 39m Normal Created pod/aws-ebs-csi-driver-node-r2n4w Created container csi-driver openshift-ingress-canary 39m Normal Pulling pod/ingress-canary-zwpz2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" openshift-cluster-csi-drivers 39m Normal Started pod/aws-ebs-csi-driver-node-r2n4w Started container csi-driver openshift-cluster-csi-drivers 39m Normal Pulling pod/aws-ebs-csi-driver-node-r2n4w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" openshift-kube-apiserver 39m Normal StaticPodInstallerCompleted pod/installer-11-ip-10-0-197-197.ec2.internal Successfully installed revision 11 openshift-machine-config-operator 39m Normal Created pod/machine-config-daemon-tpglq Created container machine-config-daemon openshift-multus 39m Normal Created pod/multus-db5qv Created container kube-multus openshift-ovn-kubernetes 39m Normal Pulling pod/ovnkube-node-6jsx2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-machine-config-operator 39m Normal Pulled pod/machine-config-daemon-tpglq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-multus 39m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container egress-router-binary-copy openshift-cluster-node-tuning-operator 39m Normal Created pod/tuned-nhvkp Created container tuned openshift-ovn-kubernetes 39m Normal Created pod/ovnkube-node-6jsx2 Created container ovn-acl-logging openshift-monitoring 39m Normal Pulling pod/prometheus-operator-admission-webhook-5c9b9d98cc-nznt8 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e2218fd1d860bdb72a28d8fc34e1d5e7c3674bf1d0005583d70800dcd79d2" 
openshift-machine-config-operator 39m Normal Started pod/machine-config-daemon-tpglq Started container machine-config-daemon openshift-multus 39m Normal Started pod/multus-additional-cni-plugins-x8r6f Started container egress-router-binary-copy openshift-ovn-kubernetes 39m Normal Pulled pod/ovnkube-node-6jsx2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-monitoring 39m Normal Pulled pod/node-exporter-sn6ks Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine openshift-monitoring 39m Normal Created pod/node-exporter-sn6ks Created container init-textfile openshift-ovn-kubernetes 39m Normal Started pod/ovnkube-node-6jsx2 Started container ovn-acl-logging openshift-monitoring 39m Normal Created pod/node-exporter-sn6ks Created container node-exporter openshift-dns 39m Normal Created pod/node-resolver-njmd5 Created container dns-node-resolver openshift-multus 39m Normal Pulling pod/multus-additional-cni-plugins-x8r6f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" openshift-monitoring 39m Normal Started pod/node-exporter-sn6ks Started container node-exporter openshift-dns 39m Normal Started pod/node-resolver-njmd5 Started container dns-node-resolver openshift-machine-config-operator 39m Normal Created pod/machine-config-daemon-tpglq Created container oauth-proxy openshift-multus 39m Normal Pulling pod/multus-additional-cni-plugins-4qmk6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" openshift-ingress 39m Normal Pulling pod/router-default-7cf4c94d4-s4mh5 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd" openshift-monitoring 39m Normal Pulling pod/node-exporter-sn6ks Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-machine-config-operator 39m Normal Started pod/machine-config-daemon-tpglq Started container oauth-proxy openshift-image-registry 39m Normal Started pod/node-ca-fg6h6 Started container node-ca openshift-image-registry 39m Normal Created pod/node-ca-fg6h6 Created container node-ca openshift-multus 39m Normal Started pod/multus-db5qv Started container kube-multus openshift-cluster-node-tuning-operator 39m Normal Started pod/tuned-nhvkp Started container tuned openshift-multus 39m Normal Started pod/multus-additional-cni-plugins-4qmk6 Started container cni-plugins openshift-multus 39m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container cni-plugins openshift-ingress 39m Normal AddedInterface pod/router-default-7cf4c94d4-s4mh5 Add eth0 [10.129.2.8/23] from ovn-kubernetes openshift-multus 39m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" in 5.648606554s (5.64862022s including waiting) openshift-cluster-csi-drivers 39m Normal Pulled pod/aws-ebs-csi-driver-node-r2n4w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" in 1.494993318s 
(1.494999555s including waiting) openshift-multus 39m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" in 1.095256562s (1.095265097s including waiting) openshift-ovn-kubernetes 39m Normal Pulled pod/ovnkube-node-6jsx2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-cluster-csi-drivers 39m Normal Pulling pod/aws-ebs-csi-driver-node-r2n4w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" openshift-monitoring 39m Normal Pulled pod/node-exporter-sn6ks Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 1.835476646s (1.835485118s including waiting) openshift-multus 39m Normal Started pod/multus-additional-cni-plugins-4qmk6 Started container bond-cni-plugin openshift-multus 39m Normal Pulling pod/multus-additional-cni-plugins-4qmk6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" openshift-ovn-kubernetes 39m Normal Started pod/ovnkube-node-6jsx2 Started container ovnkube-node openshift-cluster-csi-drivers 39m Normal Started pod/aws-ebs-csi-driver-node-r2n4w Started container csi-node-driver-registrar openshift-ovn-kubernetes 39m Normal Pulled pod/ovnkube-node-6jsx2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 2.028495561s (2.0285042s including waiting) openshift-ovn-kubernetes 39m Normal Started pod/ovnkube-node-6jsx2 Started container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 39m Normal Created pod/ovnkube-node-6jsx2 Created container kube-rbac-proxy-ovn-metrics openshift-multus 39m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container bond-cni-plugin openshift-ovn-kubernetes 39m Normal Pulled pod/ovnkube-node-6jsx2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ovn-kubernetes 39m Normal Created pod/ovnkube-node-6jsx2 Created container ovnkube-node openshift-monitoring 39m Normal Started pod/node-exporter-sn6ks Started container kube-rbac-proxy openshift-monitoring 39m Normal Created pod/node-exporter-sn6ks Created container kube-rbac-proxy openshift-cluster-csi-drivers 39m Normal Created pod/aws-ebs-csi-driver-node-r2n4w Created container csi-node-driver-registrar openshift-ovn-kubernetes 39m Normal Started pod/ovnkube-node-6jsx2 Started container kube-rbac-proxy openshift-ovn-kubernetes 39m Normal Created pod/ovnkube-node-6jsx2 Created container kube-rbac-proxy openshift-cluster-csi-drivers 39m Normal Pulled pod/aws-ebs-csi-driver-node-r2n4w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" in 1.241997215s (1.242023181s including waiting) default 39m Warning ErrorReconcilingNode node/ip-10-0-195-121.ec2.internal [k8s.ovn.org/node-chassis-id annotation not found for node ip-10-0-195-121.ec2.internal, macAddress annotation not found for node "ip-10-0-195-121.ec2.internal" , k8s.ovn.org/l3-gateway-config 
annotation not found for node "ip-10-0-195-121.ec2.internal"] openshift-monitoring 39m Normal Pulled pod/prometheus-operator-admission-webhook-5c9b9d98cc-nznt8 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e2218fd1d860bdb72a28d8fc34e1d5e7c3674bf1d0005583d70800dcd79d2" in 3.09573965s (3.095755761s including waiting) openshift-etcd-operator 39m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "StaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd\" started at 2023-03-21 12:28:50 +0000 UTC is still not ready\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "StaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd-metrics\" is terminated: Error: ,\"caller\":\"zapgrpc/zapgrpc.go:191\",\"msg\":\"[core] grpc: addrConn.createTransport failed to connect to {10.0.197.197:9978 10.0.197.197 0 }. Err: connection error: desc = \\\"transport: Error while dialing dial tcp 10.0.197.197:9978: connect: connection refused\\\"\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:05.043Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:05.043Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000520cf0, TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.241Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.241Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000520cf0, IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.241Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.241Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel picks a new address \\\"10.0.197.197:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.241Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000520cf0, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.246Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.246Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000520cf0, READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.246Z\",\"caller\":\nStaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcdctl\" is terminated: Error: \nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No 
unhealthy members found" openshift-ingress-canary 39m Normal Pulled pod/ingress-canary-zwpz2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" in 3.210602747s (3.210619122s including waiting) openshift-multus 39m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" in 1.084523324s (1.084531509s including waiting) default 39m Warning ErrorReconcilingNode node/ip-10-0-195-121.ec2.internal error creating gateway for node ip-10-0-195-121.ec2.internal: failed to init shared interface gateway: failed to create MAC Binding for dummy nexthop ip-10-0-195-121.ec2.internal: error getting datapath GR_ip-10-0-195-121.ec2.internal: object not found openshift-ingress 39m Normal Pulled pod/router-default-7cf4c94d4-s4mh5 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd" in 3.245521091s (3.245537173s including waiting) openshift-multus 39m Normal Pulling pod/multus-additional-cni-plugins-4qmk6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" openshift-ingress 39m Normal Created pod/router-default-7cf4c94d4-s4mh5 Created container router openshift-monitoring 39m Normal Started pod/prometheus-operator-admission-webhook-5c9b9d98cc-nznt8 Started container prometheus-operator-admission-webhook openshift-ingress-canary 39m Normal Created pod/ingress-canary-zwpz2 Created container serve-healthcheck-canary openshift-monitoring 39m Normal SuccessfulDelete replicaset/prometheus-operator-admission-webhook-5c549f4449 Deleted pod: prometheus-operator-admission-webhook-5c549f4449-v9x8h openshift-monitoring 39m Normal Killing pod/prometheus-operator-admission-webhook-5c549f4449-v9x8h Stopping container prometheus-operator-admission-webhook openshift-ingress 39m Normal Started pod/router-default-7cf4c94d4-s4mh5 Started container router openshift-multus 39m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container routeoverride-cni openshift-multus 39m Normal Started pod/multus-additional-cni-plugins-4qmk6 Started container routeoverride-cni openshift-ingress-canary 39m Normal Started pod/ingress-canary-zwpz2 Started container serve-healthcheck-canary openshift-monitoring 39m Normal Created pod/prometheus-operator-admission-webhook-5c9b9d98cc-nznt8 Created container prometheus-operator-admission-webhook openshift-monitoring 39m Normal ScalingReplicaSet deployment/prometheus-operator-admission-webhook Scaled down replica set prometheus-operator-admission-webhook-5c549f4449 to 0 from 1 openshift-network-diagnostics 39m Warning NetworkNotReady pod/network-check-target-trrh7 network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? openshift-ingress-canary 39m Normal SuccessfulCreate daemonset/ingress-canary Created pod: ingress-canary-xb5f7 openshift-multus 39m Warning NetworkNotReady pod/network-metrics-daemon-qfgm8 network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started? openshift-multus 39m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" in 1.834570243s (1.834582119s including waiting) openshift-multus 39m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container whereabouts-cni-bincopy openshift-image-registry 39m Normal AddedInterface pod/image-registry-55b7d998b9-4mbwh Add eth0 [10.129.2.10/23] from ovn-kubernetes openshift-image-registry 39m Normal Pulled pod/image-registry-55b7d998b9-4mbwh Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" already present on machine openshift-multus 39m Normal Started pod/multus-additional-cni-plugins-4qmk6 Started container whereabouts-cni-bincopy openshift-multus 39m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine default 39m Warning ResolutionFailed namespace/openshift-observability-operator constraints not satisfiable: subscription observability-operator exists, no operators found from catalog observability-operator-catalog in namespace openshift-observability-operator referenced by subscription observability-operator openshift-image-registry 39m Normal Started pod/image-registry-55b7d998b9-4mbwh Started container registry openshift-image-registry 39m Normal Created pod/image-registry-55b7d998b9-4mbwh Created container registry openshift-multus 39m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine openshift-multus 39m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container whereabouts-cni openshift-multus 39m Normal Started pod/multus-additional-cni-plugins-4qmk6 Started container whereabouts-cni openshift-etcd 39m Warning FailedToUpdateEndpoint endpoints/etcd Failed to update endpoint openshift-etcd/etcd: Operation cannot be fulfilled on endpoints "etcd": the object has been modified; please apply your changes to the latest version and try again openshift-monitoring 39m Normal TaintManagerEviction pod/prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m Cancelling deletion of Pod openshift-monitoring/prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m openshift-etcd 39m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 39m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container setup openshift-image-registry 39m Normal TaintManagerEviction pod/image-registry-55b7d998b9-479fl Cancelling deletion of Pod openshift-image-registry/image-registry-55b7d998b9-479fl openshift-multus 39m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container kube-multus-additional-cni-plugins openshift-etcd 39m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container setup openshift-ingress 39m Normal TaintManagerEviction pod/router-default-7cf4c94d4-zs7xj Cancelling deletion of Pod openshift-ingress/router-default-7cf4c94d4-zs7xj openshift-etcd 39m Normal Pulled 
pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 39m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-ensure-env-vars openshift-etcd 39m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-ensure-env-vars openshift-etcd-operator 39m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "GuardControllerDegraded: Missing PodIP in operand etcd-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 39m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "StaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd-metrics\" is terminated: Error: ,\"caller\":\"zapgrpc/zapgrpc.go:191\",\"msg\":\"[core] grpc: addrConn.createTransport failed to connect to {10.0.197.197:9978 10.0.197.197 0 }. Err: connection error: desc = \\\"transport: Error while dialing dial tcp 10.0.197.197:9978: connect: connection refused\\\"\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:05.043Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:05.043Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000520cf0, TRANSIENT_FAILURE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.241Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.241Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000520cf0, IDLE\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.241Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.241Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel picks a new address \\\"10.0.197.197:9978\\\" to connect\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.241Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000520cf0, CONNECTING\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.246Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[core] Subchannel Connectivity change to READY\"}\nStaticPodsDegraded: {\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.246Z\",\"caller\":\"zapgrpc/zapgrpc.go:174\",\"msg\":\"[balancer] base.baseBalancer: handle SubConn state change: 0xc000520cf0, READY\"}\nStaticPodsDegraded: 
{\"level\":\"info\",\"ts\":\"2023-03-21T12:29:16.246Z\",\"caller\":\nStaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcd-readyz\" is terminated: Completed: \nStaticPodsDegraded: pod/etcd-ip-10-0-197-197.ec2.internal container \"etcdctl\" is terminated: Error: \nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd 39m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 39m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-resources-copy openshift-etcd 39m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-resources-copy openshift-etcd 39m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-oauth-apiserver 39m Warning ProbeError pod/apiserver-8ddbf84fd-7qf7p Readiness probe error: Get "https://10.129.0.20:8443/readyz": dial tcp 10.129.0.20:8443: connect: connection refused... openshift-etcd 39m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 39m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcdctl openshift-etcd 39m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcdctl openshift-oauth-apiserver 39m Warning Unhealthy pod/apiserver-8ddbf84fd-7qf7p Readiness probe failed: Get "https://10.129.0.20:8443/readyz": dial tcp 10.129.0.20:8443: connect: connection refused openshift-etcd 39m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 39m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd openshift-kube-controller-manager 39m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container kube-controller-manager-recovery-controller openshift-etcd 39m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-metrics openshift-etcd 39m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 39m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-readyz openshift-kube-controller-manager 39m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container cluster-policy-controller openshift-kube-controller-manager 39m Normal StaticPodInstallerCompleted pod/installer-7-ip-10-0-140-6.ec2.internal Successfully installed revision 7 openshift-console-operator 39m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: 
Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org): Get \"https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org\": x509: certificate signed by unknown authority" to "All is well",Available changed from False to True ("All is well") openshift-kube-controller-manager 39m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container kube-controller-manager openshift-etcd 39m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-metrics openshift-etcd 39m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd openshift-etcd 39m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-readyz openshift-kube-controller-manager 39m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container kube-controller-manager-cert-syncer openshift-kube-controller-manager 39m Warning ProbeError pod/kube-controller-manager-guard-ip-10-0-140-6.ec2.internal Readiness probe error: Get "https://10.0.140.6:10257/healthz": dial tcp 10.0.140.6:10257: connect: connection refused... openshift-kube-scheduler 39m Normal StaticPodInstallerCompleted pod/installer-8-ip-10-0-239-132.ec2.internal Successfully installed revision 8 openshift-kube-scheduler 39m Normal Killing pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Stopping container kube-scheduler openshift-kube-scheduler 39m Normal Killing pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Stopping container kube-scheduler-recovery-controller openshift-kube-scheduler 39m Normal Killing pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Stopping container kube-scheduler-cert-syncer openshift-console 39m Normal Created pod/console-65cc7f8b45-md5n8 Created container console openshift-kube-controller-manager-operator 39m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: 3403 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:22.643780 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:33:31.130414 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:31.130725 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:33:48.664147 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:48.664505 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 
12:33:57.152215 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:57.152858 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:34:14.713647 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:34:14.713979 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:34:23.174190 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:34:23.174463 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:34:40.786447 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:34:40.786728 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-kube-scheduler-operator 39m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: I0321 12:33:43.882678 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: I0321 12:33:43.882914 1 base_controller.go:67] Waiting for caches to sync for CertSyncController\nStaticPodsDegraded: I0321 12:33:43.883265 1 event.go:285] Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-scheduler\", Name:\"openshift-kube-scheduler-ip-10-0-239-132.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Warning' reason: 'FastControllerResync' Controller \"CertSyncController\" resync interval is set to 0s which might lead to client request throttling\nStaticPodsDegraded: I0321 12:33:43.983128 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:33:43.983151 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:33:43.983207 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:33:43.983217 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:33:47.486645 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:33:47.486661 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:34:09.902919 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:34:09.902940 1 certsync_controller.go:170] Syncing secrets: 
[{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:34:13.506057 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:34:13.506076 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:34:35.919707 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:34:35.919728 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:34:39.522827 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:34:39.522843 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-console 39m Normal Started pod/console-65cc7f8b45-md5n8 Started container console openshift-console 39m Normal AddedInterface pod/console-65cc7f8b45-md5n8 Add eth0 [10.128.0.62/23] from ovn-kubernetes openshift-console 39m Normal Pulled pod/console-65cc7f8b45-md5n8 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" already present on machine openshift-kube-scheduler 39m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 39m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container wait-for-host-port openshift-kube-scheduler 39m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container wait-for-host-port openshift-kube-scheduler 39m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 39m Normal SandboxChanged pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Pod sandbox changed, it will be killed and re-created. 
openshift-kube-scheduler 39m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler openshift-kube-scheduler 39m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 39m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler openshift-console 39m Normal Killing pod/console-7db75d8d45-7vkqx Stopping container console openshift-kube-scheduler 39m Warning FastControllerResync pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling default 39m Normal Reboot node/ip-10-0-160-152.ec2.internal Node will reboot into config rendered-worker-c37c7a9e551f049d382df8406f11fe9b default 39m Normal PendingConfig node/ip-10-0-160-152.ec2.internal Written pending config rendered-worker-c37c7a9e551f049d382df8406f11fe9b openshift-kube-scheduler 39m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler-cert-syncer openshift-etcd-operator 39m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "GuardControllerDegraded: Missing PodIP in operand etcd-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" default 39m Normal OSUpdateStaged node/ip-10-0-160-152.ec2.internal Changes to OS staged openshift-kube-scheduler 39m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler-cert-syncer openshift-console 39m Normal SuccessfulDelete replicaset/console-7db75d8d45 Deleted pod: console-7db75d8d45-7vkqx default 39m Normal OSUpdateStarted node/ip-10-0-160-152.ec2.internal openshift-console 39m Normal ScalingReplicaSet deployment/console Scaled down replica set console-7db75d8d45 to 0 from 1 openshift-etcd-operator 39m Warning UnhealthyEtcdMember deployment/etcd-operator unhealthy members: ip-10-0-197-197.ec2.internal openshift-kube-scheduler 39m Normal Created pod/revision-pruner-8-ip-10-0-239-132.ec2.internal Created container pruner openshift-kube-scheduler 39m Normal AddedInterface pod/revision-pruner-8-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.45/23] from ovn-kubernetes openshift-kube-scheduler-operator 39m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/revision-pruner-8-ip-10-0-239-132.ec2.internal -n openshift-kube-scheduler because it was missing openshift-kube-scheduler 39m Normal Pulled pod/revision-pruner-8-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 39m Normal Started pod/revision-pruner-8-ip-10-0-239-132.ec2.internal Started container pruner openshift-console-operator 39m Normal OperatorStatusChanged deployment/console-operator Status for 
clusteroperator/console changed: Progressing changed from True to False ("All is well") openshift-ovn-kubernetes 39m Normal Pulled pod/ovnkube-node-6jsx2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-kube-scheduler-operator 39m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: I0321 12:33:43.882678 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: I0321 12:33:43.882914 1 base_controller.go:67] Waiting for caches to sync for CertSyncController\nStaticPodsDegraded: I0321 12:33:43.883265 1 event.go:285] Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-scheduler\", Name:\"openshift-kube-scheduler-ip-10-0-239-132.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Warning' reason: 'FastControllerResync' Controller \"CertSyncController\" resync interval is set to 0s which might lead to client request throttling\nStaticPodsDegraded: I0321 12:33:43.983128 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:33:43.983151 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:33:43.983207 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:33:43.983217 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:33:47.486645 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:33:47.486661 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:34:09.902919 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:34:09.902940 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:34:13.506057 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:34:13.506076 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:34:35.919707 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:34:35.919728 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:34:39.522827 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:34:39.522843 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-ovn-kubernetes 39m Normal Created pod/ovnkube-node-6jsx2 Created container ovn-controller openshift-ingress-canary 39m Normal AddedInterface pod/ingress-canary-xb5f7 Add eth0 [10.130.2.7/23] from ovn-kubernetes openshift-multus 39m Normal Pulled 
pod/multus-additional-cni-plugins-x8r6f Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" in 24.378550656s (24.378557678s including waiting) openshift-monitoring 39m Normal AddedInterface pod/sre-dns-latency-exporter-v8kzl Add eth0 [10.130.2.4/23] from ovn-kubernetes openshift-ingress-canary 39m Normal Pulling pod/ingress-canary-xb5f7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" openshift-ingress 39m Normal Pulling pod/router-default-7cf4c94d4-zs7xj Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd" openshift-ingress 39m Normal AddedInterface pod/router-default-7cf4c94d4-zs7xj Add eth0 [10.130.2.10/23] from ovn-kubernetes openshift-image-registry 39m Normal Pulled pod/image-registry-55b7d998b9-479fl Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" already present on machine openshift-multus 39m Normal AddedInterface pod/network-metrics-daemon-qfgm8 Add eth0 [10.130.2.5/23] from ovn-kubernetes openshift-cluster-csi-drivers 39m Normal Created pod/aws-ebs-csi-driver-node-r2n4w Created container csi-liveness-probe openshift-network-diagnostics 39m Normal Pulling pod/network-check-target-trrh7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" openshift-multus 39m Normal Pulling pod/network-metrics-daemon-qfgm8 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" openshift-network-diagnostics 39m Normal AddedInterface pod/network-check-target-trrh7 Add eth0 [10.130.2.6/23] from ovn-kubernetes openshift-cluster-csi-drivers 39m Normal Started pod/aws-ebs-csi-driver-node-r2n4w Started container csi-liveness-probe openshift-monitoring 39m Normal AddedInterface pod/prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m Add eth0 [10.130.2.9/23] from ovn-kubernetes openshift-monitoring 39m Normal Pulling pod/prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e2218fd1d860bdb72a28d8fc34e1d5e7c3674bf1d0005583d70800dcd79d2" openshift-ovn-kubernetes 39m Normal Started pod/ovnkube-node-6jsx2 Started container ovn-controller openshift-image-registry 39m Normal AddedInterface pod/image-registry-55b7d998b9-479fl Add eth0 [10.130.2.8/23] from ovn-kubernetes openshift-image-registry 39m Normal Started pod/image-registry-55b7d998b9-479fl Started container registry openshift-multus 39m Normal Started pod/multus-additional-cni-plugins-x8r6f Started container cni-plugins openshift-multus 39m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container cni-plugins openshift-image-registry 39m Normal Created pod/image-registry-55b7d998b9-479fl Created container registry openshift-image-registry 39m Normal ScalingReplicaSet deployment/image-registry Scaled down replica set image-registry-5588bdd7b4 to 0 from 1 openshift-image-registry 39m Normal Killing pod/image-registry-5588bdd7b4-m28sx Stopping container registry openshift-image-registry 39m Normal SuccessfulDelete replicaset/image-registry-5588bdd7b4 Deleted pod: image-registry-5588bdd7b4-m28sx openshift-multus 39m Normal Pulling 
pod/multus-additional-cni-plugins-x8r6f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" openshift-etcd-operator 39m Warning EtcdLeaderChangeMetrics deployment/etcd-operator Detected leader change increase of 3.3333333333333335 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-140-6.ec2.internal=0.004250,etcd-ip-10-0-197-197.ec2.internal=0.012098,etcd-ip-10-0-239-132.ec2.internal=0.003911. Most often this is as a result of inadequate storage or sometimes due to networking issues. openshift-kube-controller-manager 39m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 39m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 39m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager-cert-syncer openshift-kube-controller-manager 39m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-controller-manager 39m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container cluster-policy-controller openshift-kube-controller-manager 39m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager-cert-syncer openshift-kube-controller-manager 39m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager openshift-kube-controller-manager 39m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" already present on machine openshift-kube-controller-manager 39m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container cluster-policy-controller openshift-kube-controller-manager 39m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 39m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager openshift-kube-controller-manager 39m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager-recovery-controller openshift-kube-controller-manager 39m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager-recovery-controller openshift-kube-apiserver-operator 39m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:27:14 +0000 UTC is still not 
ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:27:15 +0000 UTC is still not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: rue} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: I0321 12:31:38.468166 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: I0321 12:31:38.470781 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: I0321 12:31:39.866374 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: I0321 12:31:39.866787 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " openshift-etcd 39m Warning ProbeError pod/etcd-ip-10-0-197-197.ec2.internal Startup probe error: Get "https://10.0.197.197:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)... 
openshift-etcd 39m Warning Unhealthy pod/etcd-ip-10-0-197-197.ec2.internal Startup probe failed: Get "https://10.0.197.197:9980/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) default 39m Normal Reboot node/ip-10-0-239-132.ec2.internal Node will reboot into config rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 default 39m Normal PendingConfig node/ip-10-0-239-132.ec2.internal Written pending config rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 default 39m Normal OSUpdateStaged node/ip-10-0-239-132.ec2.internal Changes to OS staged default 39m Normal OSUpdateStarted node/ip-10-0-239-132.ec2.internal openshift-multus 39m Normal Pulled pod/network-metrics-daemon-qfgm8 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" in 7.777458569s (7.777465039s including waiting) openshift-ingress-canary 39m Normal Pulled pod/ingress-canary-xb5f7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" in 7.782347387s (7.782355222s including waiting) openshift-monitoring 39m Normal Pulled pod/prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e2218fd1d860bdb72a28d8fc34e1d5e7c3674bf1d0005583d70800dcd79d2" in 7.781283519s (7.781290707s including waiting) openshift-kube-apiserver 39m Warning Unhealthy pod/kube-apiserver-ip-10-0-140-6.ec2.internal Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-network-diagnostics 39m Normal Pulled pod/network-check-target-trrh7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" in 8.048392357s (8.048403259s including waiting) openshift-network-diagnostics 39m Normal Created pod/network-check-target-trrh7 Created container network-check-target-container openshift-ingress-canary 39m Normal Created pod/ingress-canary-xb5f7 Created container serve-healthcheck-canary openshift-ingress 39m Normal Created pod/router-default-7cf4c94d4-zs7xj Created container router openshift-monitoring 39m Normal Created pod/prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m Created container prometheus-operator-admission-webhook openshift-ingress 39m Normal Started pod/router-default-7cf4c94d4-zs7xj Started container router openshift-multus 39m Normal Pulled pod/multus-additional-cni-plugins-x8r6f Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" in 6.297778023s (6.297789833s including waiting) openshift-kube-apiserver 39m Warning Unhealthy pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-kube-apiserver 39m Warning ProbeError pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal Readiness probe error: HTTP probe failed with statuscode: 500... openshift-kube-apiserver 39m Warning ProbeError pod/kube-apiserver-ip-10-0-140-6.ec2.internal Readiness probe error: HTTP probe failed with statuscode: 500... 
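The Reboot / PendingConfig / OSUpdateStaged events on node/ip-10-0-239-132.ec2.internal above are the machine-config rollout moving that master to rendered-master-d273453f5fe4894c22cd393f5c0dbfa3. To see how far the pools and individual nodes have progressed (a sketch; both commands assume cluster-admin credentials):

  $ oc get machineconfigpools
  $ oc describe node ip-10-0-239-132.ec2.internal | grep machineconfiguration.openshift.io
    # shows the currentConfig / desiredConfig / state annotations for that node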
openshift-monitoring 38m Normal Started pod/prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m Started container prometheus-operator-admission-webhook openshift-multus 38m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container bond-cni-plugin openshift-multus 38m Normal Created pod/network-metrics-daemon-qfgm8 Created container network-metrics-daemon openshift-apiserver 38m Warning Unhealthy pod/apiserver-7475f65d84-whqlh Liveness probe failed: Get "https://10.130.0.50:8443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) openshift-ingress-canary 38m Normal Started pod/ingress-canary-xb5f7 Started container serve-healthcheck-canary openshift-apiserver 38m Warning ProbeError pod/apiserver-7475f65d84-whqlh Liveness probe error: Get "https://10.130.0.50:8443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)... openshift-multus 38m Normal Started pod/multus-additional-cni-plugins-x8r6f Started container bond-cni-plugin openshift-apiserver 38m Warning Unhealthy pod/apiserver-7475f65d84-whqlh Readiness probe failed: Get "https://10.130.0.50:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) openshift-multus 38m Normal Pulling pod/multus-additional-cni-plugins-x8r6f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" openshift-kube-apiserver 38m Warning ProbeError pod/kube-apiserver-ip-10-0-140-6.ec2.internal Liveness probe error: HTTP probe failed with statuscode: 500... openshift-apiserver 38m Warning ProbeError pod/apiserver-7475f65d84-whqlh Readiness probe error: Get "https://10.130.0.50:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)... 
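Several ProbeError messages above end in "...", i.e. the rest of the message is cut off in this tabular dump. To read one in full, ask for the raw Event objects instead (namespace and reason taken from the events above):

  $ oc get events -n openshift-kube-apiserver --field-selector reason=ProbeError -o yaml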
openshift-kube-apiserver 38m Warning Unhealthy pod/kube-apiserver-ip-10-0-140-6.ec2.internal Liveness probe failed: HTTP probe failed with statuscode: 500 openshift-multus 38m Normal Pulled pod/multus-additional-cni-plugins-x8r6f Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" in 938.101008ms (938.114022ms including waiting) openshift-multus 38m Normal Started pod/multus-additional-cni-plugins-x8r6f Started container routeoverride-cni openshift-multus 38m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container routeoverride-cni openshift-oauth-apiserver 38m Warning Unhealthy pod/apiserver-74455c7c5-h9ck5 Readiness probe failed: Get "https://10.128.0.60:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) openshift-ingress 38m Normal SuccessfulDelete replicaset/router-default-699d8c97f Deleted pod: router-default-699d8c97f-mlkcv openshift-oauth-apiserver 38m Warning Unhealthy pod/apiserver-74455c7c5-rpzl9 Readiness probe failed: Get "https://10.130.0.67:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) openshift-ingress 38m Warning FailedToUpdateEndpoint endpoints/router-default Failed to update endpoint openshift-ingress/router-default: Operation cannot be fulfilled on endpoints "router-default": the object has been modified; please apply your changes to the latest version and try again openshift-multus 38m Normal Pulling pod/multus-additional-cni-plugins-x8r6f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" openshift-ingress 38m Normal Killing pod/router-default-699d8c97f-mlkcv Stopping container router openshift-monitoring 38m Normal SuccessfulCreate replicaset/prometheus-operator-7f64545d8 Created pod: prometheus-operator-7f64545d8-cxj25 openshift-kube-apiserver-operator 38m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-11-ip-10-0-239-132.ec2.internal -n openshift-kube-apiserver because it was missing openshift-apiserver-operator 38m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"") openshift-multus 38m Normal Pulling pod/network-metrics-daemon-lbxjr Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" openshift-apiserver 38m Warning Unhealthy pod/apiserver-5f568869f-mpswm Readiness probe failed: Get "https://10.128.0.57:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) openshift-ingress 38m Warning FailedToUpdateEndpoint endpoints/router-internal-default Failed to update endpoint openshift-ingress/router-internal-default: Operation cannot be fulfilled on endpoints 
"router-internal-default": the object has been modified; please apply your changes to the latest version and try again openshift-monitoring 38m Normal ScalingReplicaSet deployment/prometheus-operator Scaled up replica set prometheus-operator-7f64545d8 to 1 openshift-authentication-operator 38m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"" openshift-oauth-apiserver 38m Warning ProbeError pod/apiserver-74455c7c5-h9ck5 Readiness probe error: Get "https://10.128.0.60:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)... openshift-kube-controller-manager-operator 38m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: 3403 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:22.643780 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:33:31.130414 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:31.130725 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:33:48.664147 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:48.664505 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:33:57.152215 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:33:57.152858 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:34:14.713647 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:34:14.713979 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:34:23.174190 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:34:23.174463 1 certsync_controller.go:170] Syncing secrets: 
[{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:34:40.786447 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:34:40.786728 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-ingress 38m Normal ScalingReplicaSet deployment/router-default Scaled down replica set router-default-699d8c97f to 0 from 1 openshift-authentication-operator 38m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available changed from True to False ("APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"") openshift-multus 38m Normal AddedInterface pod/network-metrics-daemon-lbxjr Add eth0 [10.129.2.5/23] from ovn-kubernetes openshift-oauth-apiserver 38m Warning ProbeError pod/apiserver-74455c7c5-rpzl9 Readiness probe error: Get "https://10.130.0.67:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)... openshift-apiserver 38m Warning ProbeError pod/apiserver-5f568869f-mpswm Readiness probe error: Get "https://10.128.0.57:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers)... openshift-authentication-operator 38m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()" to "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-74455c7c5-h9ck5 pod)",Available changed from True to False ("APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.") openshift-kube-controller-manager 38m Warning ProbeError pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Startup probe error: Get "https://10.0.140.6:10357/healthz": dial tcp 10.0.140.6:10357: connect: connection refused... 
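The OperatorStatusChanged stream above (openshift-apiserver and authentication flapping Available between True and False) is easier to reason about as current state than as individual transitions. The usual check is:

  $ oc get clusteroperators
  $ oc get clusteroperator authentication -o yaml   # full Available/Progressing/Degraded conditions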
openshift-authentication-operator 38m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-74455c7c5-h9ck5 pod)" to "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-74455c7c5-h9ck5 pod)\nOAuthClientsControllerDegraded: unable to get \"openshift-browser-client\" bootstrapped OAuth client: etcdserver: request timed out, possibly due to connection lost" openshift-authentication-operator 38m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-74455c7c5-h9ck5 pod)\nOAuthClientsControllerDegraded: unable to get \"openshift-browser-client\" bootstrapped OAuth client: etcdserver: request timed out, possibly due to connection lost" to "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-74455c7c5-h9ck5 pod)" openshift-multus 38m Normal Pulled pod/network-metrics-daemon-lbxjr Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" in 1.272274699s (1.27228788s including waiting) openshift-apiserver-operator 38m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" openshift-kube-controller-manager 38m Warning Unhealthy pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Startup probe failed: Get "https://10.0.140.6:10357/healthz": dial tcp 10.0.140.6:10357: connect: connection refused openshift-multus 38m Normal Pulled pod/multus-additional-cni-plugins-x8r6f Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" in 1.776635911s (1.776647886s including waiting) openshift-multus 38m Normal Started pod/multus-additional-cni-plugins-x8r6f Started container whereabouts-cni-bincopy openshift-multus 38m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container whereabouts-cni-bincopy openshift-multus 38m Normal Pulled pod/multus-additional-cni-plugins-x8r6f Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine openshift-multus 38m Normal Started pod/multus-additional-cni-plugins-x8r6f Started container whereabouts-cni openshift-apiserver-operator 38m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") openshift-multus 38m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container whereabouts-cni openshift-ingress 38m Warning FailedMount pod/router-default-699d8c97f-mlkcv MountVolume.SetUp failed for volume "default-certificate" : secret "router-certs-default" not found default 38m Normal NodeAllocatableEnforced node/ip-10-0-160-152.ec2.internal Updated Node Allocatable limit across pods openshift-multus 38m Normal Pulled pod/multus-additional-cni-plugins-x8r6f Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine default 38m Normal Starting node/ip-10-0-160-152.ec2.internal Starting kubelet. default 38m Normal NodeNotReady node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal status is now: NodeNotReady default 38m Warning Rebooted node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal has been rebooted, boot id: 0cf2e85d-d04b-472c-975f-90bba89dd45c default 38m Normal NodeHasSufficientMemory node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal status is now: NodeHasSufficientMemory default 38m Normal NodeHasSufficientPID node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal status is now: NodeHasSufficientPID default 38m Normal NodeHasNoDiskPressure node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal status is now: NodeHasNoDiskPressure default 38m Normal NodeNotSchedulable node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal status is now: NodeNotSchedulable openshift-multus 38m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container kube-multus-additional-cni-plugins openshift-kube-apiserver 38m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-authentication-operator 38m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 3 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-74455c7c5-h9ck5 pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()",Available changed from False to True ("All is well") openshift-console-operator 38m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Degraded message changed from "All is well" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org): Get \"https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org\": x509: certificate signed by unknown authority",Available changed from True to False ("RouteHealthAvailable: failed to GET route 
(https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org): Get \"https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org\": x509: certificate signed by unknown authority") openshift-etcd-operator 38m Normal NodeCurrentRevisionChanged deployment/etcd-operator Updated node "ip-10-0-197-197.ec2.internal" from revision 6 to 7 because static pod is ready openshift-kube-apiserver 38m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container setup openshift-kube-apiserver 38m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container setup openshift-etcd-operator 38m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" openshift-kube-apiserver 38m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 38m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 38m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver openshift-kube-apiserver 38m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver openshift-kube-apiserver 38m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 38m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-regeneration-controller openshift-machine-api 38m Normal DetectedUnhealthy machine/qeaisrhods-c13-28wr5-worker-us-east-1a-tfwzm Machine openshift-machine-api/srep-worker-healthcheck/qeaisrhods-c13-28wr5-worker-us-east-1a-tfwzm/ip-10-0-160-152.ec2.internal has unhealthy node ip-10-0-160-152.ec2.internal openshift-kube-apiserver 38m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-check-endpoints openshift-kube-apiserver 38m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 38m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-syncer openshift-kube-apiserver 38m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-check-endpoints openshift-kube-apiserver 38m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already 
present on machine openshift-kube-apiserver 38m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-insecure-readyz openshift-network-diagnostics 38m Normal Pulling pod/network-check-target-v468t Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" openshift-kube-apiserver 38m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-regeneration-controller openshift-network-diagnostics 38m Normal AddedInterface pod/network-check-target-v468t Add eth0 [10.129.2.6/23] from ovn-kubernetes openshift-kube-apiserver 38m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-insecure-readyz openshift-kube-apiserver 38m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-syncer openshift-kube-apiserver-operator 38m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: rue} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: I0321 12:31:38.468166 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: I0321 12:31:38.470781 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: I0321 12:31:39.866374 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: I0321 12:31:39.866787 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} 
{user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " to "NodeControllerDegraded: All master nodes are ready" openshift-kube-apiserver 38m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver 38m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-network-diagnostics 38m Normal Pulled pod/network-check-target-v468t Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" in 2.773243281s (2.773251559s including waiting) openshift-etcd-operator 38m Warning UnhealthyEtcdMember deployment/etcd-operator unhealthy members: ip-10-0-197-197.ec2.internal,ip-10-0-239-132.ec2.internal openshift-kube-controller-manager 38m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-140-6.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope default 38m Normal Uncordon node/ip-10-0-195-121.ec2.internal Update completed for config rendered-worker-65a660c5b4cafef14c5770efedbee76c and node has been uncordoned openshift-console-operator 38m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org): Get \"https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org\": x509: certificate signed by unknown authority" to "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org): Get \"https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)",Available message changed from "RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org): Get \"https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org\": x509: certificate signed by unknown authority" to "RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org): Get \"https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" default 38m Normal NodeDone node/ip-10-0-195-121.ec2.internal Setting node ip-10-0-195-121.ec2.internal, currentConfig rendered-worker-65a660c5b4cafef14c5770efedbee76c to Done default 38m Normal ConfigDriftMonitorStarted node/ip-10-0-195-121.ec2.internal Config Drift Monitor started, watching against rendered-worker-65a660c5b4cafef14c5770efedbee76c 
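The console RouteHealthDegraded x509 error above shows up while the default routers are being replaced (see the router-default scale-down and the missing "router-certs-default" mount earlier) and clears shortly below ("All is well"). If it had not cleared, the first things to check would be (a sketch, resource names taken from the events above):

  $ oc get secret router-certs-default -n openshift-ingress
  $ oc get pods -n openshift-ingress -o wide
  $ oc get clusteroperator console ingress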
openshift-kube-controller-manager-operator 38m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-140-6.ec2.internal" from revision 6 to 7 because static pod is ready openshift-kube-controller-manager-operator 38m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 6; 1 nodes are at revision 7" to "NodeInstallerProgressing: 1 nodes are at revision 6; 2 nodes are at revision 7",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 6; 1 nodes are at revision 7" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7" openshift-image-registry 38m Warning Unhealthy pod/image-registry-5588bdd7b4-m28sx Readiness probe failed: Get "https://10.128.2.3:5000/healthz": dial tcp 10.128.2.3:5000: connect: connection refused openshift-image-registry 38m Warning ProbeError pod/image-registry-5588bdd7b4-m28sx Readiness probe error: Get "https://10.128.2.3:5000/healthz": dial tcp 10.128.2.3:5000: connect: connection refused... openshift-console-operator 38m Normal OperatorStatusChanged deployment/console-operator Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: failed to GET route (https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org): Get \"https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.devshift.org\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" to "All is well",Available changed from False to True ("All is well") default 38m Normal NodeReady node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal status is now: NodeReady openshift-etcd-operator 38m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" openshift-etcd-operator 38m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" default 38m Normal NodeHasNoDiskPressure node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal status is now: NodeHasNoDiskPressure default 38m Normal NodeHasSufficientPID node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal status is now: NodeHasSufficientPID default 38m Normal NodeAllocatableEnforced node/ip-10-0-239-132.ec2.internal Updated Node Allocatable limit across pods default 38m Warning Rebooted node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal has been rebooted, boot id: 83c49b17-ab6c-4858-89b4-d5f1b029ad91 default 38m Normal NodeHasSufficientMemory node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal 
status is now: NodeHasSufficientMemory default 38m Normal Starting node/ip-10-0-239-132.ec2.internal Starting kubelet. default 38m Normal NodeNotSchedulable node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal status is now: NodeNotSchedulable default 38m Normal NodeNotReady node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal status is now: NodeNotReady default 38m Normal NodeReady node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal status is now: NodeReady openshift-etcd-operator 38m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-239-132.ec2.internal\" not ready since 2023-03-21 12:35:26 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" openshift-kube-controller-manager-operator 38m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-239-132.ec2.internal\" not ready since 2023-03-21 12:35:26 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])" openshift-kube-controller-manager-operator 38m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-239-132.ec2.internal\" not ready since 2023-03-21 12:35:26 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])" to "NodeControllerDegraded: All master nodes are ready" openshift-etcd-operator 38m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-239-132.ec2.internal\" not ready since 2023-03-21 12:35:26 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" openshift-kube-scheduler-operator 38m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node 
\"ip-10-0-239-132.ec2.internal\" not ready since 2023-03-21 12:35:26 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])" to "NodeControllerDegraded: All master nodes are ready" openshift-kube-apiserver-operator 38m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-239-132.ec2.internal\" not ready since 2023-03-21 12:35:26 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])" to "NodeControllerDegraded: All master nodes are ready" openshift-kube-scheduler-operator 38m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-239-132.ec2.internal\" not ready since 2023-03-21 12:35:26 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])" openshift-kube-apiserver-operator 38m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-239-132.ec2.internal\" not ready since 2023-03-21 12:35:26 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])" default 38m Warning ResolutionFailed namespace/openshift-managed-node-metadata-operator constraints not satisfiable: subscription managed-node-metadata-operator exists, no operators found from catalog managed-node-metadata-operator-registry in namespace openshift-managed-node-metadata-operator referenced by subscription managed-node-metadata-operator openshift-etcd-operator 38m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:836.269µs Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.133208ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.239.132:2379]: context deadline exceeded}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling 
members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" openshift-ovn-kubernetes 38m Normal Pulling pod/ovnkube-node-8sb9g Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-dns 38m Normal Pulling pod/node-resolver-f7qjl Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" openshift-multus 38m Normal Pulling pod/multus-additional-cni-plugins-j5mgq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" openshift-multus 38m Normal Pulling pod/multus-d7w6w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" openshift-cluster-node-tuning-operator 38m Normal Pulling pod/tuned-t8kzn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" openshift-monitoring 38m Normal Pulling pod/node-exporter-g4hdx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" openshift-cluster-csi-drivers 38m Normal Pulling pod/aws-ebs-csi-driver-node-2p86w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" openshift-image-registry 38m Normal Pulling pod/node-ca-tvq4f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-console 38m Warning FailedCreatePodSandBox pod/console-65cc7f8b45-drq2q Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-65cc7f8b45-drq2q_openshift-console_d9bd1fe9-c3d2-4085-b6fd-f3de62490217_0(5601bfe48e07d2b4cc2e07174c3d893903ddcbe85c74f9b7453527b7a16dc463): error adding pod openshift-console_console-65cc7f8b45-drq2q to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-console/console-65cc7f8b45-drq2q/d9bd1fe9-c3d2-4085-b6fd-f3de62490217]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-console/pods/console-65cc7f8b45-drq2q?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-machine-config-operator 38m Normal Pulling pod/machine-config-daemon-w98lz Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" default 38m Warning ResolutionFailed namespace/openshift-addon-operator constraints not satisfiable: subscription addon-operator exists, no operators found from catalog addon-operator-catalog in namespace openshift-addon-operator referenced by subscription addon-operator openshift-monitoring 38m Normal Pulled pod/node-exporter-g4hdx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" in 2.002607426s (2.00261386s including waiting) openshift-kube-controller-manager-operator 38m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to 
"StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" openshift-monitoring 38m Normal Started pod/node-exporter-g4hdx Started container init-textfile openshift-kube-controller-manager-operator 38m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-197-197.ec2.internal" from revision 6 to 7 because node ip-10-0-197-197.ec2.internal with revision 6 is the oldest openshift-monitoring 38m Normal Created pod/node-exporter-g4hdx Created container init-textfile openshift-authentication-operator 38m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available changed from True to False ("WellKnownAvailable: The well-known endpoint is not yet available: failed to GET kube-apiserver oauth endpoint https://10.0.239.132:6443/.well-known/oauth-authorization-server: dial tcp 10.0.239.132:6443: i/o timeout") openshift-authentication-operator 38m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: failed to GET kube-apiserver oauth endpoint https://10.0.239.132:6443/.well-known/oauth-authorization-server: dial tcp 10.0.239.132:6443: i/o timeout" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2" openshift-authentication-operator 38m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: failed to GET kube-apiserver oauth endpoint https://10.0.239.132:6443/.well-known/oauth-authorization-server: dial tcp 10.0.239.132:6443: i/o timeout" openshift-authentication-operator 38m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: failed to GET kube-apiserver oauth endpoint https://10.0.239.132:6443/.well-known/oauth-authorization-server: dial tcp 10.0.239.132:6443: i/o timeout" to "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 2" openshift-monitoring 38m Normal Pulled pod/node-exporter-g4hdx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present 
on machine openshift-kube-controller-manager-operator 38m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-7-ip-10-0-197-197.ec2.internal -n openshift-kube-controller-manager because it was missing default 38m Warning ResolutionFailed namespace/openshift-deployment-validation-operator constraints not satisfiable: subscription deployment-validation-operator exists, no operators found from catalog deployment-validation-operator-catalog in namespace openshift-deployment-validation-operator referenced by subscription deployment-validation-operator openshift-dns 38m Normal Pulled pod/node-resolver-f7qjl Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" in 13.045449219s (13.045462747s including waiting) openshift-image-registry 38m Normal Pulled pod/node-ca-tvq4f Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 12.518408087s (12.518421462s including waiting) openshift-monitoring 38m Normal Pulling pod/node-exporter-g4hdx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-monitoring 38m Normal Started pod/node-exporter-g4hdx Started container node-exporter openshift-multus 38m Normal Pulled pod/multus-d7w6w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" in 13.866571719s (13.866576783s including waiting) openshift-monitoring 38m Normal Created pod/node-exporter-g4hdx Created container node-exporter openshift-cluster-csi-drivers 38m Normal Pulled pod/aws-ebs-csi-driver-node-2p86w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" in 13.85217529s (13.852182623s including waiting) openshift-cluster-node-tuning-operator 38m Normal Pulled pod/tuned-t8kzn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" in 13.854302914s (13.854313779s including waiting) openshift-etcd-operator 38m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:836.269µs Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.133208ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.239.132:2379]: context deadline exceeded}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are 
ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:2.057054ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:912.211µs Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.239.132:2379]: context deadline exceeded}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" openshift-ingress 38m Warning ProbeError pod/router-default-699d8c97f-mlkcv Readiness probe error: HTTP probe failed with statuscode: 500... default 38m Warning ResolutionFailed namespace/openshift-cloud-ingress-operator constraints not satisfiable: subscription cloud-ingress-operator exists, no operators found from catalog cloud-ingress-operator-registry in namespace openshift-cloud-ingress-operator referenced by subscription cloud-ingress-operator openshift-machine-config-operator 38m Normal Pulled pod/machine-config-daemon-w98lz Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" in 16.761086955s (16.761092986s including waiting) openshift-multus 38m Normal Started pod/multus-d7w6w Started container kube-multus openshift-dns 38m Normal Created pod/node-resolver-f7qjl Created container dns-node-resolver openshift-cluster-csi-drivers 38m Normal Pulling pod/aws-ebs-csi-driver-node-2p86w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" openshift-image-registry 38m Normal Started pod/node-ca-tvq4f Started container node-ca openshift-cluster-node-tuning-operator 38m Normal Started pod/tuned-t8kzn Started container tuned openshift-machine-config-operator 38m Normal Pulling pod/machine-config-daemon-w98lz Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" openshift-multus 38m Normal Created pod/multus-d7w6w Created container kube-multus openshift-cluster-csi-drivers 38m Normal Started pod/aws-ebs-csi-driver-node-2p86w Started container csi-driver openshift-image-registry 38m Normal Created pod/node-ca-tvq4f Created container node-ca openshift-cluster-node-tuning-operator 38m Normal Created pod/tuned-t8kzn Created container tuned openshift-multus 38m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" in 16.765930902s (16.765939184s including waiting) openshift-multus 38m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container egress-router-binary-copy openshift-multus 38m Normal Started pod/multus-additional-cni-plugins-j5mgq Started container egress-router-binary-copy openshift-machine-config-operator 38m Normal Started pod/machine-config-daemon-w98lz Started container 
machine-config-daemon openshift-machine-config-operator 38m Normal Created pod/machine-config-daemon-w98lz Created container machine-config-daemon openshift-ovn-kubernetes 38m Normal Pulled pod/ovnkube-node-8sb9g Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 16.885666387s (16.885673166s including waiting) openshift-cluster-csi-drivers 38m Normal Created pod/aws-ebs-csi-driver-node-2p86w Created container csi-driver openshift-dns 38m Normal Started pod/node-resolver-f7qjl Started container dns-node-resolver openshift-ovn-kubernetes 38m Normal Created pod/ovnkube-node-8sb9g Created container kube-rbac-proxy openshift-cluster-csi-drivers 38m Normal Pulling pod/aws-ebs-csi-driver-node-ts9mc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" openshift-multus 38m Normal Pulling pod/multus-additional-cni-plugins-j5mgq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" openshift-monitoring 38m Normal Started pod/node-exporter-g4hdx Started container kube-rbac-proxy openshift-cluster-node-tuning-operator 38m Warning Failed pod/tuned-pbkvf Error: ErrImagePull openshift-monitoring 38m Normal Created pod/node-exporter-g4hdx Created container kube-rbac-proxy openshift-image-registry 38m Warning Failed pod/node-ca-bcbwn Error: ErrImagePull openshift-image-registry 38m Warning Failed pod/node-ca-bcbwn Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7": pull QPS exceeded openshift-monitoring 38m Normal Pulled pod/node-exporter-g4hdx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 3.869547676s (3.869559936s including waiting) openshift-ovn-kubernetes 38m Normal Pulling pod/ovnkube-node-wsrzb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-etcd 38m Warning Failed pod/etcd-ip-10-0-239-132.ec2.internal Error: ErrImagePull openshift-multus 38m Normal Pulling pod/multus-kkqdt Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" openshift-etcd 38m Warning Failed pod/etcd-ip-10-0-239-132.ec2.internal Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b": pull QPS exceeded openshift-machine-config-operator 38m Normal Pulled pod/machine-config-daemon-w98lz Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" in 1.379608259s (1.379622069s including waiting) openshift-ovn-kubernetes 38m Normal Pulled pod/ovnkube-node-8sb9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-machine-config-operator 38m Normal Pulling pod/machine-config-server-8rhkb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" openshift-monitoring 38m Normal Pulling pod/node-exporter-jhj5d Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" openshift-machine-config-operator 38m Normal Created pod/machine-config-daemon-w98lz Created container oauth-proxy openshift-machine-config-operator 38m Normal Started pod/machine-config-daemon-w98lz Started container oauth-proxy openshift-cluster-node-tuning-operator 38m Warning Failed pod/tuned-pbkvf Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106": pull QPS exceeded openshift-machine-config-operator 38m Normal Pulling pod/machine-config-daemon-zlzm2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" openshift-kube-controller-manager 38m Normal Pulling pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" openshift-cluster-csi-drivers 38m Normal Pulled pod/aws-ebs-csi-driver-node-2p86w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" in 933.601336ms (933.615897ms including waiting) openshift-kube-scheduler 38m Normal Pulling pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" openshift-ovn-kubernetes 38m Normal Pulled pod/ovnkube-node-8sb9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ovn-kubernetes 38m Normal Created pod/ovnkube-node-8sb9g Created container ovn-acl-logging openshift-ovn-kubernetes 38m Normal Pulling pod/ovnkube-master-l7mb9 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-ovn-kubernetes 38m Normal Started pod/ovnkube-node-8sb9g Started container ovn-acl-logging openshift-ovn-kubernetes 38m Normal Pulled pod/ovnkube-node-8sb9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-multus 38m Warning Failed pod/multus-additional-cni-plugins-g7hvw Error: ErrImagePull openshift-multus 38m Warning Failed pod/multus-additional-cni-plugins-g7hvw Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633": pull QPS exceeded openshift-dns 38m Normal Pulling pod/node-resolver-dqg6k Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" openshift-kube-apiserver 38m Normal Pulling pod/kube-apiserver-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" openshift-ovn-kubernetes 38m Normal Started pod/ovnkube-node-8sb9g Started container kube-rbac-proxy openshift-image-registry 38m Warning Failed pod/node-ca-bcbwn Error: ImagePullBackOff openshift-ovn-kubernetes 38m Normal Created pod/ovnkube-node-8sb9g Created container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 38m Normal Started pod/ovnkube-node-8sb9g 
Started container kube-rbac-proxy-ovn-metrics openshift-cluster-node-tuning-operator 38m Warning Failed pod/tuned-pbkvf Error: ImagePullBackOff openshift-cluster-csi-drivers 38m Normal Pulling pod/aws-ebs-csi-driver-node-2p86w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" openshift-cluster-csi-drivers 38m Normal Started pod/aws-ebs-csi-driver-node-2p86w Started container csi-node-driver-registrar openshift-cluster-node-tuning-operator 38m Normal BackOff pod/tuned-pbkvf Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" openshift-ovn-kubernetes 38m Normal Pulled pod/ovnkube-node-8sb9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-cluster-csi-drivers 38m Normal Created pod/aws-ebs-csi-driver-node-2p86w Created container csi-node-driver-registrar openshift-ovn-kubernetes 38m Normal Created pod/ovnkube-node-8sb9g Created container ovnkube-node openshift-ovn-kubernetes 38m Normal Started pod/ovnkube-node-8sb9g Started container ovnkube-node openshift-image-registry 38m Normal BackOff pod/node-ca-bcbwn Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-multus 38m Normal BackOff pod/multus-additional-cni-plugins-g7hvw Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" openshift-monitoring 38m Normal Pulled pod/node-exporter-jhj5d Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" in 2.291909312s (2.291919734s including waiting) openshift-multus 38m Warning Failed pod/multus-additional-cni-plugins-g7hvw Error: ImagePullBackOff openshift-cluster-csi-drivers 38m Normal Pulled pod/aws-ebs-csi-driver-node-2p86w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" in 1.51839964s (1.518416516s including waiting) default 38m Warning ResolutionFailed namespace/openshift-velero constraints not satisfiable: subscription managed-velero-operator exists, no operators found from catalog managed-velero-operator-registry in namespace openshift-velero referenced by subscription managed-velero-operator openshift-monitoring 38m Normal Started pod/node-exporter-jhj5d Started container init-textfile openshift-etcd 38m Normal BackOff pod/etcd-ip-10-0-239-132.ec2.internal Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" openshift-etcd 38m Warning Failed pod/etcd-ip-10-0-239-132.ec2.internal Error: ImagePullBackOff openshift-monitoring 38m Normal Created pod/node-exporter-jhj5d Created container init-textfile openshift-monitoring 38m Normal Pulled pod/node-exporter-jhj5d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine openshift-ingress 38m Warning Unhealthy pod/router-default-699d8c97f-mlkcv Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-ingress 38m Warning ProbeError pod/router-default-699d8c97f-mlkcv 
Readiness probe error: HTTP probe failed with statuscode: 500... openshift-etcd-operator 38m Warning EtcdLeaderChangeMetrics deployment/etcd-operator Detected leader change increase of 3.3333333333333335 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-140-6.ec2.internal=0.003684,etcd-ip-10-0-197-197.ec2.internal=0.009863,etcd-ip-10-0-239-132.ec2.internal=0.003908. Most often this is as a result of inadequate storage or sometimes due to networking issues. openshift-cluster-node-tuning-operator 38m Normal Pulling pod/tuned-pbkvf Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" openshift-etcd-operator 38m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:2.057054ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:912.211µs Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.239.132:2379]: context deadline exceeded}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:3.055786ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.642671ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.239.132:2379]: context deadline exceeded}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" openshift-monitoring 38m Normal AddedInterface pod/sre-dns-latency-exporter-hm6bk Add eth0 [10.129.2.4/23] from ovn-kubernetes openshift-monitoring 38m Normal Pulling pod/node-exporter-jhj5d Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-monitoring 38m Normal Created pod/node-exporter-jhj5d Created container node-exporter openshift-monitoring 38m Normal Started pod/node-exporter-jhj5d Started container node-exporter openshift-image-registry 38m Normal Pulling pod/node-ca-bcbwn Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-etcd 38m Normal Pulling pod/etcd-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" openshift-multus 38m Normal Pulling pod/multus-additional-cni-plugins-g7hvw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" default 38m Warning ErrorReconcilingNode node/ip-10-0-239-132.ec2.internal error creating gateway for node ip-10-0-239-132.ec2.internal: failed to init shared interface gateway: failed to sync stale SNATs on node ip-10-0-239-132.ec2.internal: unable to fetch podIPs for pod openshift-kube-apiserver/revision-pruner-11-ip-10-0-239-132.ec2.internal openshift-ovn-kubernetes 38m Normal LeaderElection lease/ovn-kubernetes-master ip-10-0-197-197.ec2.internal became leader default 38m Warning ErrorReconcilingNode node/ip-10-0-187-75.ec2.internal error creating gateway for node ip-10-0-187-75.ec2.internal: failed to init shared interface gateway: failed to sync stale SNATs on node ip-10-0-187-75.ec2.internal: unable to fetch podIPs for pod openshift-monitoring/prometheus-operator-7f64545d8-cxj25 default 38m Warning ErrorReconcilingNode node/ip-10-0-140-6.ec2.internal error creating gateway for node ip-10-0-140-6.ec2.internal: failed to init shared interface gateway: failed to sync stale SNATs on node ip-10-0-140-6.ec2.internal: unable to fetch podIPs for pod openshift-marketplace/certified-operators-dwz78 openshift-monitoring 38m Normal AddedInterface pod/prometheus-operator-7f64545d8-cxj25 Add eth0 [10.129.2.3/23] from ovn-kubernetes default 38m Warning ErrorReconcilingNode node/ip-10-0-197-197.ec2.internal error creating gateway for node ip-10-0-197-197.ec2.internal: failed to init shared interface gateway: failed to sync stale SNATs on node ip-10-0-197-197.ec2.internal: unable to fetch podIPs for pod openshift-kube-controller-manager/installer-7-ip-10-0-197-197.ec2.internal openshift-kube-controller-manager 37m Normal Pulled pod/installer-7-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-console 37m Normal AddedInterface pod/console-65cc7f8b45-drq2q Add eth0 [10.130.0.35/23] from ovn-kubernetes openshift-marketplace 37m Normal Started pod/certified-operators-dwz78 Started container registry-server openshift-kube-controller-manager 37m Normal AddedInterface pod/installer-7-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.20/23] from ovn-kubernetes openshift-kube-apiserver-operator 37m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 9; 0 nodes have achieved new revision 11" to "NodeInstallerProgressing: 2 nodes are at revision 9; 1 nodes are at revision 11",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 9; 0 nodes have achieved new revision 11" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 9; 1 nodes are at revision 11" openshift-kube-controller-manager 37m Normal Created pod/installer-7-ip-10-0-197-197.ec2.internal Created container installer openshift-marketplace 37m Normal Created 
pod/certified-operators-dwz78 Created container registry-server openshift-monitoring 37m Normal Pulling pod/prometheus-operator-7f64545d8-cxj25 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0c9dc9888697e244d61cd89f8fe5a61dcb09dc100889be738db21b2fc5bbf7" openshift-kube-apiserver-operator 37m Normal NodeCurrentRevisionChanged deployment/kube-apiserver-operator Updated node "ip-10-0-197-197.ec2.internal" from revision 9 to 11 because static pod is ready openshift-console 37m Normal Started pod/console-65cc7f8b45-drq2q Started container console openshift-marketplace 37m Normal Pulling pod/certified-operators-dwz78 Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.12" openshift-kube-controller-manager 37m Normal Started pod/installer-7-ip-10-0-197-197.ec2.internal Started container installer openshift-marketplace 37m Normal Pulled pod/certified-operators-dwz78 Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.12" in 199.913777ms (199.925397ms including waiting) openshift-console 37m Normal Pulled pod/console-65cc7f8b45-drq2q Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" already present on machine openshift-console 37m Normal Created pod/console-65cc7f8b45-drq2q Created container console openshift-marketplace 37m Normal AddedInterface pod/certified-operators-dwz78 Add eth0 [10.128.0.8/23] from ovn-kubernetes openshift-monitoring 37m Normal Created pod/prometheus-operator-7f64545d8-cxj25 Created container kube-rbac-proxy openshift-dns 37m Normal Pulled pod/node-resolver-dqg6k Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" in 19.566979889s (19.56699208s including waiting) openshift-machine-config-operator 37m Normal Pulled pod/machine-config-server-8rhkb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" in 19.581644788s (19.581651228s including waiting) openshift-monitoring 37m Normal Started pod/prometheus-operator-7f64545d8-cxj25 Started container prometheus-operator openshift-cluster-csi-drivers 37m Normal Pulled pod/aws-ebs-csi-driver-node-ts9mc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" in 19.569534957s (19.569540074s including waiting) openshift-monitoring 37m Normal Created pod/prometheus-operator-7f64545d8-cxj25 Created container prometheus-operator openshift-machine-config-operator 37m Normal Pulled pod/machine-config-daemon-zlzm2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" in 19.581609012s (19.581615009s including waiting) openshift-monitoring 37m Normal Started pod/prometheus-operator-7f64545d8-cxj25 Started container kube-rbac-proxy openshift-monitoring 37m Normal Pulled pod/prometheus-operator-7f64545d8-cxj25 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0c9dc9888697e244d61cd89f8fe5a61dcb09dc100889be738db21b2fc5bbf7" in 1.326193767s (1.326203322s including waiting) openshift-monitoring 37m Normal Pulled pod/prometheus-operator-7f64545d8-cxj25 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already 
present on machine openshift-monitoring 37m Normal Killing pod/prometheus-operator-f4cf7fb47-bhql4 Stopping container prometheus-operator openshift-monitoring 37m Normal ScalingReplicaSet deployment/prometheus-operator Scaled down replica set prometheus-operator-f4cf7fb47 to 0 from 1 openshift-monitoring 37m Normal SuccessfulDelete replicaset/prometheus-operator-f4cf7fb47 Deleted pod: prometheus-operator-f4cf7fb47-bhql4 openshift-monitoring 37m Normal ServiceAccountCreated deployment/cluster-monitoring-operator Created ServiceAccount/prometheus-operator -n openshift-user-workload-monitoring because it was missing openshift-monitoring 37m Normal ServiceAccountCreated deployment/cluster-monitoring-operator Created ServiceAccount/prometheus-user-workload -n openshift-user-workload-monitoring because it was missing openshift-monitoring 37m Normal SuccessfulCreate replicaset/openshift-state-metrics-8757cbbb4 Created pod: openshift-state-metrics-8757cbbb4-whgf4 openshift-monitoring 37m Normal SuccessfulCreate replicaset/kube-state-metrics-7d7b86bb68 Created pod: kube-state-metrics-7d7b86bb68-l675w openshift-monitoring 37m Normal ScalingReplicaSet deployment/openshift-state-metrics Scaled up replica set openshift-state-metrics-8757cbbb4 to 1 openshift-monitoring 37m Normal ScalingReplicaSet deployment/kube-state-metrics Scaled up replica set kube-state-metrics-7d7b86bb68 to 1 openshift-user-workload-monitoring 37m Normal ScalingReplicaSet deployment/prometheus-operator Scaled up replica set prometheus-operator-6cbc5c4f45 to 1 openshift-monitoring 37m Normal Pulling pod/openshift-state-metrics-8757cbbb4-whgf4 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:907363827442bc34c33be580ea3ac30198ca65f46a95eb80b2c5255e24d173f3" openshift-monitoring 37m Normal Pulling pod/telemeter-client-5c9599c744-827bg Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:942a1ba76f95d02ba681afbb7d1aea28d457fb2a9d967cacc2233bb243588990" openshift-monitoring 37m Normal Pulling pod/kube-state-metrics-7d7b86bb68-l675w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48772f8b25db5f426c168026f3e89252389ea1c6bf3e508f670bffb24ee6e8e7" openshift-monitoring 37m Normal ScalingReplicaSet deployment/telemeter-client Scaled up replica set telemeter-client-5c9599c744 to 1 openshift-monitoring 37m Normal Started pod/openshift-state-metrics-8757cbbb4-whgf4 Started container kube-rbac-proxy-self openshift-monitoring 37m Normal AddedInterface pod/openshift-state-metrics-8757cbbb4-whgf4 Add eth0 [10.129.2.11/23] from ovn-kubernetes openshift-monitoring 37m Normal Pulled pod/openshift-state-metrics-8757cbbb4-whgf4 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-user-workload-monitoring 37m Normal SuccessfulCreate replicaset/prometheus-operator-6cbc5c4f45 Created pod: prometheus-operator-6cbc5c4f45-dt4j5 openshift-monitoring 37m Normal SuccessfulCreate replicaset/telemeter-client-5c9599c744 Created pod: telemeter-client-5c9599c744-827bg openshift-monitoring 37m Normal Pulled pod/openshift-state-metrics-8757cbbb4-whgf4 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-user-workload-monitoring 37m Warning FailedMount pod/prometheus-operator-6cbc5c4f45-dt4j5 MountVolume.SetUp failed for volume "prometheus-operator-user-workload-tls" : secret 
"prometheus-operator-user-workload-tls" not found openshift-monitoring 37m Normal Created pod/openshift-state-metrics-8757cbbb4-whgf4 Created container kube-rbac-proxy-main openshift-monitoring 37m Normal AddedInterface pod/telemeter-client-5c9599c744-827bg Add eth0 [10.130.2.11/23] from ovn-kubernetes openshift-monitoring 37m Normal AddedInterface pod/kube-state-metrics-7d7b86bb68-l675w Add eth0 [10.130.2.3/23] from ovn-kubernetes openshift-monitoring 37m Normal Created pod/openshift-state-metrics-8757cbbb4-whgf4 Created container kube-rbac-proxy-self openshift-monitoring 37m Normal Started pod/openshift-state-metrics-8757cbbb4-whgf4 Started container kube-rbac-proxy-main openshift-monitoring 37m Normal Killing pod/prometheus-operator-f4cf7fb47-bhql4 Stopping container kube-rbac-proxy openshift-monitoring 37m Normal Killing pod/alertmanager-main-0 Stopping container kube-rbac-proxy-metric openshift-monitoring 37m Normal Killing pod/alertmanager-main-0 Stopping container alertmanager-proxy openshift-monitoring 37m Normal Killing pod/alertmanager-main-0 Stopping container prom-label-proxy openshift-monitoring 37m Normal ServiceAccountCreated deployment/cluster-monitoring-operator Created ServiceAccount/thanos-ruler -n openshift-user-workload-monitoring because it was missing openshift-monitoring 37m Normal Killing pod/alertmanager-main-0 Stopping container config-reloader openshift-monitoring 37m Normal Killing pod/alertmanager-main-0 Stopping container kube-rbac-proxy openshift-monitoring 37m Normal Killing pod/alertmanager-main-0 Stopping container alertmanager openshift-monitoring 37m Normal Pulled pod/openshift-state-metrics-8757cbbb4-whgf4 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:907363827442bc34c33be580ea3ac30198ca65f46a95eb80b2c5255e24d173f3" in 1.053951504s (1.053967073s including waiting) openshift-monitoring 37m Warning FailedToUpdateEndpoint endpoints/alertmanager-operated Failed to update endpoint openshift-monitoring/alertmanager-operated: Operation cannot be fulfilled on endpoints "alertmanager-operated": the object has been modified; please apply your changes to the latest version and try again openshift-user-workload-monitoring 37m Normal NoPods poddisruptionbudget/prometheus-user-workload No matching pods found openshift-monitoring 37m Normal ScalingReplicaSet deployment/thanos-querier Scaled up replica set thanos-querier-6566ccfdd9 to 1 openshift-monitoring 37m Normal ScalingReplicaSet deployment/prometheus-adapter Scaled down replica set prometheus-adapter-5b77f96bd4 to 1 from 2 openshift-monitoring 37m Normal ScalingReplicaSet deployment/prometheus-adapter Scaled up replica set prometheus-adapter-8467ff79fd to 1 openshift-monitoring 37m Normal SuccessfulCreate replicaset/prometheus-adapter-8467ff79fd Created pod: prometheus-adapter-8467ff79fd-rl8p7 openshift-multus 37m Normal Pulled pod/multus-kkqdt Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" in 24.427520498s (24.427529602s including waiting) openshift-monitoring 37m Normal SuccessfulDelete replicaset/prometheus-adapter-5b77f96bd4 Deleted pod: prometheus-adapter-5b77f96bd4-7lwwj openshift-monitoring 37m Normal ScalingReplicaSet deployment/prometheus-adapter Scaled up replica set prometheus-adapter-8467ff79fd to 2 from 1 openshift-monitoring 37m Normal SuccessfulDelete replicaset/thanos-querier-7bbf5b5dcd Deleted pod: thanos-querier-7bbf5b5dcd-fvmbq openshift-monitoring 37m Normal 
Pulling pod/thanos-querier-6566ccfdd9-7cwhk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" openshift-monitoring 37m Normal Started pod/telemeter-client-5c9599c744-827bg Started container telemeter-client openshift-monitoring 37m Normal SuccessfulCreate replicaset/prometheus-adapter-8467ff79fd Created pod: prometheus-adapter-8467ff79fd-szs4l openshift-monitoring 37m Normal Pulling pod/telemeter-client-5c9599c744-827bg Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" openshift-monitoring 37m Normal Pulling pod/thanos-querier-6566ccfdd9-jmz7s Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" openshift-monitoring 37m Normal SuccessfulCreate replicaset/thanos-querier-6566ccfdd9 Created pod: thanos-querier-6566ccfdd9-jmz7s openshift-monitoring 37m Normal ScalingReplicaSet deployment/thanos-querier Scaled down replica set thanos-querier-7bbf5b5dcd to 1 from 2 openshift-monitoring 37m Normal SuccessfulCreate replicaset/thanos-querier-6566ccfdd9 Created pod: thanos-querier-6566ccfdd9-7cwhk openshift-monitoring 37m Normal ScalingReplicaSet deployment/thanos-querier Scaled up replica set thanos-querier-6566ccfdd9 to 2 from 1 openshift-monitoring 37m Normal AddedInterface pod/thanos-querier-6566ccfdd9-7cwhk Add eth0 [10.130.2.12/23] from ovn-kubernetes openshift-monitoring 37m Normal Started pod/kube-state-metrics-7d7b86bb68-l675w Started container kube-state-metrics openshift-monitoring 37m Normal Created pod/kube-state-metrics-7d7b86bb68-l675w Created container kube-state-metrics openshift-monitoring 37m Normal Pulled pod/kube-state-metrics-7d7b86bb68-l675w Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Pulled pod/kube-state-metrics-7d7b86bb68-l675w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48772f8b25db5f426c168026f3e89252389ea1c6bf3e508f670bffb24ee6e8e7" in 1.78701085s (1.787018208s including waiting) openshift-monitoring 37m Normal AddedInterface pod/thanos-querier-6566ccfdd9-jmz7s Add eth0 [10.129.2.12/23] from ovn-kubernetes openshift-monitoring 37m Normal SuccessfulDelete replicaset/openshift-state-metrics-66f87c88bd Deleted pod: openshift-state-metrics-66f87c88bd-jg7dn openshift-monitoring 37m Normal ScalingReplicaSet deployment/openshift-state-metrics Scaled down replica set openshift-state-metrics-66f87c88bd to 0 from 1 openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-operator-6cbc5c4f45-dt4j5 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-user-workload-monitoring 37m Normal Started pod/prometheus-operator-6cbc5c4f45-dt4j5 Started container prometheus-operator openshift-monitoring 37m Normal Started pod/openshift-state-metrics-8757cbbb4-whgf4 Started container openshift-state-metrics openshift-monitoring 37m Normal Created pod/openshift-state-metrics-8757cbbb4-whgf4 Created container openshift-state-metrics openshift-user-workload-monitoring 37m Normal AddedInterface pod/prometheus-operator-6cbc5c4f45-dt4j5 Add eth0 [10.128.0.9/23] from ovn-kubernetes openshift-monitoring 37m Normal Killing 
pod/openshift-state-metrics-66f87c88bd-jg7dn Stopping container openshift-state-metrics openshift-monitoring 37m Normal Killing pod/openshift-state-metrics-66f87c88bd-jg7dn Stopping container kube-rbac-proxy-main openshift-multus 37m Warning FailedCreatePodSandBox pod/network-metrics-daemon-74bvc Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-74bvc_openshift-multus_a0c0c384-0c4d-4ab1-a06d-f9f72de6a93d_0(9b051581c1ac4bf46dfaf9846e2b819503fe071091a58108a6c2be4ed165af42): error adding pod openshift-multus_network-metrics-daemon-74bvc to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-multus/network-metrics-daemon-74bvc/a0c0c384-0c4d-4ab1-a06d-f9f72de6a93d]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-multus/pods/network-metrics-daemon-74bvc?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-user-workload-monitoring 37m Normal Created pod/prometheus-operator-6cbc5c4f45-dt4j5 Created container prometheus-operator openshift-monitoring 37m Normal Pulled pod/telemeter-client-5c9599c744-827bg Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:942a1ba76f95d02ba681afbb7d1aea28d457fb2a9d967cacc2233bb243588990" in 1.66866032s (1.66866748s including waiting) openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-operator-6cbc5c4f45-dt4j5 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0c9dc9888697e244d61cd89f8fe5a61dcb09dc100889be738db21b2fc5bbf7" already present on machine openshift-monitoring 37m Normal Created pod/telemeter-client-5c9599c744-827bg Created container telemeter-client openshift-monitoring 37m Normal Killing pod/openshift-state-metrics-66f87c88bd-jg7dn Stopping container kube-rbac-proxy-self openshift-ingress-canary 37m Warning FailedCreatePodSandBox pod/ingress-canary-bn5dn Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-bn5dn_openshift-ingress-canary_5d493380-4833-46ed-9f90-54d19f456f6e_0(b782a98e036258bdb6fd255ffe01482eb78fa9379f62f3a92f48dc8f521c3833): error adding pod openshift-ingress-canary_ingress-canary-bn5dn to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-ingress-canary/ingress-canary-bn5dn/5d493380-4833-46ed-9f90-54d19f456f6e]: error waiting for pod: Get "https://[api-int.qeaisrhods-c13.abmw.s1.devshift.org]:6443/api/v1/namespaces/openshift-ingress-canary/pods/ingress-canary-bn5dn?timeout=1m0s": dial tcp 10.0.209.0:6443: connect: connection refused openshift-user-workload-monitoring 37m Normal Started pod/prometheus-operator-6cbc5c4f45-dt4j5 Started container kube-rbac-proxy openshift-monitoring 37m Normal SuccessfulCreate statefulset/alertmanager-main create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful openshift-monitoring 37m Normal Pulled pod/telemeter-client-5c9599c744-827bg Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" in 915.11312ms (915.154361ms including waiting) openshift-user-workload-monitoring 37m Normal SuccessfulCreate statefulset/prometheus-user-workload create Pod prometheus-user-workload-1 in StatefulSet prometheus-user-workload successful openshift-user-workload-monitoring 37m Normal 
SuccessfulCreate statefulset/prometheus-user-workload create Pod prometheus-user-workload-0 in StatefulSet prometheus-user-workload successful openshift-monitoring 37m Normal Created pod/telemeter-client-5c9599c744-827bg Created container reload openshift-monitoring 37m Normal Started pod/kube-state-metrics-7d7b86bb68-l675w Started container kube-rbac-proxy-self openshift-user-workload-monitoring 37m Normal AddedInterface pod/prometheus-user-workload-0 Add eth0 [10.128.2.3/23] from ovn-kubernetes openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 37m Normal Started pod/telemeter-client-5c9599c744-827bg Started container reload openshift-monitoring 37m Normal SuccessfulCreate statefulset/alertmanager-main create Pod alertmanager-main-1 in StatefulSet alertmanager-main successful openshift-monitoring 37m Normal Pulled pod/telemeter-client-5c9599c744-827bg Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal SuccessfulCreate statefulset/alertmanager-main create Claim alertmanager-data-alertmanager-main-1 Pod alertmanager-main-1 in StatefulSet alertmanager-main success openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-0 Created container init-config-reloader openshift-ovn-kubernetes 37m Normal Pulled pod/ovnkube-node-8sb9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-0 Started container init-config-reloader openshift-monitoring 37m Normal Killing pod/kube-state-metrics-55f6dbfb8b-phfp9 Stopping container kube-state-metrics openshift-monitoring 37m Normal SuccessfulCreate statefulset/alertmanager-main create Claim alertmanager-data-alertmanager-main-0 Pod alertmanager-main-0 in StatefulSet alertmanager-main success openshift-monitoring 37m Normal ScalingReplicaSet deployment/kube-state-metrics Scaled down replica set kube-state-metrics-55f6dbfb8b to 0 from 1 openshift-monitoring 37m Normal NoPods poddisruptionbudget/alertmanager-main No matching pods found openshift-monitoring 37m Normal Killing pod/kube-state-metrics-55f6dbfb8b-phfp9 Stopping container kube-rbac-proxy-self openshift-monitoring 37m Normal Killing pod/kube-state-metrics-55f6dbfb8b-phfp9 Stopping container kube-rbac-proxy-main openshift-monitoring 37m Normal SuccessfulDelete replicaset/kube-state-metrics-55f6dbfb8b Deleted pod: kube-state-metrics-55f6dbfb8b-phfp9 openshift-kube-controller-manager 37m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" in 24.761399171s (24.761410882s including waiting) openshift-monitoring 37m Normal WaitForFirstConsumer persistentvolumeclaim/alertmanager-data-alertmanager-main-0 waiting for first consumer to be created before binding openshift-monitoring 37m Normal Provisioning persistentvolumeclaim/alertmanager-data-alertmanager-main-0 External provisioner is provisioning volume for claim "openshift-monitoring/alertmanager-data-alertmanager-main-0" 
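The PersistentVolumeClaim events above (WaitForFirstConsumer, Provisioning, ExternalProvisioning for alertmanager-data-alertmanager-main-0) follow the delayed-binding flow: the claim stays Pending until a pod that mounts it is scheduled, and only then does the external provisioner (ebs.csi.aws.com here) create and bind the volume. A minimal sketch of how that transition can be observed with the Python kubernetes client follows; the namespace and claim name are taken from the events, while everything else (kubeconfig loading, polling interval) is an illustrative assumption rather than part of the captured output.

    # Sketch only: watch a delayed-binding PVC until the external provisioner binds it.
    # Namespace/claim come from the events above; the rest is assumed for illustration.
    import time
    from kubernetes import client, config

    config.load_kube_config()          # assumes a local kubeconfig with cluster access
    core = client.CoreV1Api()

    ns, claim = "openshift-monitoring", "alertmanager-data-alertmanager-main-0"

    for _ in range(60):                # poll for up to ~5 minutes
        pvc = core.read_namespaced_persistent_volume_claim(claim, ns)
        phase = pvc.status.phase       # "Pending" until a consuming pod is scheduled
        print(phase)
        if phase == "Bound":
            break
        time.sleep(5)

The same Pending-then-Bound sequence should line up with the WaitForFirstConsumer and ExternalProvisioning events recorded for the prometheus-data claims later in this listing.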
openshift-monitoring 37m Normal AddedInterface pod/prometheus-adapter-8467ff79fd-rl8p7 Add eth0 [10.129.2.13/23] from ovn-kubernetes openshift-monitoring 37m Normal Pulling pod/prometheus-adapter-8467ff79fd-rl8p7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbc27b4ea8b6ed06d8490b60e95b36bda21f09f15ec3f25f901c8dffc32292d9" openshift-monitoring 37m Normal ExternalProvisioning persistentvolumeclaim/alertmanager-data-alertmanager-main-0 waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator openshift-monitoring 37m Normal Created pod/kube-state-metrics-7d7b86bb68-l675w Created container kube-rbac-proxy-self openshift-monitoring 37m Normal WaitForFirstConsumer persistentvolumeclaim/alertmanager-data-alertmanager-main-1 waiting for first consumer to be created before binding openshift-monitoring 37m Normal ExternalProvisioning persistentvolumeclaim/alertmanager-data-alertmanager-main-1 waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator openshift-monitoring 37m Normal Provisioning persistentvolumeclaim/alertmanager-data-alertmanager-main-1 External provisioner is provisioning volume for claim "openshift-monitoring/alertmanager-data-alertmanager-main-1" openshift-monitoring 37m Normal Pulling pod/prometheus-adapter-8467ff79fd-szs4l Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbc27b4ea8b6ed06d8490b60e95b36bda21f09f15ec3f25f901c8dffc32292d9" openshift-monitoring 37m Normal AddedInterface pod/prometheus-adapter-8467ff79fd-szs4l Add eth0 [10.130.2.13/23] from ovn-kubernetes openshift-monitoring 37m Normal Pulled pod/kube-state-metrics-7d7b86bb68-l675w Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Started pod/kube-state-metrics-7d7b86bb68-l675w Started container kube-rbac-proxy-main openshift-user-workload-monitoring 37m Normal Created pod/prometheus-operator-6cbc5c4f45-dt4j5 Created container kube-rbac-proxy openshift-monitoring 37m Normal Created pod/kube-state-metrics-7d7b86bb68-l675w Created container kube-rbac-proxy-main openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-0 Created container kube-rbac-proxy-federate openshift-monitoring 37m Normal Killing pod/prometheus-k8s-0 Stopping container thanos-sidecar openshift-authentication-operator 37m Normal ObserveIdentityProviders deployment/authentication-operator identity providers changed to [map["challenge":%!q(bool=true) "login":%!q(bool=true) "mappingMethod":"claim" "name":"htpasswd-cluster-admin" "provider":map["apiVersion":"osin.config.openshift.io/v1" "file":"/var/config/user/idp/0/secret/v4-0-config-user-idp-0-file-data/htpasswd" "kind":"HTPasswdPasswordIdentityProvider"]]] openshift-monitoring 37m Normal Killing pod/prometheus-k8s-0 Stopping container config-reloader openshift-authentication-operator 37m Normal SecretCreated deployment/authentication-operator Created Secret/v4-0-config-user-idp-0-file-data -n openshift-authentication because it was missing openshift-monitoring 37m Normal Killing pod/telemeter-client-5bd4dfdf7c-2982f Stopping container telemeter-client openshift-network-diagnostics 37m Normal Pulling pod/network-check-target-w7m4g Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" openshift-network-diagnostics 37m Normal AddedInterface pod/network-check-target-w7m4g Add eth0 [10.131.0.5/23] from ovn-kubernetes openshift-monitoring 37m Normal Killing pod/prometheus-k8s-0 Stopping container prometheus-proxy openshift-monitoring 37m Normal Killing pod/prometheus-k8s-0 Stopping container kube-rbac-proxy-thanos openshift-monitoring 37m Normal AddedInterface pod/sre-dns-latency-exporter-4j7vx Add eth0 [10.131.0.20/23] from ovn-kubernetes openshift-monitoring 37m Normal Pulling pod/sre-dns-latency-exporter-4j7vx Pulling image "quay.io/app-sre/managed-prometheus-exporter-base:latest" openshift-monitoring 37m Normal Killing pod/prometheus-k8s-0 Stopping container kube-rbac-proxy openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" already present on machine openshift-monitoring 37m Normal Killing pod/prometheus-k8s-0 Stopping container prometheus openshift-monitoring 37m Normal ScalingReplicaSet deployment/telemeter-client Scaled down replica set telemeter-client-5bd4dfdf7c to 0 from 1 openshift-monitoring 37m Normal Killing pod/telemeter-client-5bd4dfdf7c-2982f Stopping container reload openshift-monitoring 37m Normal Created pod/telemeter-client-5c9599c744-827bg Created container kube-rbac-proxy openshift-monitoring 37m Normal Started pod/telemeter-client-5c9599c744-827bg Started container kube-rbac-proxy openshift-ovn-kubernetes 37m Normal Started pod/ovnkube-node-8sb9g Started container ovn-controller openshift-ovn-kubernetes 37m Normal Created pod/ovnkube-node-8sb9g Created container ovn-controller openshift-cluster-csi-drivers 37m Normal Created pod/aws-ebs-csi-driver-node-2p86w Created container csi-liveness-probe openshift-multus 37m Normal Started pod/multus-additional-cni-plugins-j5mgq Started container cni-plugins openshift-kube-apiserver 37m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" in 26.468069041s (26.468084425s including waiting) openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-0 Created container config-reloader openshift-cluster-csi-drivers 37m Normal Started pod/aws-ebs-csi-driver-node-2p86w Started container csi-liveness-probe openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-0 Started container prometheus openshift-multus 37m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" in 25.611656492s (25.611669833s including waiting) openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-0 Created container prometheus openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-0 Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" already present on machine openshift-dns 37m Normal AddedInterface pod/dns-default-jf2vx Add eth0 [10.131.0.9/23] from ovn-kubernetes openshift-dns 37m Normal Pulling pod/dns-default-jf2vx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-0 Started container thanos-sidecar openshift-multus 37m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container cni-plugins openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-0 Created container thanos-sidecar openshift-multus 37m Normal Pulling pod/multus-additional-cni-plugins-j5mgq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-0 Started container config-reloader openshift-monitoring 37m Normal SuccessfulDelete replicaset/telemeter-client-5bd4dfdf7c Deleted pod: telemeter-client-5bd4dfdf7c-2982f openshift-monitoring 37m Normal Started pod/thanos-querier-6566ccfdd9-jmz7s Started container kube-rbac-proxy openshift-machine-config-operator 37m Normal Started pod/machine-config-server-8rhkb Started container machine-config-server openshift-image-registry 37m Normal Pulled pod/node-ca-bcbwn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 12.267143203s (12.267154222s including waiting) openshift-monitoring 37m Normal NoPods poddisruptionbudget/prometheus-k8s No matching pods found openshift-kube-scheduler 37m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" in 26.706550267s (26.706560504s including waiting) openshift-monitoring 37m Normal Pulling pod/thanos-querier-6566ccfdd9-7cwhk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" openshift-monitoring 37m Normal Started pod/prometheus-adapter-8467ff79fd-rl8p7 Started container prometheus-adapter openshift-multus 37m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container bond-cni-plugin openshift-monitoring 37m Normal Started pod/thanos-querier-6566ccfdd9-7cwhk Started container kube-rbac-proxy openshift-monitoring 37m Normal Created pod/prometheus-adapter-8467ff79fd-rl8p7 Created container prometheus-adapter openshift-monitoring 37m Normal Pulling pod/thanos-querier-6566ccfdd9-jmz7s Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" openshift-multus 37m Normal Started pod/multus-additional-cni-plugins-j5mgq Started container bond-cni-plugin openshift-monitoring 37m Normal Created pod/thanos-querier-6566ccfdd9-7cwhk Created container kube-rbac-proxy openshift-monitoring 37m Normal Pulled 
pod/thanos-querier-6566ccfdd9-7cwhk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Started pod/thanos-querier-6566ccfdd9-7cwhk Started container oauth-proxy openshift-authentication-operator 37m Normal ObservedConfigChanged deployment/authentication-operator Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]any{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []any{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]any{\n\u00a0\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.qeaisrhods-c13.abmw.s1.de\"...),\n+\u00a0\t\t\"identityProviders\": []any{\n+\u00a0\t\t\tmap[string]any{\n+\u00a0\t\t\t\t\"challenge\": bool(true),\n+\u00a0\t\t\t\t\"login\": bool(true),\n+\u00a0\t\t\t\t\"mappingMethod\": string(\"claim\"),\n+\u00a0\t\t\t\t\"name\": string(\"htpasswd-cluster-admin\"),\n+\u00a0\t\t\t\t\"provider\": map[string]any{\n+\u00a0\t\t\t\t\t\"apiVersion\": string(\"osin.config.openshift.io/v1\"),\n+\u00a0\t\t\t\t\t\"file\": string(\"/var/config/user/idp/0/secret/v4\"...),\n+\u00a0\t\t\t\t\t\"kind\": string(\"HTPasswdPasswordIdentityProvider\"),\n+\u00a0\t\t\t\t},\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.qeaisrhods-c13.abmw.s1.devshift.org:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]any{\"error\": string(\"/var/config/user/template/secret/v4-0-config-user-template-error\"...), \"login\": string(\"/var/config/user/template/secret/v4-0-config-user-template-login\"...), \"providerSelection\": string(\"/var/config/user/template/secret/v4-0-config-user-template-provi\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]any{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]any{\"audit-log-format\": []any{string(\"json\")}, \"audit-log-maxbackup\": []any{string(\"10\")}, \"audit-log-maxsize\": []any{string(\"100\")}, \"audit-log-path\": []any{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]any{\"cipherSuites\": []any{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []any{map[string]any{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []any{string(\"*.apps.qeaisrhods-c13.abmw.s1.devshift.org\")}}}},\n-\u00a0\t\"volumesToMount\": map[string]any{\"identityProviders\": string(\"{}\")},\n+\u00a0\t\"volumesToMount\": map[string]any{\n+\u00a0\t\t\"identityProviders\": string(`{\"v4-0-config-user-idp-0-file-data\":{\"name\":\"htpasswd-secret\",\"mountPath\":\"/var/config/user/idp/0/secret/v4-0-config-user-idp-0-file-data\",\"key\":\"htpasswd\",\"type\":\"secret\"}}`),\n+\u00a0\t},\n\u00a0\u00a0}\n" openshift-monitoring 37m Normal Pulled pod/prometheus-adapter-8467ff79fd-rl8p7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbc27b4ea8b6ed06d8490b60e95b36bda21f09f15ec3f25f901c8dffc32292d9" in 1.843946108s (1.8439544s including waiting) 
openshift-monitoring 37m Normal Created pod/thanos-querier-6566ccfdd9-7cwhk Created container oauth-proxy openshift-monitoring 37m Normal Pulled pod/node-exporter-jhj5d Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 13.931615605s (13.931626186s including waiting) openshift-monitoring 37m Normal Created pod/node-exporter-jhj5d Created container kube-rbac-proxy openshift-monitoring 37m Normal Pulled pod/thanos-querier-6566ccfdd9-7cwhk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 37m Normal Started pod/thanos-querier-6566ccfdd9-7cwhk Started container thanos-query openshift-machine-config-operator 37m Normal Created pod/machine-config-server-8rhkb Created container machine-config-server openshift-monitoring 37m Normal Created pod/thanos-querier-6566ccfdd9-7cwhk Created container thanos-query openshift-monitoring 37m Normal Pulled pod/prometheus-adapter-8467ff79fd-szs4l Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbc27b4ea8b6ed06d8490b60e95b36bda21f09f15ec3f25f901c8dffc32292d9" in 2.153940432s (2.153948936s including waiting) openshift-monitoring 37m Normal Pulled pod/thanos-querier-6566ccfdd9-7cwhk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" in 2.537181924s (2.537192109s including waiting) openshift-cluster-node-tuning-operator 37m Normal Created pod/tuned-pbkvf Created container tuned openshift-multus 37m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" in 865.465495ms (865.472039ms including waiting) openshift-cluster-node-tuning-operator 37m Normal Pulled pod/tuned-pbkvf Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" in 15.28941178s (15.289419933s including waiting) openshift-ovn-kubernetes 37m Normal Pulled pod/ovnkube-node-wsrzb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 37m Normal Pulled pod/ovnkube-node-wsrzb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 26.94936588s (26.949373878s including waiting) openshift-machine-config-operator 37m Normal Started pod/machine-config-daemon-zlzm2 Started container machine-config-daemon openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-0 Started container kube-rbac-proxy-thanos openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-0 Created container kube-rbac-proxy-thanos openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Started pod/node-exporter-jhj5d Started container kube-rbac-proxy openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-0 Started container 
kube-rbac-proxy-metrics openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-0 Created container kube-rbac-proxy-metrics openshift-machine-config-operator 37m Normal Pulling pod/machine-config-daemon-zlzm2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" openshift-multus 37m Normal Pulled pod/multus-additional-cni-plugins-g7hvw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" in 10.26927683s (10.269283565s including waiting) openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Created pod/prometheus-adapter-8467ff79fd-szs4l Created container prometheus-adapter openshift-monitoring 37m Normal Started pod/prometheus-adapter-8467ff79fd-szs4l Started container prometheus-adapter openshift-ovn-kubernetes 37m Normal Pulled pod/ovnkube-master-l7mb9 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 26.942375526s (26.942383296s including waiting) openshift-ovn-kubernetes 37m Normal Created pod/ovnkube-master-l7mb9 Created container northd openshift-ovn-kubernetes 37m Normal Started pod/ovnkube-master-l7mb9 Started container northd openshift-ovn-kubernetes 37m Normal Pulled pod/ovnkube-master-l7mb9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-monitoring 37m Normal Pulled pod/thanos-querier-6566ccfdd9-jmz7s Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" in 2.412916559s (2.412925508s including waiting) openshift-monitoring 37m Normal Created pod/thanos-querier-6566ccfdd9-jmz7s Created container thanos-query openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-0 Started container kube-rbac-proxy-federate openshift-cluster-csi-drivers 37m Normal Created pod/aws-ebs-csi-driver-node-ts9mc Created container csi-driver openshift-monitoring 37m Normal Started pod/thanos-querier-6566ccfdd9-jmz7s Started container thanos-query openshift-monitoring 37m Normal Pulled pod/thanos-querier-6566ccfdd9-jmz7s Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-etcd 37m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" in 11.267266499s (11.267277786s including waiting) openshift-monitoring 37m Normal Created pod/thanos-querier-6566ccfdd9-jmz7s Created container oauth-proxy openshift-monitoring 37m Normal Started pod/thanos-querier-6566ccfdd9-jmz7s Started container oauth-proxy openshift-etcd-operator 37m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd 
cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:3.055786ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.642671ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.239.132:2379]: context deadline exceeded}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:2.957328ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.267647ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.239.132:2379]: context deadline exceeded}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" openshift-monitoring 37m Normal Pulled pod/thanos-querier-6566ccfdd9-jmz7s Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Created pod/thanos-querier-6566ccfdd9-jmz7s Created container kube-rbac-proxy openshift-machine-config-operator 37m Normal Created pod/machine-config-daemon-zlzm2 Created container machine-config-daemon openshift-monitoring 37m Normal WaitForFirstConsumer persistentvolumeclaim/prometheus-data-prometheus-k8s-1 waiting for first consumer to be created before binding openshift-monitoring 37m Normal Provisioning persistentvolumeclaim/prometheus-data-prometheus-k8s-1 External provisioner is provisioning volume for claim "openshift-monitoring/prometheus-data-prometheus-k8s-1" openshift-monitoring 37m Normal ExternalProvisioning persistentvolumeclaim/prometheus-data-prometheus-k8s-1 waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator openshift-ovn-kubernetes 37m Normal Started pod/ovnkube-master-l7mb9 Started container nbdb openshift-monitoring 37m Normal ExternalProvisioning persistentvolumeclaim/prometheus-data-prometheus-k8s-0 waiting for a volume to be created, either by external provisioner "ebs.csi.aws.com" or manually created by system administrator openshift-monitoring 37m Normal Provisioning persistentvolumeclaim/prometheus-data-prometheus-k8s-0 External provisioner is provisioning volume for claim 
"openshift-monitoring/prometheus-data-prometheus-k8s-0" openshift-ovn-kubernetes 37m Normal Created pod/ovnkube-master-l7mb9 Created container nbdb openshift-etcd 37m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container setup openshift-etcd 37m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container setup openshift-cluster-csi-drivers 37m Normal Started pod/aws-ebs-csi-driver-node-ts9mc Started container csi-driver openshift-cluster-csi-drivers 37m Normal Pulling pod/aws-ebs-csi-driver-node-ts9mc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" openshift-monitoring 37m Normal WaitForFirstConsumer persistentvolumeclaim/prometheus-data-prometheus-k8s-0 waiting for first consumer to be created before binding openshift-dns 37m Normal Created pod/node-resolver-dqg6k Created container dns-node-resolver openshift-dns 37m Normal Started pod/node-resolver-dqg6k Started container dns-node-resolver openshift-multus 37m Normal Pulling pod/multus-additional-cni-plugins-j5mgq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" openshift-ovn-kubernetes 37m Normal Started pod/ovnkube-node-wsrzb Started container ovnkube-node openshift-ovn-kubernetes 37m Normal Created pod/ovnkube-node-wsrzb Created container ovnkube-node openshift-ovn-kubernetes 37m Normal Pulled pod/ovnkube-node-wsrzb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 37m Normal Started pod/ovnkube-node-wsrzb Started container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 37m Normal Created pod/ovnkube-node-wsrzb Created container kube-rbac-proxy-ovn-metrics openshift-image-registry 37m Normal Started pod/node-ca-bcbwn Started container node-ca openshift-kube-controller-manager 37m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager openshift-ovn-kubernetes 37m Normal Pulled pod/ovnkube-node-wsrzb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-image-registry 37m Normal Created pod/node-ca-bcbwn Created container node-ca openshift-ovn-kubernetes 37m Normal Started pod/ovnkube-node-wsrzb Started container kube-rbac-proxy openshift-ovn-kubernetes 37m Normal Created pod/ovnkube-node-wsrzb Created container kube-rbac-proxy openshift-ovn-kubernetes 37m Normal Pulled pod/ovnkube-node-wsrzb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Pulled pod/thanos-querier-6566ccfdd9-jmz7s Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" in 1.182173631s (1.182187253s including waiting) openshift-monitoring 37m Normal Created pod/thanos-querier-6566ccfdd9-jmz7s Created container prom-label-proxy openshift-ovn-kubernetes 37m Normal Started pod/ovnkube-node-wsrzb Started container ovn-acl-logging openshift-ovn-kubernetes 37m Normal Created pod/ovnkube-node-wsrzb Created container ovn-acl-logging openshift-monitoring 37m Normal Pulled pod/thanos-querier-6566ccfdd9-7cwhk Successfully pulled 
image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" in 958.404201ms (958.438319ms including waiting) openshift-kube-apiserver 37m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container setup openshift-kube-apiserver 37m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container setup openshift-kube-apiserver 37m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 37m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container wait-for-host-port openshift-kube-scheduler 37m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container wait-for-host-port openshift-kube-controller-manager 37m Normal Pulling pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" openshift-kube-apiserver 37m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver openshift-kube-apiserver 37m Normal Pulling pod/kube-apiserver-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" openshift-multus 37m Normal Created pod/multus-kkqdt Created container kube-multus openshift-cluster-node-tuning-operator 37m Normal Started pod/tuned-pbkvf Started container tuned openshift-multus 37m Normal Started pod/multus-kkqdt Started container kube-multus openshift-authentication-operator 37m Normal ConfigMapUpdated deployment/authentication-operator Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication:... 
openshift-monitoring 37m Normal SuccessfulCreate statefulset/prometheus-k8s create Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful openshift-monitoring 37m Normal SuccessfulCreate statefulset/prometheus-k8s create Claim prometheus-data-prometheus-k8s-1 Pod prometheus-k8s-1 in StatefulSet prometheus-k8s success openshift-multus 37m Normal Started pod/multus-additional-cni-plugins-g7hvw Started container egress-router-binary-copy openshift-multus 37m Normal Created pod/multus-additional-cni-plugins-g7hvw Created container egress-router-binary-copy openshift-monitoring 37m Normal SuccessfulCreate statefulset/prometheus-k8s create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful openshift-monitoring 37m Normal SuccessfulCreate statefulset/prometheus-k8s create Claim prometheus-data-prometheus-k8s-0 Pod prometheus-k8s-0 in StatefulSet prometheus-k8s success openshift-kube-controller-manager 37m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager openshift-kube-apiserver 37m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver openshift-monitoring 37m Normal Created pod/thanos-querier-6566ccfdd9-jmz7s Created container kube-rbac-proxy-rules openshift-user-workload-monitoring 37m Normal AddedInterface pod/thanos-ruler-user-workload-0 Add eth0 [10.128.2.8/23] from ovn-kubernetes openshift-monitoring 37m Normal Started pod/thanos-querier-6566ccfdd9-jmz7s Started container kube-rbac-proxy-metrics openshift-user-workload-monitoring 37m Normal SuccessfulCreate statefulset/thanos-ruler-user-workload create Pod thanos-ruler-user-workload-1 in StatefulSet thanos-ruler-user-workload successful openshift-user-workload-monitoring 37m Normal NoPods poddisruptionbudget/thanos-ruler-user-workload No matching pods found openshift-monitoring 37m Normal Killing pod/telemeter-client-5bd4dfdf7c-2982f Stopping container kube-rbac-proxy openshift-monitoring 37m Normal ProvisioningSucceeded persistentvolumeclaim/alertmanager-data-alertmanager-main-0 Successfully provisioned volume pvc-1b1f012e-b506-4373-bdfd-02e4e6dd5098 openshift-kube-apiserver-operator 37m Normal NodeTargetRevisionChanged deployment/kube-apiserver-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 9 to 11 because node ip-10-0-239-132.ec2.internal with revision 9 is the oldest not ready openshift-etcd 37m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-monitoring 37m Normal Created pod/thanos-querier-6566ccfdd9-jmz7s Created container kube-rbac-proxy-metrics openshift-user-workload-monitoring 37m Normal Pulled pod/thanos-ruler-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-multus 37m Normal Pulling pod/multus-additional-cni-plugins-g7hvw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" openshift-monitoring 37m Normal Pulled pod/thanos-querier-6566ccfdd9-jmz7s Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Started pod/thanos-querier-6566ccfdd9-jmz7s Started 
container kube-rbac-proxy-rules openshift-monitoring 37m Normal Started pod/thanos-querier-6566ccfdd9-jmz7s Started container prom-label-proxy openshift-user-workload-monitoring 37m Normal Created pod/thanos-ruler-user-workload-0 Created container thanos-ruler openshift-user-workload-monitoring 37m Normal Started pod/thanos-ruler-user-workload-0 Started container thanos-ruler openshift-monitoring 37m Normal ProvisioningSucceeded persistentvolumeclaim/alertmanager-data-alertmanager-main-1 Successfully provisioned volume pvc-88f77b77-893b-4d58-9c84-470da77b4262 openshift-monitoring 37m Normal Pulled pod/thanos-querier-6566ccfdd9-jmz7s Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Pulled pod/thanos-querier-6566ccfdd9-7cwhk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Created pod/thanos-querier-6566ccfdd9-7cwhk Created container prom-label-proxy openshift-monitoring 37m Normal Started pod/thanos-querier-6566ccfdd9-7cwhk Started container kube-rbac-proxy-metrics openshift-kube-scheduler 37m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-monitoring 37m Normal Created pod/thanos-querier-6566ccfdd9-7cwhk Created container kube-rbac-proxy-rules openshift-monitoring 37m Normal Started pod/thanos-querier-6566ccfdd9-7cwhk Started container kube-rbac-proxy-rules openshift-monitoring 37m Normal Pulled pod/thanos-querier-6566ccfdd9-7cwhk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Created pod/thanos-querier-6566ccfdd9-7cwhk Created container kube-rbac-proxy-metrics openshift-monitoring 37m Normal Started pod/thanos-querier-6566ccfdd9-7cwhk Started container prom-label-proxy openshift-user-workload-monitoring 37m Normal Pulled pod/thanos-ruler-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" already present on machine openshift-user-workload-monitoring 37m Normal Pulled pod/thanos-ruler-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-user-workload-monitoring 37m Normal Started pod/thanos-ruler-user-workload-0 Started container config-reloader openshift-cluster-csi-drivers 37m Normal Started pod/aws-ebs-csi-driver-node-ts9mc Started container csi-node-driver-registrar openshift-user-workload-monitoring 37m Normal Created pod/thanos-ruler-user-workload-0 Created container config-reloader openshift-cluster-csi-drivers 37m Normal Created pod/aws-ebs-csi-driver-node-ts9mc Created container csi-node-driver-registrar openshift-user-workload-monitoring 37m Normal Started pod/thanos-ruler-user-workload-0 Started container kube-rbac-proxy-metrics openshift-kube-scheduler 37m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler openshift-user-workload-monitoring 37m Normal Created 
pod/thanos-ruler-user-workload-0 Created container thanos-ruler-proxy openshift-kube-scheduler 37m Normal Pulling pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" openshift-cluster-csi-drivers 37m Normal Pulled pod/aws-ebs-csi-driver-node-ts9mc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" in 2.251791934s (2.251806732s including waiting) openshift-cluster-csi-drivers 37m Normal Pulling pod/aws-ebs-csi-driver-node-ts9mc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" openshift-ovn-kubernetes 37m Normal Pulled pod/ovnkube-master-l7mb9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-user-workload-monitoring 37m Normal Started pod/thanos-ruler-user-workload-0 Started container thanos-ruler-proxy openshift-etcd 37m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-ensure-env-vars openshift-etcd 37m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-ensure-env-vars openshift-user-workload-monitoring 37m Normal Pulled pod/thanos-ruler-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-controller-manager 37m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" in 2.315854516s (2.315863669s including waiting) openshift-kube-controller-manager 37m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container cluster-policy-controller openshift-ovn-kubernetes 37m Normal Started pod/ovnkube-master-l7mb9 Started container kube-rbac-proxy openshift-user-workload-monitoring 37m Normal Created pod/thanos-ruler-user-workload-0 Created container kube-rbac-proxy-metrics openshift-dns 37m Normal Pulled pod/dns-default-jf2vx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" in 4.321373806s (4.321383947s including waiting) openshift-machine-config-operator 37m Normal Pulled pod/machine-config-daemon-zlzm2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" in 2.626210941s (2.626226354s including waiting) openshift-kube-controller-manager 37m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container cluster-policy-controller openshift-machine-config-operator 37m Normal Created pod/machine-config-daemon-zlzm2 Created container oauth-proxy openshift-machine-config-operator 37m Normal Started pod/machine-config-daemon-zlzm2 Started container oauth-proxy openshift-kube-controller-manager 37m Normal Pulling pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" openshift-ovn-kubernetes 37m Normal Pulled pod/ovnkube-master-l7mb9 
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ovn-kubernetes 37m Normal Created pod/ovnkube-master-l7mb9 Created container kube-rbac-proxy openshift-kube-scheduler 37m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler openshift-network-diagnostics 37m Normal Pulled pod/network-check-target-w7m4g Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" in 4.991616418s (4.991628478s including waiting) openshift-monitoring 37m Normal ProvisioningSucceeded persistentvolumeclaim/prometheus-data-prometheus-k8s-0 Successfully provisioned volume pvc-7d81aae0-58a0-4040-982d-7f7c86fa6c88 openshift-monitoring 37m Normal ProvisioningSucceeded persistentvolumeclaim/prometheus-data-prometheus-k8s-1 Successfully provisioned volume pvc-2abeae08-0492-477e-a938-e36e3511d5b3 openshift-network-diagnostics 37m Normal Created pod/network-check-target-w7m4g Created container network-check-target-container openshift-authentication 37m Normal SuccessfulCreate replicaset/oauth-openshift-5c9d8ccbcc Created pod: oauth-openshift-5c9d8ccbcc-bkr8m openshift-kube-controller-manager 37m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-239-132.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-network-diagnostics 37m Normal Started pod/network-check-target-w7m4g Started container network-check-target-container openshift-ovn-kubernetes 37m Normal Pulled pod/ovnkube-node-wsrzb Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-dns 37m Normal Started pod/dns-default-jf2vx Started container kube-rbac-proxy openshift-dns 37m Normal Created pod/dns-default-jf2vx Created container kube-rbac-proxy openshift-etcd 37m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-authentication-operator 37m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6." 
openshift-dns 37m Normal Pulled pod/dns-default-jf2vx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-dns 37m Normal Started pod/dns-default-jf2vx Started container dns openshift-dns 37m Normal Created pod/dns-default-jf2vx Created container dns openshift-authentication 37m Normal SuccessfulDelete replicaset/oauth-openshift-58cb97bf44 Deleted pod: oauth-openshift-58cb97bf44-dtw8g openshift-authentication 37m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-58cb97bf44 to 0 from 1 openshift-monitoring 37m Normal Pulled pod/sre-dns-latency-exporter-4j7vx Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-base:latest" in 5.028949451s (5.028971888s including waiting) openshift-monitoring 37m Normal Created pod/sre-dns-latency-exporter-4j7vx Created container main openshift-multus 37m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" in 2.939786502s (2.939800322s including waiting) openshift-multus 37m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container routeoverride-cni openshift-monitoring 37m Normal Started pod/sre-dns-latency-exporter-4j7vx Started container main openshift-user-workload-monitoring 37m Normal AddedInterface pod/thanos-ruler-user-workload-1 Add eth0 [10.131.0.3/23] from ovn-kubernetes openshift-user-workload-monitoring 37m Normal Pulling pod/thanos-ruler-user-workload-1 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" openshift-authentication 37m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-5c9d8ccbcc to 1 from 0 openshift-multus 37m Normal Started pod/multus-additional-cni-plugins-j5mgq Started container routeoverride-cni openshift-monitoring 37m Normal Killing pod/thanos-querier-7bbf5b5dcd-7fpvv Stopping container kube-rbac-proxy openshift-monitoring 37m Normal SuccessfulAttachVolume pod/alertmanager-main-1 AttachVolume.Attach succeeded for volume "pvc-88f77b77-893b-4d58-9c84-470da77b4262" openshift-multus 37m Normal Pulling pod/multus-additional-cni-plugins-j5mgq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" openshift-monitoring 37m Normal ScalingReplicaSet deployment/thanos-querier Scaled down replica set thanos-querier-7bbf5b5dcd to 0 from 1 openshift-monitoring 37m Normal Killing pod/thanos-querier-7bbf5b5dcd-7fpvv Stopping container oauth-proxy openshift-etcd-operator 37m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded changed from False to True ("ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:2.957328ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.267647ms 
Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.239.132:2379]: context deadline exceeded}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy") openshift-monitoring 37m Normal SuccessfulDelete replicaset/thanos-querier-7bbf5b5dcd Deleted pod: thanos-querier-7bbf5b5dcd-7fpvv openshift-monitoring 37m Normal Killing pod/thanos-querier-7bbf5b5dcd-7fpvv Stopping container prom-label-proxy openshift-monitoring 37m Normal Killing pod/thanos-querier-7bbf5b5dcd-7fpvv Stopping container kube-rbac-proxy-metrics openshift-monitoring 37m Normal Killing pod/thanos-querier-7bbf5b5dcd-7fpvv Stopping container thanos-query openshift-monitoring 37m Normal Killing pod/thanos-querier-7bbf5b5dcd-7fpvv Stopping container kube-rbac-proxy-rules openshift-monitoring 37m Normal SuccessfulAttachVolume pod/alertmanager-main-0 AttachVolume.Attach succeeded for volume "pvc-1b1f012e-b506-4373-bdfd-02e4e6dd5098" openshift-user-workload-monitoring 37m Normal AddedInterface pod/prometheus-user-workload-1 Add eth0 [10.131.0.6/23] from ovn-kubernetes openshift-user-workload-monitoring 37m Normal Pulling pod/prometheus-user-workload-1 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" openshift-user-workload-monitoring 37m Normal Pulling pod/thanos-ruler-user-workload-1 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" openshift-marketplace 37m Normal Killing pod/certified-operators-dwz78 Stopping container registry-server openshift-user-workload-monitoring 37m Normal Started pod/thanos-ruler-user-workload-1 Started container thanos-ruler openshift-user-workload-monitoring 37m Normal Created pod/thanos-ruler-user-workload-1 Created container thanos-ruler openshift-monitoring 37m Warning BackOff pod/osd-cluster-ready-thb5j Back-off restarting failed container osd-cluster-ready in pod osd-cluster-ready-thb5j_openshift-monitoring(dbc64674-f7f6-4dc9-86d7-96ebf3fb2764) openshift-user-workload-monitoring 37m Normal Pulled pod/thanos-ruler-user-workload-1 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" in 1.62987753s (1.629889809s including waiting) openshift-monitoring 37m Normal SuccessfulAttachVolume pod/prometheus-k8s-1 AttachVolume.Attach succeeded for volume "pvc-2abeae08-0492-477e-a938-e36e3511d5b3" openshift-user-workload-monitoring 37m Normal Pulled pod/thanos-ruler-user-workload-1 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" in 1.186985854s (1.187001632s including waiting) openshift-multus 37m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container whereabouts-cni-bincopy openshift-monitoring 37m Normal SuccessfulAttachVolume pod/prometheus-k8s-0 AttachVolume.Attach succeeded for volume "pvc-7d81aae0-58a0-4040-982d-7f7c86fa6c88" default 37m Normal Uncordon node/ip-10-0-160-152.ec2.internal Update completed for config rendered-worker-c37c7a9e551f049d382df8406f11fe9b and node has been uncordoned openshift-monitoring 37m Normal Pulling pod/alertmanager-main-1 Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" default 37m Normal ConfigDriftMonitorStarted node/ip-10-0-160-152.ec2.internal Config Drift Monitor started, watching against rendered-worker-c37c7a9e551f049d382df8406f11fe9b openshift-monitoring 37m Normal AddedInterface pod/alertmanager-main-0 Add eth0 [10.130.2.14/23] from ovn-kubernetes openshift-monitoring 37m Normal AddedInterface pod/alertmanager-main-1 Add eth0 [10.129.2.14/23] from ovn-kubernetes openshift-user-workload-monitoring 37m Normal Pulled pod/thanos-ruler-user-workload-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-1 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" in 1.143717901s (1.143730998s including waiting) default 37m Normal NodeDone node/ip-10-0-160-152.ec2.internal Setting node ip-10-0-160-152.ec2.internal, currentConfig rendered-worker-c37c7a9e551f049d382df8406f11fe9b to Done openshift-authentication-operator 37m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 5, desired generation is 6." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" openshift-user-workload-monitoring 37m Normal Created pod/thanos-ruler-user-workload-1 Created container thanos-ruler-proxy openshift-user-workload-monitoring 37m Normal Created pod/thanos-ruler-user-workload-1 Created container config-reloader openshift-user-workload-monitoring 37m Normal Started pod/thanos-ruler-user-workload-1 Started container thanos-ruler-proxy openshift-user-workload-monitoring 37m Normal Started pod/thanos-ruler-user-workload-1 Started container config-reloader default 37m Normal SetDesiredConfig machineconfigpool/worker Targeted node ip-10-0-232-8.ec2.internal to config rendered-worker-c37c7a9e551f049d382df8406f11fe9b openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-1 Started container init-config-reloader openshift-multus 37m Normal Started pod/multus-additional-cni-plugins-j5mgq Started container whereabouts-cni-bincopy openshift-multus 37m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" in 2.18955813s (2.189571932s including waiting) openshift-user-workload-monitoring 37m Normal Pulled pod/thanos-ruler-user-workload-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-1 Created container init-config-reloader openshift-monitoring 37m Normal Pulled pod/alertmanager-main-1 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" in 1.5255099s (1.525521946s including waiting) openshift-user-workload-monitoring 37m Normal Created 
pod/thanos-ruler-user-workload-1 Created container kube-rbac-proxy-metrics openshift-multus 37m Normal AddedInterface pod/network-metrics-daemon-74bvc Add eth0 [10.131.0.4/23] from ovn-kubernetes openshift-user-workload-monitoring 37m Normal Started pod/thanos-ruler-user-workload-1 Started container kube-rbac-proxy-metrics openshift-monitoring 37m Normal Pulling pod/alertmanager-main-0 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" openshift-multus 37m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container whereabouts-cni openshift-multus 37m Normal Pulling pod/network-metrics-daemon-74bvc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" openshift-multus 37m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine openshift-user-workload-monitoring 37m Normal Pulling pod/prometheus-user-workload-1 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" openshift-multus 37m Normal Started pod/multus-additional-cni-plugins-j5mgq Started container whereabouts-cni openshift-monitoring 37m Normal Started pod/alertmanager-main-0 Started container kube-rbac-proxy-metric default 37m Normal ConfigDriftMonitorStopped node/ip-10-0-232-8.ec2.internal Config Drift Monitor stopped openshift-multus 37m Normal Started pod/network-metrics-daemon-74bvc Started container kube-rbac-proxy openshift-monitoring 37m Normal Created pod/alertmanager-main-0 Created container alertmanager-proxy default 37m Normal Cordon node/ip-10-0-232-8.ec2.internal Cordoned node to apply update openshift-monitoring 37m Normal Created pod/alertmanager-main-1 Created container config-reloader openshift-monitoring 37m Normal Pulled pod/alertmanager-main-1 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" in 730.642164ms (730.656516ms including waiting) openshift-monitoring 37m Normal Pulling pod/alertmanager-main-1 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" default 37m Normal Drain node/ip-10-0-232-8.ec2.internal Draining node to update config. 
openshift-monitoring 37m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-multus 37m Normal Created pod/network-metrics-daemon-74bvc Created container kube-rbac-proxy openshift-multus 37m Normal Created pod/multus-additional-cni-plugins-j5mgq Created container kube-multus-additional-cni-plugins openshift-multus 37m Normal Created pod/network-metrics-daemon-74bvc Created container network-metrics-daemon openshift-multus 37m Normal Pulled pod/multus-additional-cni-plugins-j5mgq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine openshift-monitoring 37m Normal Pulled pod/alertmanager-main-0 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" in 1.40118412s (1.401193731s including waiting) openshift-multus 37m Normal Started pod/network-metrics-daemon-74bvc Started container network-metrics-daemon openshift-monitoring 37m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 37m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 37m Normal Created pod/alertmanager-main-0 Created container config-reloader openshift-monitoring 37m Normal Started pod/alertmanager-main-0 Started container config-reloader openshift-monitoring 37m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" already present on machine openshift-multus 37m Normal Pulled pod/network-metrics-daemon-74bvc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" in 1.203537897s (1.203570228s including waiting) openshift-monitoring 37m Normal Started pod/alertmanager-main-0 Started container alertmanager-proxy openshift-monitoring 37m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Created pod/alertmanager-main-0 Created container kube-rbac-proxy openshift-monitoring 37m Normal Started pod/alertmanager-main-0 Started container kube-rbac-proxy openshift-monitoring 37m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Created pod/alertmanager-main-0 Created container kube-rbac-proxy-metric openshift-monitoring 37m Normal Started pod/alertmanager-main-1 Started container config-reloader openshift-multus 37m Normal Pulled pod/network-metrics-daemon-74bvc Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Pulled 
pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" already present on machine openshift-monitoring 37m Normal Started pod/alertmanager-main-0 Started container prom-label-proxy openshift-monitoring 37m Normal Started pod/alertmanager-main-0 Started container alertmanager openshift-monitoring 37m Normal Created pod/alertmanager-main-0 Created container alertmanager openshift-monitoring 37m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" already present on machine openshift-monitoring 37m Normal Started pod/alertmanager-main-1 Started container prom-label-proxy openshift-monitoring 37m Normal Started pod/alertmanager-main-1 Started container alertmanager-proxy openshift-monitoring 37m Normal Created pod/alertmanager-main-0 Created container prom-label-proxy openshift-monitoring 37m Normal Started pod/alertmanager-main-1 Started container kube-rbac-proxy-metric default 37m Normal NodeSchedulable node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal status is now: NodeSchedulable openshift-monitoring 37m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Created pod/alertmanager-main-1 Created container alertmanager-proxy openshift-monitoring 37m Normal Started pod/alertmanager-main-1 Started container kube-rbac-proxy openshift-monitoring 37m Normal Created pod/alertmanager-main-1 Created container kube-rbac-proxy openshift-monitoring 37m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Created pod/alertmanager-main-1 Created container kube-rbac-proxy-metric openshift-monitoring 37m Normal Created pod/alertmanager-main-1 Created container prom-label-proxy openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 37m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" already present on machine openshift-monitoring 37m Normal Started pod/prometheus-k8s-0 Started container init-config-reloader openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-1 Created container config-reloader openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 37m Normal AddedInterface pod/prometheus-k8s-1 Add eth0 [10.130.2.15/23] from ovn-kubernetes openshift-monitoring 37m Normal Pulled 
pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 37m Normal Created pod/prometheus-k8s-1 Created container init-config-reloader openshift-monitoring 37m Normal Started pod/prometheus-k8s-1 Started container init-config-reloader openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-1 Started container prometheus openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-1 Started container kube-rbac-proxy-metrics openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-1 Created container prometheus openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-1 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" in 2.847284883s (2.847292539s including waiting) openshift-user-workload-monitoring 37m Normal Pulled pod/prometheus-user-workload-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" already present on machine openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-1 Created container thanos-sidecar openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-1 Started container thanos-sidecar openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-1 Created container kube-rbac-proxy-metrics openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-1 Created container kube-rbac-proxy-federate openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-1 Started container kube-rbac-proxy-federate openshift-monitoring 37m Normal AddedInterface pod/prometheus-k8s-0 Add eth0 [10.129.2.15/23] from ovn-kubernetes openshift-ingress-canary 37m Normal AddedInterface pod/ingress-canary-bn5dn Add eth0 [10.131.0.10/23] from ovn-kubernetes openshift-monitoring 37m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 37m Normal Created pod/prometheus-k8s-0 Created container init-config-reloader openshift-monitoring 37m Normal Started pod/alertmanager-main-1 Started container alertmanager openshift-monitoring 37m Normal Created pod/alertmanager-main-1 Created container alertmanager openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-1 Started container config-reloader openshift-ingress-canary 37m Normal Pulling pod/ingress-canary-bn5dn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" openshift-monitoring 37m Warning FailedMount pod/prometheus-k8s-1 MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" : configmap "prometheus-k8s-rulefiles-0" not found openshift-user-workload-monitoring 37m Normal Created pod/prometheus-user-workload-1 Created container kube-rbac-proxy-thanos openshift-monitoring 37m Normal 
Pulling pod/prometheus-k8s-1 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" openshift-user-workload-monitoring 37m Normal Started pod/prometheus-user-workload-1 Started container kube-rbac-proxy-thanos openshift-monitoring 37m Normal Pulling pod/prometheus-k8s-0 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" openshift-monitoring 37m Normal SuccessfulCreate replicaset/configure-alertmanager-operator-7b9b57dbdd Created pod: configure-alertmanager-operator-7b9b57dbdd-fjt5w openshift-monitoring 37m Normal Killing pod/configure-alertmanager-operator-7b9b57dbdd-xgqtw Stopping container configure-alertmanager-operator openshift-user-workload-monitoring 37m Normal Killing pod/thanos-ruler-user-workload-0 Stopping container config-reloader openshift-monitoring 37m Normal Killing pod/configure-alertmanager-operator-registry-w7zdk Stopping container registry-server openshift-kube-storage-version-migrator 37m Normal Killing pod/migrator-579f5cd9c5-flz72 Stopping container migrator openshift-ingress-canary 37m Normal Started pod/ingress-canary-bn5dn Started container serve-healthcheck-canary openshift-console 37m Normal Killing pod/downloads-fcdb597fd-grdr7 Stopping container download-server openshift-monitoring 37m Normal SuccessfulCreate replicaset/token-refresher-5dbcf88876 Created pod: token-refresher-5dbcf88876-hfhjz openshift-ingress-canary 37m Normal Pulled pod/ingress-canary-bn5dn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" in 1.666142953s (1.666154837s including waiting) openshift-console 37m Normal SuccessfulCreate replicaset/downloads-fcdb597fd Created pod: downloads-fcdb597fd-sbcw8 openshift-kube-apiserver-operator 37m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-11-ip-10-0-239-132.ec2.internal -n openshift-kube-apiserver because it was missing openshift-ingress-canary 37m Normal Created pod/ingress-canary-bn5dn Created container serve-healthcheck-canary openshift-monitoring 37m Normal Killing pod/sre-stuck-ebs-vols-1-ws5wv Stopping container main openshift-user-workload-monitoring 37m Normal Killing pod/thanos-ruler-user-workload-0 Stopping container thanos-ruler-proxy openshift-kube-storage-version-migrator 37m Normal SuccessfulCreate replicaset/migrator-579f5cd9c5 Created pod: migrator-579f5cd9c5-qkfvb openshift-console 37m Normal Pulling pod/downloads-fcdb597fd-sbcw8 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec5351e220112a5b70451310b563175ae713c4d2864765c861b969730515a21b" openshift-network-diagnostics 37m Normal Created pod/network-check-source-677bdb7d9-2tx2m Created container check-endpoints openshift-network-diagnostics 37m Normal Pulled pod/network-check-source-677bdb7d9-2tx2m Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" already present on machine openshift-network-diagnostics 37m Normal AddedInterface pod/network-check-source-677bdb7d9-2tx2m Add eth0 [10.131.0.14/23] from ovn-kubernetes openshift-monitoring 37m Normal AddedInterface pod/token-refresher-5dbcf88876-hfhjz Add eth0 [10.131.0.8/23] from ovn-kubernetes openshift-monitoring 37m Normal Pulling pod/token-refresher-5dbcf88876-hfhjz Pulling image 
"quay.io/observatorium/token-refresher@sha256:6ce9b80cd1d907cb6c9ed2a18612f386f7503257772d1d88155a4a2e6773fd00" openshift-monitoring 37m Normal SuccessfulCreate replicationcontroller/sre-stuck-ebs-vols-1 Created pod: sre-stuck-ebs-vols-1-fzwz8 openshift-kube-storage-version-migrator 37m Normal Pulling pod/migrator-579f5cd9c5-qkfvb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39ef66439265e28941d847694107b349dff04d9cc64f0b713882e1895ea2acb9" openshift-monitoring 37m Normal AddedInterface pod/configure-alertmanager-operator-7b9b57dbdd-fjt5w Add eth0 [10.131.0.7/23] from ovn-kubernetes openshift-monitoring 37m Normal Pulling pod/configure-alertmanager-operator-7b9b57dbdd-fjt5w Pulling image "quay.io/app-sre/configure-alertmanager-operator@sha256:6ecbda84a8bf59a69d77329a32bf63939018d4ea4899a6c9fe4bde1adbace56e" openshift-network-diagnostics 37m Normal Started pod/network-check-source-677bdb7d9-2tx2m Started container check-endpoints openshift-monitoring 37m Normal Pulling pod/sre-stuck-ebs-vols-1-fzwz8 Pulling image "quay.io/app-sre/managed-prometheus-exporter-initcontainer:latest" openshift-monitoring 37m Normal AddedInterface pod/sre-stuck-ebs-vols-1-fzwz8 Add eth0 [10.131.0.13/23] from ovn-kubernetes openshift-network-diagnostics 37m Normal SuccessfulCreate replicaset/network-check-source-677bdb7d9 Created pod: network-check-source-677bdb7d9-2tx2m openshift-monitoring 37m Normal SuccessfulCreate job/osd-cluster-ready Created pod: osd-cluster-ready-pzbtd openshift-console 37m Normal AddedInterface pod/downloads-fcdb597fd-sbcw8 Add eth0 [10.131.0.11/23] from ovn-kubernetes openshift-kube-storage-version-migrator-operator 37m Normal OperatorStatusChanged deployment/kube-storage-version-migrator-operator Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from False to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods"),Available changed from True to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment") openshift-kube-storage-version-migrator 37m Normal AddedInterface pod/migrator-579f5cd9c5-qkfvb Add eth0 [10.131.0.12/23] from ovn-kubernetes openshift-etcd-operator 37m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:2.957328ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.267647ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.239.132:2379]: context deadline exceeded}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 
healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:867.776µs Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:2.003817ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.239.132:2379]: context deadline exceeded}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" openshift-user-workload-monitoring 37m Normal SuccessfulCreate statefulset/thanos-ruler-user-workload create Pod thanos-ruler-user-workload-0 in StatefulSet thanos-ruler-user-workload successful openshift-monitoring 37m Normal Created pod/prometheus-k8s-1 Created container kube-rbac-proxy openshift-monitoring 37m Normal Started pod/prometheus-k8s-0 Started container prometheus openshift-monitoring 37m Normal Created pod/prometheus-k8s-0 Created container prometheus openshift-dns 37m Warning TopologyAwareHintsDisabled service/dns-default Insufficient Node information: allocatable CPU or zone not specified on one or more nodes, addressType: IPv4 openshift-monitoring 37m Normal Pulled pod/prometheus-k8s-0 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" in 3.520473871s (3.52048326s including waiting) openshift-monitoring 37m Normal Started pod/prometheus-k8s-1 Started container thanos-sidecar openshift-monitoring 37m Normal Pulled pod/prometheus-k8s-1 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" in 2.800457097s (2.800471763s including waiting) openshift-monitoring 37m Normal Created pod/prometheus-k8s-1 Created container prometheus openshift-monitoring 37m Normal Started pod/prometheus-k8s-1 Started container prometheus openshift-monitoring 37m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 37m Normal Created pod/prometheus-k8s-1 Created container config-reloader openshift-monitoring 37m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-network-diagnostics 37m Warning FastControllerResync node/ip-10-0-160-152.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-monitoring 37m Normal Started pod/prometheus-k8s-1 Started container config-reloader openshift-monitoring 37m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" already present on machine openshift-monitoring 37m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine 
openshift-network-diagnostics 37m Warning FastControllerResync node/ip-10-0-160-152.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling
openshift-monitoring 37m Normal Started pod/prometheus-k8s-1 Started container prometheus-proxy
openshift-monitoring 37m Normal Created pod/prometheus-k8s-1 Created container prometheus-proxy
openshift-monitoring 37m Normal Created pod/prometheus-k8s-1 Created container thanos-sidecar
openshift-monitoring 37m Warning ComponentUnhealthy clusterserviceversion/configure-alertmanager-operator.v0.1.516-bdea4ea installing: waiting for deployment configure-alertmanager-operator to become ready: deployment "configure-alertmanager-operator" not available: Deployment does not have minimum availability.
openshift-monitoring 37m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine
openshift-monitoring 37m Normal AddedInterface pod/osd-cluster-ready-pzbtd Add eth0 [10.131.0.15/23] from ovn-kubernetes
openshift-monitoring 37m Normal Started pod/prometheus-k8s-0 Started container config-reloader
openshift-monitoring 37m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" already present on machine
openshift-monitoring 37m Normal Created pod/prometheus-k8s-0 Created container thanos-sidecar
openshift-monitoring 37m Normal Started pod/prometheus-k8s-0 Started container kube-rbac-proxy
openshift-monitoring 37m Normal Started pod/prometheus-k8s-0 Started container thanos-sidecar
openshift-monitoring 37m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-monitoring 37m Normal NeedsReinstall clusterserviceversion/configure-alertmanager-operator.v0.1.516-bdea4ea installing: waiting for deployment configure-alertmanager-operator to become ready: deployment "configure-alertmanager-operator" not available: Deployment does not have minimum availability.
openshift-monitoring 37m Normal Started pod/prometheus-k8s-1 Started container kube-rbac-proxy
openshift-monitoring 37m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-monitoring 37m Normal Created pod/prometheus-k8s-1 Created container kube-rbac-proxy-thanos
openshift-monitoring 37m Normal Started pod/prometheus-k8s-1 Started container kube-rbac-proxy-thanos
openshift-monitoring 37m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine
openshift-monitoring 37m Normal Created pod/prometheus-k8s-0 Created container prometheus-proxy
openshift-monitoring 37m Normal Started pod/prometheus-k8s-0 Started container prometheus-proxy
openshift-monitoring 37m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-user-workload-monitoring 37m Normal Killing pod/thanos-ruler-user-workload-0 Stopping container thanos-ruler
openshift-network-diagnostics 37m Normal Killing pod/network-check-source-677bdb7d9-m9sqk Stopping container check-endpoints
openshift-monitoring 37m Normal AllRequirementsMet clusterserviceversion/configure-alertmanager-operator.v0.1.516-bdea4ea all requirements found, attempting install
openshift-monitoring 37m Normal InstallSucceeded clusterserviceversion/configure-alertmanager-operator.v0.1.516-bdea4ea waiting for install components to report healthy
openshift-user-workload-monitoring 37m Normal Killing pod/thanos-ruler-user-workload-0 Stopping container kube-rbac-proxy-metrics
openshift-monitoring 37m Normal Created pod/prometheus-k8s-0 Created container kube-rbac-proxy
openshift-monitoring 37m Normal Started pod/prometheus-k8s-0 Started container kube-rbac-proxy-thanos
openshift-monitoring 37m Normal Created pod/prometheus-k8s-0 Created container kube-rbac-proxy-thanos
openshift-monitoring 37m Normal Pulling pod/osd-cluster-ready-pzbtd Pulling image "quay.io/app-sre/osd-cluster-ready@sha256:f70aa8033565fc73c006acb9199845242b1f729cb5a407b5174cf22656b4e2d5"
openshift-monitoring 37m Normal Killing pod/token-refresher-5dbcf88876-cbn8j Stopping container token-refresher
openshift-monitoring 37m Normal Created pod/prometheus-k8s-0 Created container config-reloader
openshift-monitoring 37m Normal InstallWaiting clusterserviceversion/configure-alertmanager-operator.v0.1.516-bdea4ea installing: waiting for deployment configure-alertmanager-operator to become ready: waiting for spec update of deployment "configure-alertmanager-operator" to be observed...
openshift-monitoring 37m Normal InstallWaiting clusterserviceversion/configure-alertmanager-operator.v0.1.516-bdea4ea installing: waiting for deployment configure-alertmanager-operator to become ready: deployment "configure-alertmanager-operator" not available: Deployment does not have minimum availability.
default 37m Normal NodeNotSchedulable node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal status is now: NodeNotSchedulable openshift-kube-storage-version-migrator 37m Normal Pulled pod/migrator-579f5cd9c5-qkfvb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:39ef66439265e28941d847694107b349dff04d9cc64f0b713882e1895ea2acb9" in 4.56864029s (4.568655371s including waiting) openshift-monitoring 37m Normal Pulled pod/token-refresher-5dbcf88876-hfhjz Successfully pulled image "quay.io/observatorium/token-refresher@sha256:6ce9b80cd1d907cb6c9ed2a18612f386f7503257772d1d88155a4a2e6773fd00" in 5.100288984s (5.100304545s including waiting) openshift-monitoring 37m Normal Pulled pod/osd-cluster-ready-pzbtd Successfully pulled image "quay.io/app-sre/osd-cluster-ready@sha256:f70aa8033565fc73c006acb9199845242b1f729cb5a407b5174cf22656b4e2d5" in 4.075501946s (4.075514558s including waiting) openshift-monitoring 37m Normal Pulled pod/configure-alertmanager-operator-7b9b57dbdd-fjt5w Successfully pulled image "quay.io/app-sre/configure-alertmanager-operator@sha256:6ecbda84a8bf59a69d77329a32bf63939018d4ea4899a6c9fe4bde1adbace56e" in 7.29913332s (7.299148753s including waiting) openshift-monitoring 37m Normal Pulled pod/sre-stuck-ebs-vols-1-fzwz8 Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-initcontainer:latest" in 7.321178201s (7.321185406s including waiting) openshift-monitoring 37m Normal Started pod/configure-alertmanager-operator-7b9b57dbdd-fjt5w Started container configure-alertmanager-operator openshift-kube-storage-version-migrator 37m Normal Created pod/migrator-579f5cd9c5-qkfvb Created container migrator openshift-monitoring 37m Normal Started pod/sre-stuck-ebs-vols-1-fzwz8 Started container setupcreds openshift-kube-storage-version-migrator 37m Normal Started pod/migrator-579f5cd9c5-qkfvb Started container migrator openshift-monitoring 37m Normal Created pod/sre-stuck-ebs-vols-1-fzwz8 Created container setupcreds openshift-monitoring 37m Normal Created pod/token-refresher-5dbcf88876-hfhjz Created container token-refresher openshift-monitoring 37m Normal Started pod/token-refresher-5dbcf88876-hfhjz Started container token-refresher openshift-ovn-kubernetes 37m Normal Created pod/ovnkube-master-l7mb9 Created container sbdb openshift-monitoring 37m Normal AddedInterface pod/configure-alertmanager-operator-registry-xvjrx Add eth0 [10.131.0.16/23] from ovn-kubernetes openshift-monitoring 37m Normal Pulling pod/configure-alertmanager-operator-registry-xvjrx Pulling image "quay.io/app-sre/configure-alertmanager-operator-registry@sha256:4cd6cdcb961b519e306ff2ea3c276ef4037edb429e14df405bc3ccbed8531ac9" openshift-monitoring 37m Normal Created pod/configure-alertmanager-operator-7b9b57dbdd-fjt5w Created container configure-alertmanager-operator openshift-kube-apiserver 37m Normal AddedInterface pod/revision-pruner-11-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.5/23] from ovn-kubernetes openshift-multus 37m Normal AddedInterface pod/network-metrics-daemon-7vpmf Add eth0 [10.129.0.4/23] from ovn-kubernetes openshift-ovn-kubernetes 37m Normal Started pod/ovnkube-node-wsrzb Started container ovn-controller openshift-ovn-kubernetes 37m Normal Created pod/ovnkube-node-wsrzb Created container ovn-controller openshift-network-diagnostics 37m Normal AddedInterface pod/network-check-target-v92f6 Add eth0 [10.129.0.3/23] from ovn-kubernetes openshift-etcd 37m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 37m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd-resources-copy openshift-ovn-kubernetes 37m Normal Started pod/ovnkube-master-l7mb9 Started container sbdb openshift-dns 37m Normal AddedInterface pod/dns-default-tnhzk Add eth0 [10.129.0.14/23] from ovn-kubernetes openshift-security 37m Normal AddedInterface pod/audit-exporter-7bwkj Add eth0 [10.129.0.35/23] from ovn-kubernetes openshift-monitoring 37m Normal AddedInterface pod/sre-dns-latency-exporter-fvnpq Add eth0 [10.129.0.33/23] from ovn-kubernetes openshift-validation-webhook 37m Normal AddedInterface pod/validation-webhook-dt8g2 Add eth0 [10.129.0.32/23] from ovn-kubernetes openshift-monitoring 37m Normal InstallSucceeded clusterserviceversion/configure-alertmanager-operator.v0.1.516-bdea4ea install strategy completed with no errors openshift-security 37m Normal Pulling pod/audit-exporter-7bwkj Pulling image "quay.io/app-sre/splunk-audit-exporter@sha256:bbca8dfd77d15c6dde3495985c1a75354ad79339ecba6820e7ceef2282422964" openshift-kube-storage-version-migrator-operator 37m Normal OperatorStatusChanged deployment/kube-storage-version-migrator-operator Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") openshift-etcd 37m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-resources-copy openshift-kube-apiserver 37m Normal Pulled pod/installer-11-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 37m Normal AddedInterface pod/installer-11-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.6/23] from ovn-kubernetes openshift-kube-controller-manager 37m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 37m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-user-workload-monitoring 37m Normal Killing pod/prometheus-user-workload-0 Stopping container kube-rbac-proxy-thanos openshift-kube-controller-manager 37m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager-cert-syncer openshift-user-workload-monitoring 37m Normal Killing pod/prometheus-user-workload-0 Stopping container prometheus openshift-kube-scheduler 37m Warning FastControllerResync pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-monitoring 37m Normal Pulling pod/sre-dns-latency-exporter-fvnpq Pulling image "quay.io/app-sre/managed-prometheus-exporter-base:latest" openshift-kube-controller-manager 37m Normal Killing pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Stopping container cluster-policy-controller openshift-kube-controller-manager 37m Normal Killing pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Stopping 
container kube-controller-manager-recovery-controller openshift-kube-controller-manager 37m Normal Killing pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Stopping container kube-controller-manager-cert-syncer openshift-validation-webhook 37m Normal Pulling pod/validation-webhook-dt8g2 Pulling image "quay.io/app-sre/managed-cluster-validating-webhooks@sha256:3b13c3a89da30c5fbfaf7529ec3175dd43053c508d4bd09c79ef369d53ecc023" openshift-multus 37m Normal Started pod/multus-additional-cni-plugins-g7hvw Started container cni-plugins openshift-cluster-csi-drivers 37m Normal Started pod/aws-ebs-csi-driver-node-ts9mc Started container csi-liveness-probe openshift-kube-apiserver 37m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver-cert-regeneration-controller openshift-etcd 37m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd openshift-cluster-csi-drivers 37m Normal Created pod/aws-ebs-csi-driver-node-ts9mc Created container csi-liveness-probe openshift-kube-scheduler 37m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler-recovery-controller openshift-etcd 37m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-network-diagnostics 37m Normal Pulling pod/network-check-target-v92f6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" openshift-kube-apiserver 37m Normal Started pod/revision-pruner-11-ip-10-0-239-132.ec2.internal Started container pruner openshift-kube-apiserver 37m Warning FastControllerResync pod/kube-apiserver-ip-10-0-239-132.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver 37m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 37m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver-cert-syncer openshift-etcd 37m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcdctl openshift-kube-controller-manager 37m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager-cert-syncer openshift-kube-controller-manager 37m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" in 20.247697312s (20.247705925s including waiting) openshift-etcd 37m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcdctl openshift-kube-apiserver 37m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver-cert-syncer openshift-kube-scheduler 37m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-multus 37m Normal Pulling pod/network-metrics-daemon-7vpmf Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" openshift-kube-controller-manager 37m Normal Killing pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Stopping container kube-controller-manager openshift-kube-apiserver 37m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" in 22.548505452s (22.548518247s including waiting) openshift-cluster-csi-drivers 37m Normal Pulled pod/aws-ebs-csi-driver-node-ts9mc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" in 20.204139369s (20.204152274s including waiting) openshift-kube-apiserver 37m Normal Pulled pod/revision-pruner-11-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-multus 37m Normal Created pod/multus-additional-cni-plugins-g7hvw Created container cni-plugins openshift-kube-apiserver 37m Normal Created pod/revision-pruner-11-ip-10-0-239-132.ec2.internal Created container pruner openshift-kube-scheduler 37m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler-cert-syncer openshift-kube-scheduler 37m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler-cert-syncer openshift-kube-scheduler 37m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" in 20.457132781s (20.457142153s including waiting) openshift-multus 37m Normal Pulled pod/multus-additional-cni-plugins-g7hvw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" in 21.847443449s (21.847448603s including waiting) openshift-dns 37m Normal Pulling pod/dns-default-tnhzk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" openshift-kube-controller-manager 37m Normal StaticPodInstallerCompleted pod/installer-7-ip-10-0-197-197.ec2.internal Successfully installed revision 7 openshift-monitoring 37m Normal Pulled pod/configure-alertmanager-operator-registry-xvjrx Successfully pulled image "quay.io/app-sre/configure-alertmanager-operator-registry@sha256:4cd6cdcb961b519e306ff2ea3c276ef4037edb429e14df405bc3ccbed8531ac9" in 2.063630182s (2.063640008s including waiting) openshift-monitoring 37m Normal Created pod/configure-alertmanager-operator-registry-xvjrx Created container registry-server openshift-monitoring 37m Normal Started pod/configure-alertmanager-operator-registry-xvjrx Started container registry-server openshift-user-workload-monitoring 37m Normal Killing pod/prometheus-user-workload-0 Stopping container kube-rbac-proxy-metrics openshift-kube-controller-manager 37m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager-recovery-controller openshift-user-workload-monitoring 37m Normal Killing pod/prometheus-user-workload-0 Stopping container kube-rbac-proxy-federate openshift-kube-apiserver 37m Normal 
Started pod/installer-11-ip-10-0-239-132.ec2.internal Started container installer openshift-etcd 37m Normal Started pod/etcd-ip-10-0-239-132.ec2.internal Started container etcd openshift-kube-apiserver 37m Normal Created pod/installer-11-ip-10-0-239-132.ec2.internal Created container installer openshift-etcd 37m Normal Pulled pod/etcd-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-kube-controller-manager 37m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager-recovery-controller openshift-kube-apiserver 37m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-controller-manager 37m Normal LeaderElection lease/cert-recovery-controller-lock ip-10-0-239-132_63a6f244-3fdb-4eef-bb54-ed215353b19a became leader openshift-kube-apiserver 37m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver-cert-regeneration-controller openshift-kube-controller-manager 37m Normal LeaderElection configmap/cert-recovery-controller-lock ip-10-0-239-132_63a6f244-3fdb-4eef-bb54-ed215353b19a became leader openshift-multus 37m Normal Pulling pod/multus-additional-cni-plugins-g7hvw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" openshift-ovn-kubernetes 37m Normal Pulled pod/ovnkube-master-l7mb9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-monitoring 37m Normal Pulled pod/sre-stuck-ebs-vols-1-fzwz8 Container image "quay.io/app-sre/managed-prometheus-exporter-base:latest" already present on machine openshift-kube-scheduler 37m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler-recovery-controller default 37m Warning ResolutionFailed namespace/openshift-ocm-agent-operator constraints not satisfiable: subscription ocm-agent-operator exists, no operators found from catalog ocm-agent-operator-registry in namespace openshift-ocm-agent-operator referenced by subscription ocm-agent-operator openshift-kube-controller-manager-operator 37m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"cluster-policy-controller\" is waiting: ContainerCreating: 
\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: 6599 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:34.416890 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:35:34.417056 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:34.417301 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:35:41.838906 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:41.839876 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:35:58.851648 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:58.852010 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:36:07.940809 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:36:07.941172 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:36:24.874150 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:36:24.874518 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:36:33.958691 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:36:33.959423 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-console 37m Normal Created pod/downloads-fcdb597fd-sbcw8 Created container download-server openshift-monitoring 37m Normal Started 
pod/sre-stuck-ebs-vols-1-fzwz8 Started container main openshift-monitoring 37m Normal Created pod/sre-stuck-ebs-vols-1-fzwz8 Created container main openshift-console 37m Normal Pulled pod/downloads-fcdb597fd-sbcw8 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec5351e220112a5b70451310b563175ae713c4d2864765c861b969730515a21b" in 14.82864596s (14.828658841s including waiting) openshift-console 37m Normal Started pod/downloads-fcdb597fd-sbcw8 Started container download-server openshift-ovn-kubernetes 37m Normal Pulled pod/ovnkube-master-l7mb9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-etcd-operator 37m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:867.776µs Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:2.003817ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.239.132:2379]: context deadline exceeded}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:3.632696ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:3.134401ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.239.132:2379]: context deadline exceeded}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" openshift-security 37m Normal Pulled pod/audit-exporter-7bwkj Successfully pulled image "quay.io/app-sre/splunk-audit-exporter@sha256:bbca8dfd77d15c6dde3495985c1a75354ad79339ecba6820e7ceef2282422964" in 7.30773586s (7.30774324s including waiting) default 37m Warning ResolutionFailed namespace/openshift-osd-metrics constraints not satisfiable: subscription osd-metrics-exporter exists, no operators found from catalog osd-metrics-exporter-registry in namespace openshift-osd-metrics referenced by subscription osd-metrics-exporter openshift-ovn-kubernetes 37m Normal Started pod/ovnkube-master-l7mb9 Started container ovnkube-master 
openshift-ovn-kubernetes 37m Normal Created pod/ovnkube-master-l7mb9 Created container ovnkube-master openshift-etcd 37m Normal Created pod/etcd-ip-10-0-239-132.ec2.internal Created container etcd-metrics openshift-network-diagnostics 37m Normal Pulled pod/network-check-target-v92f6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" in 7.09221163s (7.09222418s including waiting) openshift-console 37m Warning ProbeError pod/downloads-fcdb597fd-sbcw8 Readiness probe error: Get "http://10.131.0.11:8080/": dial tcp 10.131.0.11:8080: connect: connection refused... openshift-dns 37m Normal Pulled pod/dns-default-tnhzk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" in 6.960723228s (6.960729977s including waiting) openshift-kube-controller-manager-operator 37m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: 6599 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:34.416890 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:35:34.417056 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:34.417301 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:35:41.838906 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:41.839876 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:35:58.851648 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:58.852010 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:36:07.940809 1 certsync_controller.go:66] Syncing configmaps: 
[{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:36:07.941172 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:36:24.874150 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:36:24.874518 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:36:33.958691 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:36:33.959423 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: 6599 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:34.416890 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:35:34.417056 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:34.417301 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:35:41.838906 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:41.839876 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:35:58.851648 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:58.852010 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:36:07.940809 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:36:07.941172 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:36:24.874150 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:36:24.874518 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:36:33.958691 1 certsync_controller.go:66] Syncing 
configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:36:33.959423 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-console 37m Warning Unhealthy pod/downloads-fcdb597fd-sbcw8 Readiness probe failed: Get "http://10.131.0.11:8080/": dial tcp 10.131.0.11:8080: connect: connection refused openshift-validation-webhook 37m Normal Pulled pod/validation-webhook-dt8g2 Successfully pulled image "quay.io/app-sre/managed-cluster-validating-webhooks@sha256:3b13c3a89da30c5fbfaf7529ec3175dd43053c508d4bd09c79ef369d53ecc023" in 7.283939831s (7.283949998s including waiting) default 37m Normal Uncordon node/ip-10-0-239-132.ec2.internal Update completed for config rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 and node has been uncordoned openshift-kube-apiserver 37m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 37m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver-insecure-readyz default 37m Normal NodeDone node/ip-10-0-239-132.ec2.internal Setting node ip-10-0-239-132.ec2.internal, currentConfig rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 to Done default 37m Normal ConfigDriftMonitorStarted node/ip-10-0-239-132.ec2.internal Config Drift Monitor started, watching against rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 openshift-kube-apiserver 37m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver-insecure-readyz openshift-multus 37m Normal Pulled pod/network-metrics-daemon-7vpmf Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" in 6.960732191s (6.960738906s including waiting) openshift-multus 37m Normal Pulled pod/multus-additional-cni-plugins-g7hvw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" in 6.249488263s (6.249496644s including waiting) openshift-network-diagnostics 37m Normal Created pod/network-check-target-v92f6 Created container network-check-target-container openshift-network-diagnostics 37m Normal Started pod/network-check-target-v92f6 Started container network-check-target-container openshift-multus 37m Normal Created pod/multus-additional-cni-plugins-g7hvw Created container bond-cni-plugin openshift-monitoring 37m Normal Started pod/sre-dns-latency-exporter-fvnpq Started container main openshift-kube-apiserver 37m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver-check-endpoints openshift-multus 37m Normal Started pod/network-metrics-daemon-7vpmf Started container network-metrics-daemon openshift-multus 37m Normal Pulled pod/network-metrics-daemon-7vpmf Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-apiserver 37m Normal Created 
pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver-check-endpoints openshift-validation-webhook 37m Normal Created pod/validation-webhook-dt8g2 Created container webhooks openshift-dns 37m Normal Pulled pod/dns-default-tnhzk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-dns 37m Normal Started pod/dns-default-tnhzk Started container dns openshift-dns 37m Normal Created pod/dns-default-tnhzk Created container dns openshift-validation-webhook 37m Normal Started pod/validation-webhook-dt8g2 Started container webhooks openshift-ovn-kubernetes 37m Normal Started pod/ovnkube-master-l7mb9 Started container ovn-dbchecker default 37m Warning ResolutionFailed namespace/openshift-managed-upgrade-operator constraints not satisfiable: subscription managed-upgrade-operator exists, no operators found from catalog managed-upgrade-operator-catalog in namespace openshift-managed-upgrade-operator referenced by subscription managed-upgrade-operator openshift-monitoring 37m Normal Pulled pod/sre-dns-latency-exporter-fvnpq Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-base:latest" in 8.539114704s (8.539122093s including waiting) openshift-monitoring 37m Normal Created pod/sre-dns-latency-exporter-fvnpq Created container main openshift-ovn-kubernetes 37m Normal Created pod/ovnkube-master-l7mb9 Created container ovn-dbchecker openshift-multus 37m Normal Created pod/network-metrics-daemon-7vpmf Created container network-metrics-daemon openshift-multus 37m Normal Started pod/multus-additional-cni-plugins-g7hvw Started container bond-cni-plugin openshift-kube-controller-manager 37m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling openshift-dns 37m Normal Started pod/dns-default-tnhzk Started container kube-rbac-proxy openshift-security 37m Normal Created pod/audit-exporter-7bwkj Created container audit-exporter default 37m Normal NodeSchedulable node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal status is now: NodeSchedulable openshift-multus 37m Normal Created pod/network-metrics-daemon-7vpmf Created container kube-rbac-proxy openshift-kube-apiserver 37m Warning FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-multus 37m Normal Started pod/network-metrics-daemon-7vpmf Started container kube-rbac-proxy openshift-kube-controller-manager 37m Normal LeaderElection lease/cluster-policy-controller-lock ip-10-0-140-6_cabf1ff7-8dee-4105-9c37-2b67d0b7ff3a became leader openshift-kube-controller-manager 37m Normal LeaderElection configmap/cluster-policy-controller-lock ip-10-0-140-6_cabf1ff7-8dee-4105-9c37-2b67d0b7ff3a became leader openshift-kube-controller-manager 37m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling openshift-multus 37m Normal Pulling pod/multus-additional-cni-plugins-g7hvw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" openshift-kube-apiserver 37m Warning 
FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-dns 37m Normal Created pod/dns-default-tnhzk Created container kube-rbac-proxy openshift-security 37m Normal Started pod/audit-exporter-7bwkj Started container audit-exporter openshift-multus 37m Normal Started pod/multus-additional-cni-plugins-g7hvw Started container routeoverride-cni openshift-multus 37m Normal Created pod/multus-additional-cni-plugins-g7hvw Created container routeoverride-cni openshift-multus 37m Normal Pulled pod/multus-additional-cni-plugins-g7hvw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" in 794.302897ms (794.315556ms including waiting) openshift-multus 37m Normal Pulling pod/multus-additional-cni-plugins-g7hvw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" openshift-marketplace 37m Normal AddedInterface pod/redhat-operators-rwpx4 Add eth0 [10.128.0.11/23] from ovn-kubernetes openshift-etcd-operator 37m Warning UnhealthyEtcdMember deployment/etcd-operator unhealthy members: ip-10-0-239-132.ec2.internal openshift-kube-controller-manager 37m Warning ProbeError pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:10257/healthz": dial tcp 10.0.197.197:10257: connect: connection refused... openshift-kube-controller-manager 37m Warning Unhealthy pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:10257/healthz": dial tcp 10.0.197.197:10257: connect: connection refused openshift-marketplace 37m Normal Pulling pod/redhat-operators-rwpx4 Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.12" default 37m Normal SetDesiredConfig machineconfigpool/master Targeted node ip-10-0-140-6.ec2.internal to config rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 default 37m Warning ResolutionFailed namespace/openshift-rbac-permissions constraints not satisfiable: subscription rbac-permissions-operator exists, no operators found from catalog rbac-permissions-operator-registry in namespace openshift-rbac-permissions referenced by subscription rbac-permissions-operator default 37m Normal AnnotationChange machineconfigpool/master Node ip-10-0-140-6.ec2.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 openshift-multus 37m Normal Pulled pod/multus-additional-cni-plugins-g7hvw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" in 1.886745848s (1.886756541s including waiting) openshift-kube-controller-manager 37m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" already present on machine openshift-etcd-operator 37m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy") openshift-etcd-operator 37m Normal OperatorStatusChanged deployment/etcd-operator Status for 
clusteroperator/etcd changed: Degraded message changed from "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:3.632696ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:3.134401ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.239.132:2379]: context deadline exceeded}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" openshift-kube-controller-manager 37m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container kube-controller-manager openshift-kube-controller-manager 37m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container cluster-policy-controller openshift-multus 37m Normal Created pod/multus-additional-cni-plugins-g7hvw Created container whereabouts-cni-bincopy openshift-kube-controller-manager 37m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container kube-controller-manager openshift-multus 37m Normal Started pod/multus-additional-cni-plugins-g7hvw Started container whereabouts-cni-bincopy openshift-kube-controller-manager 37m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-controller-manager 37m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container cluster-policy-controller openshift-kube-controller-manager 37m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container kube-controller-manager-cert-syncer openshift-kube-controller-manager 37m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 37m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 37m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container kube-controller-manager-recovery-controller openshift-kube-controller-manager 37m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine default 37m Normal ConfigDriftMonitorStopped 
node/ip-10-0-140-6.ec2.internal Config Drift Monitor stopped default 37m Normal Drain node/ip-10-0-140-6.ec2.internal Draining node to update config. openshift-kube-controller-manager 37m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container kube-controller-manager-cert-syncer default 37m Normal AnnotationChange machineconfigpool/master Node ip-10-0-140-6.ec2.internal now has machineconfiguration.openshift.io/state=Working openshift-kube-controller-manager-operator 37m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: 6599 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:34.416890 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:35:34.417056 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:34.417301 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:35:41.838906 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:41.839876 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:35:58.851648 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:35:58.852010 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:36:07.940809 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:36:07.941172 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:36:24.874150 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:36:24.874518 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:36:33.958691 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:36:33.959423 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: 
\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-kube-controller-manager 37m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-197-197.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope default 37m Normal Cordon node/ip-10-0-140-6.ec2.internal Cordoned node to apply update openshift-kube-controller-manager 37m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container kube-controller-manager-recovery-controller kube-system 37m Normal LeaderElection lease/kube-controller-manager ip-10-0-140-6_fd59d8c3-957f-46fd-85d7-47124c2b9bec became leader kube-system 37m Normal LeaderElection configmap/kube-controller-manager ip-10-0-140-6_fd59d8c3-957f-46fd-85d7-47124c2b9bec became leader default 37m Normal RegisteredNode node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal event: Registered Node ip-10-0-239-132.ec2.internal in Controller default 37m Normal RegisteredNode node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal event: Registered Node ip-10-0-187-75.ec2.internal in Controller openshift-kube-controller-manager-operator 37m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal\nNodeControllerDegraded: All master nodes are ready" default 37m Normal RegisteredNode node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal event: Registered Node ip-10-0-160-152.ec2.internal in Controller default 37m Normal RegisteredNode node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal event: Registered Node ip-10-0-140-6.ec2.internal in Controller default 37m Normal RegisteredNode node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal event: Registered Node ip-10-0-197-197.ec2.internal in Controller default 37m Normal RegisteredNode node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal event: Registered Node ip-10-0-232-8.ec2.internal in Controller openshift-ingress 37m Normal EnsuringLoadBalancer service/router-default Ensuring load balancer default 37m Normal RegisteredNode node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal event: Registered Node ip-10-0-195-121.ec2.internal in Controller openshift-monitoring 37m Normal Killing pod/prometheus-adapter-5b77f96bd4-vm8xp Stopping container prometheus-adapter openshift-monitoring 37m Normal ScalingReplicaSet deployment/prometheus-adapter Scaled down replica set prometheus-adapter-5b77f96bd4 to 0 from 1 openshift-monitoring 37m Normal SuccessfulDelete replicaset/prometheus-adapter-5b77f96bd4 Deleted pod: prometheus-adapter-5b77f96bd4-vm8xp openshift-kube-scheduler-operator 37m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/revision-status-9 -n openshift-kube-scheduler because it was missing openshift-user-workload-monitoring 37m Normal SuccessfulCreate statefulset/prometheus-user-workload create Pod prometheus-user-workload-0 in StatefulSet prometheus-user-workload 
successful openshift-authentication 37m Normal SuccessfulCreate replicaset/oauth-openshift-6cd75d67b9 Created pod: oauth-openshift-6cd75d67b9-27tx4 openshift-cluster-csi-drivers 37m Normal SuccessfulCreate replicaset/aws-ebs-csi-driver-operator-667bfc499d Created pod: aws-ebs-csi-driver-operator-667bfc499d-7fmff openshift-monitoring 37m Warning Unhealthy pod/prometheus-adapter-5b77f96bd4-vm8xp Readiness probe failed: Get "https://10.128.2.11:6443/readyz": dial tcp 10.128.2.11:6443: connect: connection refused openshift-authentication 37m Normal Pulling pod/oauth-openshift-5c9d8ccbcc-bkr8m Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" openshift-cloud-controller-manager-operator 37m Normal SuccessfulCreate replicaset/cluster-cloud-controller-manager-operator-5dcbbcf757 Created pod: cluster-cloud-controller-manager-operator-5dcbbcf757-zfxcs openshift-user-workload-monitoring 37m Normal Killing pod/prometheus-operator-6cbc5c4f45-dt4j5 Stopping container kube-rbac-proxy openshift-user-workload-monitoring 37m Normal SuccessfulCreate replicaset/prometheus-operator-6cbc5c4f45 Created pod: prometheus-operator-6cbc5c4f45-t95ht openshift-cluster-csi-drivers 37m Normal Killing pod/aws-ebs-csi-driver-operator-667bfc499d-pjs9d Stopping container aws-ebs-csi-driver-operator openshift-monitoring 37m Warning ProbeError pod/prometheus-adapter-5b77f96bd4-vm8xp Readiness probe error: Get "https://10.128.2.11:6443/readyz": dial tcp 10.128.2.11:6443: connect: connection refused... openshift-ingress 37m Normal EnsuredLoadBalancer service/router-default Ensured load balancer openshift-authentication 37m Normal AddedInterface pod/oauth-openshift-5c9d8ccbcc-bkr8m Add eth0 [10.129.0.7/23] from ovn-kubernetes openshift-user-workload-monitoring 37m Normal Killing pod/prometheus-operator-6cbc5c4f45-dt4j5 Stopping container prometheus-operator openshift-cluster-storage-operator 37m Normal SuccessfulCreate replicaset/csi-snapshot-webhook-75476bf784 Created pod: csi-snapshot-webhook-75476bf784-zlxp4 openshift-cloud-controller-manager-operator 37m Normal Killing pod/cluster-cloud-controller-manager-operator-5dcbbcf757-wggmm Stopping container cluster-cloud-controller-manager openshift-kube-scheduler-operator 37m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-pod-9 -n openshift-kube-scheduler because it was missing openshift-cluster-storage-operator 37m Normal Killing pod/csi-snapshot-controller-f58c44499-k4v7v Stopping container snapshot-controller openshift-cluster-csi-drivers 37m Normal Killing pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Stopping container csi-liveness-probe openshift-cluster-storage-operator 37m Normal SuccessfulCreate replicaset/csi-snapshot-controller-f58c44499 Created pod: csi-snapshot-controller-f58c44499-xkth2 openshift-cluster-csi-drivers 37m Normal Killing pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Stopping container csi-driver openshift-authentication 37m Normal Killing pod/oauth-openshift-6cd75d67b9-btb4m Stopping container oauth-openshift openshift-cloud-controller-manager-operator 37m Normal Killing pod/cluster-cloud-controller-manager-operator-5dcbbcf757-wggmm Stopping container config-sync-controllers openshift-cluster-csi-drivers 37m Normal Killing pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Stopping container snapshotter-kube-rbac-proxy openshift-cluster-storage-operator 37m Normal Killing 
pod/csi-snapshot-webhook-75476bf784-7z4rl Stopping container webhook openshift-cluster-csi-drivers 37m Normal Killing pod/aws-ebs-csi-driver-controller-5ff7cf9694-8cbv9 Stopping container csi-snapshotter openshift-cluster-samples-operator 37m Normal Killing pod/cluster-samples-operator-bf9b9498c-mkgcp Stopping container cluster-samples-operator openshift-cluster-samples-operator 37m Normal Killing pod/cluster-samples-operator-bf9b9498c-mkgcp Stopping container cluster-samples-operator-watch openshift-cluster-samples-operator 37m Normal SuccessfulCreate replicaset/cluster-samples-operator-bf9b9498c Created pod: cluster-samples-operator-bf9b9498c-gn68l openshift-cloud-credential-operator 37m Normal Killing pod/pod-identity-webhook-b645775d7-24tr2 Stopping container pod-identity-webhook openshift-kube-controller-manager-operator 37m Normal RevisionTriggered deployment/kube-controller-manager-operator new revision 8 triggered by "secret/localhost-recovery-client-token has changed" openshift-cluster-csi-drivers 37m Normal SuccessfulCreate replicaset/aws-ebs-csi-driver-controller-5ff7cf9694 Created pod: aws-ebs-csi-driver-controller-5ff7cf9694-bg92z openshift-cluster-storage-operator 37m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSProgressing: Waiting for Deployment to deploy pods" to "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAWSEBSProgressing: Waiting for Deployment to deploy pods" default 37m Warning ResolutionFailed namespace/openshift-monitoring failed to populate resolver cache from source configure-alertmanager-operator-registry/openshift-monitoring: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 172.30.195.176:50051: connect: connection refused" openshift-cluster-csi-drivers 37m Normal AddedInterface pod/aws-ebs-csi-driver-operator-667bfc499d-7fmff Add eth0 [10.129.0.12/23] from ovn-kubernetes openshift-cluster-csi-drivers 37m Normal Pulling pod/aws-ebs-csi-driver-operator-667bfc499d-7fmff Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:189279778e9140f0b47f3e8c58ac6262cf1dbe573ae1a651e8a6e675b7d7b369" openshift-cluster-storage-operator 37m Normal AddedInterface pod/csi-snapshot-controller-f58c44499-xkth2 Add eth0 [10.129.0.9/23] from ovn-kubernetes openshift-console-operator 37m Normal AddedInterface pod/console-operator-57cbc6b88f-b2ttj Add eth0 [10.129.0.16/23] from ovn-kubernetes openshift-cluster-storage-operator 37m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" openshift-controller-manager 37m Normal Killing pod/controller-manager-c5c84d6f9-wrj8l Stopping container controller-manager openshift-cluster-version 37m Normal Pulling pod/cluster-version-operator-5d74b9d6f5-nclrf Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" openshift-console-operator 37m Normal Killing pod/console-operator-57cbc6b88f-tbq55 Stopping container console-operator openshift-cluster-version 37m 
Normal SuccessfulCreate replicaset/cluster-version-operator-5d74b9d6f5 Created pod: cluster-version-operator-5d74b9d6f5-nclrf openshift-console 37m Normal Killing pod/downloads-fcdb597fd-tr9zh Stopping container download-server openshift-console-operator 37m Normal Killing pod/console-operator-57cbc6b88f-tbq55 Stopping container conversion-webhook-server openshift-console-operator 37m Normal SuccessfulCreate replicaset/console-operator-57cbc6b88f Created pod: console-operator-57cbc6b88f-b2ttj openshift-user-workload-monitoring 37m Normal Pulling pod/prometheus-operator-6cbc5c4f45-t95ht Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0c9dc9888697e244d61cd89f8fe5a61dcb09dc100889be738db21b2fc5bbf7" openshift-etcd-operator 37m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-user-workload-monitoring 37m Normal AddedInterface pod/prometheus-operator-6cbc5c4f45-t95ht Add eth0 [10.129.0.8/23] from ovn-kubernetes openshift-cluster-storage-operator 37m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing changed from False to True ("AWSEBSProgressing: Waiting for Deployment to deploy pods") openshift-console 37m Normal SuccessfulCreate replicaset/downloads-fcdb597fd Created pod: downloads-fcdb597fd-vfqwm openshift-cluster-version 37m Normal Killing pod/cluster-version-operator-5d74b9d6f5-689xc Stopping container cluster-version-operator openshift-kube-scheduler-operator 37m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/config-9 -n openshift-kube-scheduler because it was missing openshift-cluster-csi-drivers 37m Normal AddedInterface pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Add eth0 [10.129.0.13/23] from ovn-kubernetes openshift-controller-manager 37m Normal SuccessfulCreate replicaset/controller-manager-c5c84d6f9 Created pod: controller-manager-c5c84d6f9-tll5c openshift-cluster-storage-operator 37m Normal AddedInterface pod/csi-snapshot-webhook-75476bf784-zlxp4 Add eth0 [10.129.0.10/23] from ovn-kubernetes openshift-console 37m Normal SuccessfulCreate replicaset/console-65cc7f8b45 Created pod: console-65cc7f8b45-4xp2z openshift-cloud-credential-operator 37m Normal SuccessfulCreate replicaset/pod-identity-webhook-b645775d7 Created pod: pod-identity-webhook-b645775d7-jb5tx openshift-cloud-credential-operator 37m Normal Pulling pod/pod-identity-webhook-b645775d7-jb5tx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e248571068c87bc5b2f69bd4fc2bc3934d8bcd2b2a7fecadc754a30e06ac592" openshift-cluster-csi-drivers 37m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Created container driver-kube-rbac-proxy openshift-cloud-credential-operator 37m Normal AddedInterface pod/pod-identity-webhook-b645775d7-jb5tx Add eth0 [10.129.0.15/23] from ovn-kubernetes openshift-cluster-csi-drivers 37m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-csi-drivers 37m Normal Started 
pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Started container csi-driver openshift-backplane-srep 37m Normal AddedInterface pod/osd-delete-ownerrefs-serviceaccounts-27990037-cdprm Add eth0 [10.131.0.17/23] from ovn-kubernetes openshift-etcd-operator 37m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-239-132.ec2.internal is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" openshift-marketplace 37m Normal AddedInterface pod/community-operators-kp7pr Add eth0 [10.129.0.17/23] from ovn-kubernetes openshift-marketplace 37m Normal Pulling pod/community-operators-kp7pr Pulling image "registry.redhat.io/redhat/community-operator-index:v4.12" openshift-authentication 37m Normal Started pod/oauth-openshift-5c9d8ccbcc-bkr8m Started container oauth-openshift openshift-cluster-storage-operator 37m Normal Pulling pod/csi-snapshot-webhook-75476bf784-zlxp4 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7a32310238d69d56d35be8f7de426bdbedf96ff73edcd198698ac174c6d3c34" openshift-authentication 37m Normal Created pod/oauth-openshift-5c9d8ccbcc-bkr8m Created container oauth-openshift openshift-authentication 37m Normal Pulled pod/oauth-openshift-5c9d8ccbcc-bkr8m Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" in 1.704382869s (1.704395234s including waiting) openshift-backplane-srep 37m Normal SuccessfulCreate job/osd-delete-ownerrefs-serviceaccounts-27990037 Created pod: osd-delete-ownerrefs-serviceaccounts-27990037-cdprm openshift-console 37m Normal AddedInterface pod/downloads-fcdb597fd-vfqwm Add eth0 [10.129.0.19/23] from ovn-kubernetes openshift-console 37m Normal Pulling pod/downloads-fcdb597fd-vfqwm Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec5351e220112a5b70451310b563175ae713c4d2864765c861b969730515a21b" openshift-cluster-csi-drivers 37m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Created container csi-driver openshift-backplane-srep 37m Normal SuccessfulCreate cronjob/osd-delete-ownerrefs-serviceaccounts Created job osd-delete-ownerrefs-serviceaccounts-27990037 openshift-backplane-srep 37m Normal Pulling pod/osd-delete-ownerrefs-serviceaccounts-27990037-cdprm Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" openshift-cluster-csi-drivers 37m Normal Pulling pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77941761aca0cba770d56fcf4d213512b4dd959aa49d3f50c9da02a7aee8d62e" openshift-controller-manager 37m Normal Pulling pod/controller-manager-c5c84d6f9-tll5c Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" openshift-controller-manager 37m Normal AddedInterface pod/controller-manager-c5c84d6f9-tll5c Add eth0 [10.129.0.20/23] from ovn-kubernetes openshift-console-operator 37m Normal Pulling pod/console-operator-57cbc6b88f-b2ttj Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6dd6ba37d430e9e8e248b4c5911ef0903f8bd8d05451ed65eeb1d9d2b3c42e4" openshift-kube-controller-manager-operator 37m Normal ConfigMapCreated 
deployment/kube-controller-manager-operator Created ConfigMap/revision-status-8 -n openshift-kube-controller-manager because it was missing openshift-cluster-csi-drivers 37m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Started container driver-kube-rbac-proxy openshift-cluster-csi-drivers 37m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" already present on machine openshift-cluster-storage-operator 37m Normal Pulling pod/csi-snapshot-controller-f58c44499-xkth2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f6985210e2dec2b96cd8cd1dc6965ce2710b23b2c515d9ae67a694245bd41082" openshift-cluster-samples-operator 37m Warning FailedMount pod/cluster-samples-operator-bf9b9498c-gn68l MountVolume.SetUp failed for volume "kube-api-access-dqz7b" : failed to sync configmap cache: timed out waiting for the condition openshift-kube-scheduler-operator 37m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/serviceaccount-ca-9 -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 37m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal -n openshift-kube-scheduler because it was missing openshift-console 37m Normal Pulling pod/console-65cc7f8b45-4xp2z Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" openshift-console 37m Normal AddedInterface pod/console-65cc7f8b45-4xp2z Add eth0 [10.129.0.18/23] from ovn-kubernetes openshift-cloud-controller-manager-operator 37m Warning FailedMount pod/cluster-cloud-controller-manager-operator-5dcbbcf757-zfxcs MountVolume.SetUp failed for volume "kube-api-access-xg4jv" : failed to sync configmap cache: timed out waiting for the condition openshift-authentication 37m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-5c9d8ccbcc to 2 from 1 openshift-kube-apiserver-operator 37m Normal PodCreated deployment/kube-apiserver-operator Created Pod/kube-apiserver-guard-ip-10-0-239-132.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 37m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/revision-status-12 -n openshift-kube-apiserver because it was missing openshift-cloud-controller-manager-operator 37m Normal Pulling pod/cluster-cloud-controller-manager-operator-5dcbbcf757-zfxcs Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77345c48a82b167f67364ffd41788160b5d06e746946d9ea67191fa18cf34806" openshift-kube-controller-manager-operator 37m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-manager-pod-8 -n openshift-kube-controller-manager because it was missing openshift-cluster-samples-operator 37m Normal AddedInterface pod/cluster-samples-operator-bf9b9498c-gn68l Add eth0 [10.129.0.11/23] from ovn-kubernetes openshift-kube-scheduler 37m Normal AddedInterface pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.21/23] from ovn-kubernetes openshift-authentication 37m Normal SuccessfulCreate replicaset/oauth-openshift-5c9d8ccbcc Created pod: oauth-openshift-5c9d8ccbcc-vkchb openshift-kube-scheduler-operator 37m Normal ConfigMapCreated 
deployment/openshift-kube-scheduler-operator Created ConfigMap/scheduler-kubeconfig-9 -n openshift-kube-scheduler because it was missing openshift-authentication 37m Normal SuccessfulDelete replicaset/oauth-openshift-6cd75d67b9 Deleted pod: oauth-openshift-6cd75d67b9-27tx4 openshift-authentication 37m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-6cd75d67b9 to 0 from 1 openshift-kube-controller-manager-operator 37m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GuardControllerDegraded: Missing PodIP in operand kube-controller-manager-ip-10-0-197-197.ec2.internal on node ip-10-0-197-197.ec2.internal\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-backplane-srep 37m Normal Started pod/osd-delete-ownerrefs-serviceaccounts-27990037-cdprm Started container osd-delete-ownerrefs-serviceaccounts openshift-backplane-srep 37m Normal Pulled pod/osd-delete-ownerrefs-serviceaccounts-27990037-cdprm Successfully pulled image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" in 1.346668169s (1.346684842s including waiting) openshift-kube-controller-manager-operator 37m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/config-8 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler 37m Normal Pulled pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-cluster-samples-operator 37m Normal Pulling pod/cluster-samples-operator-bf9b9498c-gn68l Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3066c35df5c02d6013ee2944ff5d100cdf41fb0d25076ce846d6e094b36d45c" openshift-backplane-srep 37m Normal Created pod/osd-delete-ownerrefs-serviceaccounts-27990037-cdprm Created container osd-delete-ownerrefs-serviceaccounts openshift-kube-scheduler-operator 37m Normal ConfigMapCreated deployment/openshift-kube-scheduler-operator Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-9 -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 37m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/serving-cert-9 -n openshift-kube-scheduler because it was missing default 37m Normal NodeNotSchedulable node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal status is now: NodeNotSchedulable openshift-kube-apiserver-operator 37m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-pod-12 -n openshift-kube-apiserver because it was missing openshift-etcd-operator 37m Normal PodCreated deployment/etcd-operator Created Pod/etcd-guard-ip-10-0-239-132.ec2.internal -n openshift-etcd because it was missing openshift-kube-controller-manager-operator 37m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/cluster-policy-controller-config-8 -n openshift-kube-controller-manager because it was missing default 37m Warning ResolutionFailed namespace/openshift-managed-upgrade-operator constraints not satisfiable: no operators found from catalog managed-upgrade-operator-catalog in namespace openshift-managed-upgrade-operator referenced by subscription managed-upgrade-operator, subscription 
managed-upgrade-operator exists openshift-kube-controller-manager-operator 37m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 37m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 9" openshift-kube-scheduler-operator 37m Normal RevisionCreate deployment/openshift-kube-scheduler-operator Revision 8 created because secret/localhost-recovery-client-token has changed openshift-kube-controller-manager-operator 37m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/controller-manager-kubeconfig-8 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 37m Normal SecretCreated deployment/openshift-kube-scheduler-operator Created Secret/localhost-recovery-client-token-9 -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 37m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 9" to "NodeControllerDegraded: All master nodes are ready" openshift-machine-api 37m Normal SuccessfulCreate replicaset/machine-api-controllers-674d9f54f6 Created pod: machine-api-controllers-674d9f54f6-h4xz6 openshift-machine-api 37m Normal Killing pod/machine-api-controllers-674d9f54f6-r6g9g Stopping container kube-rbac-proxy-machineset-mtrc openshift-kube-scheduler-operator 37m Normal RevisionTriggered deployment/openshift-kube-scheduler-operator new revision 9 triggered by "secret/localhost-recovery-client-token has changed" openshift-machine-api 37m Normal Killing pod/machine-api-controllers-674d9f54f6-r6g9g Stopping container kube-rbac-proxy-machine-mtrc openshift-kube-apiserver 37m Normal AddedInterface pod/kube-apiserver-guard-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.22/23] from ovn-kubernetes openshift-machine-api 37m Normal Killing pod/machine-api-controllers-674d9f54f6-r6g9g Stopping container kube-rbac-proxy-mhc-mtrc openshift-machine-api 37m Normal Killing pod/machine-api-controllers-674d9f54f6-r6g9g Stopping container machineset-controller openshift-kube-controller-manager-operator 37m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-197-197.ec2.internal" from revision 6 to 7 because static pod is ready openshift-machine-config-operator 37m Normal Killing pod/machine-config-controller-7f488c778d-c8svb Stopping container oauth-proxy openshift-machine-config-operator 37m Normal SuccessfulCreate replicaset/machine-config-controller-7f488c778d Created pod: machine-config-controller-7f488c778d-vjl7t openshift-kube-apiserver-operator 37m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/config-12 -n openshift-kube-apiserver because it was missing openshift-machine-config-operator 37m Normal Killing pod/machine-config-controller-7f488c778d-c8svb Stopping container machine-config-controller openshift-kube-controller-manager-operator 37m Normal OperatorStatusChanged 
deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7" openshift-user-workload-monitoring 36m Normal Pulled pod/prometheus-operator-6cbc5c4f45-t95ht Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0c9dc9888697e244d61cd89f8fe5a61dcb09dc100889be738db21b2fc5bbf7" in 7.533910729s (7.533933262s including waiting) openshift-authentication-operator 36m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-74455c7c5-tqs7k pod)" openshift-kube-apiserver 36m Normal Pulled pod/kube-apiserver-guard-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver-operator 36m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-12 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler 36m Normal Created pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal Created container guard openshift-kube-controller-manager-operator 36m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/kube-controller-cert-syncer-kubeconfig-8 -n openshift-kube-controller-manager because it was missing default 36m Warning ResolutionFailed namespace/openshift-velero constraints not satisfiable: no operators found from catalog managed-velero-operator-registry in namespace openshift-velero referenced by subscription managed-velero-operator, subscription managed-velero-operator exists openshift-etcd 36m Normal AddedInterface pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.15/23] from ovn-kubernetes openshift-backplane-srep 36m Normal Completed job/osd-delete-ownerrefs-serviceaccounts-27990037 Job completed openshift-backplane-srep 36m Normal SawCompletedJob cronjob/osd-delete-ownerrefs-serviceaccounts Saw completed job: osd-delete-ownerrefs-serviceaccounts-27990037, status: Complete openshift-kube-scheduler 36m Normal Started pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal Started container guard openshift-etcd 36m Normal Pulled pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-kube-scheduler-operator 36m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/revision-pruner-9-ip-10-0-239-132.ec2.internal -n openshift-kube-scheduler because it was missing openshift-etcd 36m Normal Created pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Created container pruner openshift-etcd 36m Normal Started pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Started container pruner 
openshift-kube-controller-manager-operator 36m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/serviceaccount-ca-8 -n openshift-kube-controller-manager because it was missing openshift-cluster-storage-operator 36m Normal Pulled pod/csi-snapshot-webhook-75476bf784-zlxp4 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7a32310238d69d56d35be8f7de426bdbedf96ff73edcd198698ac174c6d3c34" in 8.680470513s (8.680482851s including waiting) openshift-kube-apiserver-operator 36m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/oauth-metadata-12 -n openshift-kube-apiserver because it was missing openshift-cluster-csi-drivers 36m Normal Pulled pod/aws-ebs-csi-driver-operator-667bfc499d-7fmff Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:189279778e9140f0b47f3e8c58ac6262cf1dbe573ae1a651e8a6e675b7d7b369" in 10.271117608s (10.271146341s including waiting) openshift-kube-scheduler-operator 36m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 7; 0 nodes have achieved new revision 8" to "NodeInstallerProgressing: 2 nodes are at revision 7; 1 nodes are at revision 8; 0 nodes have achieved new revision 9",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7; 0 nodes have achieved new revision 8" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 7; 1 nodes are at revision 8; 0 nodes have achieved new revision 9" openshift-kube-controller-manager-operator 36m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/service-ca-8 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler-operator 36m Normal NodeCurrentRevisionChanged deployment/openshift-kube-scheduler-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 7 to 8 because static pod is ready openshift-cluster-storage-operator 36m Normal Pulled pod/csi-snapshot-controller-f58c44499-xkth2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f6985210e2dec2b96cd8cd1dc6965ce2710b23b2c515d9ae67a694245bd41082" in 9.679017198s (9.679025042s including waiting) openshift-cloud-credential-operator 36m Normal Pulled pod/pod-identity-webhook-b645775d7-jb5tx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e248571068c87bc5b2f69bd4fc2bc3934d8bcd2b2a7fecadc754a30e06ac592" in 9.663694569s (9.663704188s including waiting) openshift-console-operator 36m Normal Pulled pod/console-operator-57cbc6b88f-b2ttj Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6dd6ba37d430e9e8e248b4c5911ef0903f8bd8d05451ed65eeb1d9d2b3c42e4" in 9.676891698s (9.67689904s including waiting) openshift-route-controller-manager 36m Normal AddedInterface pod/route-controller-manager-6594987c6f-dcrpz Add eth0 [10.129.0.24/23] from ovn-kubernetes openshift-oauth-apiserver 36m Normal AddedInterface pod/apiserver-74455c7c5-tqs7k Add eth0 [10.129.0.23/23] from ovn-kubernetes openshift-kube-apiserver-operator 36m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/bound-sa-token-signing-certs-12 -n openshift-kube-apiserver because it was missing openshift-controller-manager 36m Normal LeaderElection lease/openshift-master-controllers controller-manager-c5c84d6f9-qxhsq 
became leader openshift-controller-manager 36m Normal LeaderElection configmap/openshift-master-controllers controller-manager-c5c84d6f9-qxhsq became leader default 36m Warning ResolutionFailed namespace/openshift-must-gather-operator constraints not satisfiable: subscription must-gather-operator exists, no operators found from catalog must-gather-operator-registry in namespace openshift-must-gather-operator referenced by subscription must-gather-operator openshift-kube-controller-manager-operator 36m Normal ConfigMapCreated deployment/kube-controller-manager-operator Created ConfigMap/recycler-config-8 -n openshift-kube-controller-manager because it was missing openshift-oauth-apiserver 36m Normal Pulling pod/apiserver-74455c7c5-tqs7k Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" openshift-etcd 36m Normal AddedInterface pod/etcd-guard-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.25/23] from ovn-kubernetes openshift-apiserver 36m Normal AddedInterface pod/apiserver-5f568869f-8zhkc Add eth0 [10.129.0.26/23] from ovn-kubernetes openshift-route-controller-manager 36m Normal Pulling pod/route-controller-manager-6594987c6f-dcrpz Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" openshift-kube-controller-manager-operator 36m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/service-account-private-key-8 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler 36m Normal Pulled pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-apiserver-operator 36m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/etcd-serving-ca-12 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler 36m Normal Started pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Started container pruner openshift-kube-scheduler 36m Normal Created pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Created container pruner openshift-kube-scheduler 36m Normal AddedInterface pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.16/23] from ovn-kubernetes default 36m Warning ResolutionFailed namespace/openshift-splunk-forwarder-operator constraints not satisfiable: no operators found from catalog splunk-forwarder-operator-catalog in namespace openshift-splunk-forwarder-operator referenced by subscription openshift-splunk-forwarder-operator, subscription openshift-splunk-forwarder-operator exists openshift-kube-controller-manager-operator 36m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/serving-cert-8 -n openshift-kube-controller-manager because it was missing openshift-kube-apiserver-operator 36m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-server-ca-12 -n openshift-kube-apiserver because it was missing openshift-cluster-samples-operator 36m Normal Pulled pod/cluster-samples-operator-bf9b9498c-gn68l Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3066c35df5c02d6013ee2944ff5d100cdf41fb0d25076ce846d6e094b36d45c" in 11.296190845s (11.296197716s including waiting) openshift-etcd 36m Normal Pulled pod/etcd-guard-ip-10-0-239-132.ec2.internal Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-kube-scheduler-operator 36m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/revision-pruner-9-ip-10-0-197-197.ec2.internal -n openshift-kube-scheduler because it was missing openshift-machine-api 36m Normal AddedInterface pod/machine-api-controllers-674d9f54f6-h4xz6 Add eth0 [10.129.0.27/23] from ovn-kubernetes openshift-kube-controller-manager 36m Normal AddedInterface pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.29/23] from ovn-kubernetes openshift-machine-config-operator 36m Normal AddedInterface pod/machine-config-controller-7f488c778d-vjl7t Add eth0 [10.129.0.28/23] from ovn-kubernetes openshift-apiserver 36m Normal Pulling pod/apiserver-5f568869f-8zhkc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" openshift-console 36m Normal Pulled pod/console-65cc7f8b45-4xp2z Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" in 13.809123652s (13.809129363s including waiting) openshift-kube-controller-manager-operator 36m Normal RevisionCreate deployment/kube-controller-manager-operator Revision 7 created because secret/localhost-recovery-client-token has changed openshift-kube-controller-manager-operator 36m Normal SecretCreated deployment/kube-controller-manager-operator Created Secret/localhost-recovery-client-token-8 -n openshift-kube-controller-manager because it was missing openshift-kube-scheduler 36m Normal AddedInterface pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.36/23] from ovn-kubernetes openshift-kube-scheduler 36m Normal Pulled pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 36m Normal Started pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-scheduler 36m Normal Created pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Created container pruner openshift-kube-apiserver-operator 36m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kubelet-serving-ca-12 -n openshift-kube-apiserver because it was missing openshift-marketplace 36m Normal Created pod/redhat-operators-rwpx4 Created container registry-server openshift-kube-apiserver-operator 36m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/sa-token-signing-certs-12 -n openshift-kube-apiserver because it was missing openshift-marketplace 36m Normal Started pod/redhat-operators-rwpx4 Started container registry-server openshift-marketplace 36m Normal Pulled pod/redhat-operators-rwpx4 Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.12" in 25.97566416s (25.97570511s including waiting) openshift-kube-apiserver-operator 36m Normal ConfigMapCreated deployment/kube-apiserver-operator Created ConfigMap/kube-apiserver-audit-policies-12 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler-operator 36m Normal NodeTargetRevisionChanged deployment/openshift-kube-scheduler-operator Updating node "ip-10-0-140-6.ec2.internal" from revision 7 to 9 because node ip-10-0-140-6.ec2.internal with 
revision 7 is the oldest openshift-kube-apiserver 36m Normal Killing pod/kube-apiserver-ip-10-0-239-132.ec2.internal Stopping container kube-apiserver-insecure-readyz openshift-kube-apiserver 36m Normal Killing pod/kube-apiserver-ip-10-0-239-132.ec2.internal Stopping container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 36m Normal Killing pod/kube-apiserver-ip-10-0-239-132.ec2.internal Stopping container kube-apiserver-check-endpoints openshift-kube-apiserver 36m Normal StaticPodInstallerCompleted pod/installer-11-ip-10-0-239-132.ec2.internal Successfully installed revision 11 openshift-kube-apiserver-operator 36m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/etcd-client-12 -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 36m Normal Killing pod/kube-apiserver-ip-10-0-239-132.ec2.internal Stopping container kube-apiserver-cert-syncer openshift-kube-apiserver 36m Normal Killing pod/kube-apiserver-ip-10-0-239-132.ec2.internal Stopping container kube-apiserver openshift-machine-api 36m Normal Pulling pod/machine-api-controllers-674d9f54f6-h4xz6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fae409a0e6467f2d4e5e1cd0974a33f71fddf6f3b567c278b3a9aad56aa0f089" openshift-kube-apiserver 36m Normal Started pod/kube-apiserver-guard-ip-10-0-239-132.ec2.internal Started container guard openshift-kube-controller-manager 36m Normal Pulled pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-scheduler 36m Normal AddedInterface pod/revision-pruner-9-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.30/23] from ovn-kubernetes openshift-kube-apiserver 36m Normal Created pod/kube-apiserver-guard-ip-10-0-239-132.ec2.internal Created container guard openshift-cluster-csi-drivers 36m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77941761aca0cba770d56fcf4d213512b4dd959aa49d3f50c9da02a7aee8d62e" in 18.830117234s (18.830129315s including waiting) openshift-machine-config-operator 36m Normal Pulled pod/machine-config-controller-7f488c778d-vjl7t Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-cluster-version 36m Normal Pulled pod/cluster-version-operator-5d74b9d6f5-nclrf Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" in 19.871212698s (19.871221751s including waiting) openshift-cloud-controller-manager-operator 36m Normal Pulled pod/cluster-cloud-controller-manager-operator-5dcbbcf757-zfxcs Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77345c48a82b167f67364ffd41788160b5d06e746946d9ea67191fa18cf34806" in 18.487280113s (18.487289093s including waiting) default 36m Warning ResolutionFailed namespace/openshift-observability-operator constraints not satisfiable: no operators found from catalog observability-operator-catalog in namespace openshift-observability-operator referenced by subscription observability-operator, subscription observability-operator exists openshift-controller-manager 36m Normal Pulled pod/controller-manager-c5c84d6f9-tll5c Successfully pulled image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" in 18.854019944s (18.854026064s including waiting) openshift-kube-scheduler-operator 36m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-9-ip-10-0-140-6.ec2.internal -n openshift-kube-scheduler because it was missing openshift-kube-controller-manager-operator 36m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 3 nodes are at revision 7; 0 nodes have achieved new revision 8"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7; 0 nodes have achieved new revision 8" openshift-kube-controller-manager-operator 36m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 7 to 8 because node ip-10-0-239-132.ec2.internal with revision 7 is the oldest default 36m Warning ResolutionFailed namespace/openshift-route-monitor-operator constraints not satisfiable: subscription route-monitor-operator exists, no operators found from catalog route-monitor-operator-registry in namespace openshift-route-monitor-operator referenced by subscription route-monitor-operator openshift-cluster-storage-operator 36m Normal Started pod/csi-snapshot-webhook-75476bf784-zlxp4 Started container webhook openshift-user-workload-monitoring 36m Normal Started pod/prometheus-operator-6cbc5c4f45-t95ht Started container prometheus-operator openshift-kube-apiserver-operator 36m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-serving-certkey-12 -n openshift-kube-apiserver because it was missing openshift-user-workload-monitoring 36m Normal Pulled pod/prometheus-operator-6cbc5c4f45-t95ht Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-scheduler 36m Normal Started pod/installer-9-ip-10-0-140-6.ec2.internal Started container installer openshift-user-workload-monitoring 36m Normal Created pod/prometheus-operator-6cbc5c4f45-t95ht Created container prometheus-operator openshift-kube-scheduler 36m Normal Pulled pod/revision-pruner-9-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-etcd 36m Normal Created pod/etcd-guard-ip-10-0-239-132.ec2.internal Created container guard openshift-console-operator 36m Normal Started pod/console-operator-57cbc6b88f-b2ttj Started container console-operator openshift-console-operator 36m Normal Created pod/console-operator-57cbc6b88f-b2ttj Created container console-operator openshift-kube-scheduler 36m Normal Created pod/installer-9-ip-10-0-140-6.ec2.internal Created container installer openshift-cloud-credential-operator 36m Normal Created pod/pod-identity-webhook-b645775d7-jb5tx Created container pod-identity-webhook openshift-cloud-credential-operator 36m Normal Started pod/pod-identity-webhook-b645775d7-jb5tx Started container pod-identity-webhook openshift-kube-scheduler 36m Normal Pulled pod/installer-9-ip-10-0-140-6.ec2.internal Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-cluster-storage-operator 36m Normal Created pod/csi-snapshot-controller-f58c44499-xkth2 Created container snapshot-controller openshift-cluster-storage-operator 36m Normal Started pod/csi-snapshot-controller-f58c44499-xkth2 Started container snapshot-controller openshift-cluster-storage-operator 36m Normal Created pod/csi-snapshot-webhook-75476bf784-zlxp4 Created container webhook openshift-etcd 36m Normal Started pod/etcd-guard-ip-10-0-239-132.ec2.internal Started container guard openshift-kube-scheduler 36m Normal AddedInterface pod/installer-9-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.21/23] from ovn-kubernetes openshift-cluster-storage-operator 36m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" openshift-cluster-storage-operator 36m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well") openshift-machine-config-operator 36m Normal Started pod/machine-config-controller-7f488c778d-vjl7t Started container machine-config-controller openshift-machine-config-operator 36m Normal Pulled pod/machine-config-controller-7f488c778d-vjl7t Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-machine-config-operator 36m Normal Created pod/machine-config-controller-7f488c778d-vjl7t Created container machine-config-controller openshift-kube-controller-manager 36m Normal Started pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal Started container guard default 36m Warning ResolutionFailed namespace/openshift-ocm-agent-operator constraints not satisfiable: no operators found from catalog ocm-agent-operator-registry in namespace openshift-ocm-agent-operator referenced by subscription ocm-agent-operator, subscription ocm-agent-operator exists openshift-cluster-csi-drivers 36m Normal Created pod/aws-ebs-csi-driver-operator-667bfc499d-7fmff Created container aws-ebs-csi-driver-operator openshift-kube-controller-manager-operator 36m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-8-ip-10-0-239-132.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-kube-controller-manager 36m Normal Created pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal Created container guard openshift-cluster-csi-drivers 36m Normal Started pod/aws-ebs-csi-driver-operator-667bfc499d-7fmff Started container aws-ebs-csi-driver-operator default 36m Warning ResolutionFailed namespace/openshift-must-gather-operator constraints not satisfiable: no operators found from catalog must-gather-operator-registry in namespace openshift-must-gather-operator referenced by subscription must-gather-operator, subscription must-gather-operator exists openshift-route-controller-manager 36m Normal Pulled pod/route-controller-manager-6594987c6f-dcrpz Successfully pulled image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" in 13.315754812s (13.315765307s including waiting) openshift-marketplace 36m Normal Pulled pod/community-operators-kp7pr Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.12" in 24.187477559s (24.187488033s including waiting) openshift-cluster-csi-drivers 36m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Created container csi-provisioner openshift-apiserver 36m Normal Pulled pod/apiserver-5f568869f-8zhkc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" in 10.620571611s (10.620578569s including waiting) openshift-controller-manager 36m Normal Created pod/controller-manager-c5c84d6f9-tll5c Created container controller-manager openshift-cluster-version 36m Normal Created pod/cluster-version-operator-5d74b9d6f5-nclrf Created container cluster-version-operator default 36m Warning ResolutionFailed namespace/openshift-splunk-forwarder-operator constraints not satisfiable: subscription openshift-splunk-forwarder-operator exists, no operators found from catalog splunk-forwarder-operator-catalog in namespace openshift-splunk-forwarder-operator referenced by subscription openshift-splunk-forwarder-operator openshift-cluster-samples-operator 36m Normal Created pod/cluster-samples-operator-bf9b9498c-gn68l Created container cluster-samples-operator openshift-console-operator 36m Normal Pulled pod/console-operator-57cbc6b88f-b2ttj Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e6dd6ba37d430e9e8e248b4c5911ef0903f8bd8d05451ed65eeb1d9d2b3c42e4" already present on machine openshift-console 36m Normal Created pod/console-65cc7f8b45-4xp2z Created container console openshift-console 36m Normal Pulled pod/downloads-fcdb597fd-vfqwm Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec5351e220112a5b70451310b563175ae713c4d2864765c861b969730515a21b" in 23.750024542s (23.75003599s including waiting) openshift-kube-apiserver-operator 36m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/localhost-recovery-client-token-12 -n openshift-kube-apiserver because it was missing openshift-kube-scheduler 36m Normal Created pod/revision-pruner-9-ip-10-0-239-132.ec2.internal Created container pruner openshift-cluster-storage-operator 36m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAWSEBSProgressing: Waiting for Deployment to deploy pods" to "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods" openshift-oauth-apiserver 36m Normal Pulled pod/apiserver-74455c7c5-tqs7k Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" in 13.178770421s (13.17878302s including waiting) openshift-cloud-controller-manager-operator 36m Normal Created pod/cluster-cloud-controller-manager-operator-5dcbbcf757-zfxcs Created container cluster-cloud-controller-manager openshift-console-operator 36m Normal Created pod/console-operator-57cbc6b88f-b2ttj Created container conversion-webhook-server openshift-cluster-samples-operator 
36m Normal Started pod/cluster-samples-operator-bf9b9498c-gn68l Started container cluster-samples-operator-watch
openshift-cluster-samples-operator 36m Normal Started pod/cluster-samples-operator-bf9b9498c-gn68l Started container cluster-samples-operator
openshift-cloud-controller-manager-operator 36m Normal Created pod/cluster-cloud-controller-manager-operator-5dcbbcf757-zfxcs Created container config-sync-controllers
openshift-cluster-version 36m Normal LeaderElection lease/version ip-10-0-239-132_49047d51-78cb-4b67-b0ad-35e9288f916e became leader
openshift-user-workload-monitoring 36m Normal Created pod/prometheus-operator-6cbc5c4f45-t95ht Created container kube-rbac-proxy
openshift-cloud-controller-manager-operator 36m Normal Pulled pod/cluster-cloud-controller-manager-operator-5dcbbcf757-zfxcs Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77345c48a82b167f67364ffd41788160b5d06e746946d9ea67191fa18cf34806" already present on machine
openshift-cluster-samples-operator 36m Normal Created pod/cluster-samples-operator-bf9b9498c-gn68l Created container cluster-samples-operator-watch
openshift-console 36m Normal Created pod/downloads-fcdb597fd-vfqwm Created container download-server
openshift-console 36m Normal Started pod/downloads-fcdb597fd-vfqwm Started container download-server
openshift-cluster-samples-operator 36m Normal Pulled pod/cluster-samples-operator-bf9b9498c-gn68l Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b3066c35df5c02d6013ee2944ff5d100cdf41fb0d25076ce846d6e094b36d45c" already present on machine
openshift-user-workload-monitoring 36m Normal Started pod/prometheus-operator-6cbc5c4f45-t95ht Started container kube-rbac-proxy
openshift-oauth-apiserver 36m Normal Created pod/apiserver-74455c7c5-tqs7k Created container fix-audit-permissions
openshift-marketplace 36m Normal Created pod/community-operators-kp7pr Created container registry-server
openshift-marketplace 36m Normal Started pod/community-operators-kp7pr Started container registry-server
openshift-cloud-controller-manager-operator 36m Normal Started pod/cluster-cloud-controller-manager-operator-5dcbbcf757-zfxcs Started container cluster-cloud-controller-manager
openshift-machine-config-operator 36m Normal Created pod/machine-config-controller-7f488c778d-vjl7t Created container oauth-proxy
openshift-oauth-apiserver 36m Normal Started pod/apiserver-74455c7c5-tqs7k Started container fix-audit-permissions
openshift-cluster-csi-drivers 36m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Started container csi-provisioner
openshift-kube-scheduler 36m Normal Started pod/revision-pruner-9-ip-10-0-239-132.ec2.internal Started container pruner
openshift-cluster-version 36m Normal Started pod/cluster-version-operator-5d74b9d6f5-nclrf Started container cluster-version-operator
openshift-apiserver 36m Normal Created pod/apiserver-5f568869f-8zhkc Created container fix-audit-permissions
openshift-route-controller-manager 36m Normal Created pod/route-controller-manager-6594987c6f-dcrpz Created container route-controller-manager
openshift-cluster-csi-drivers 36m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-controller-manager 36m Normal Started pod/controller-manager-c5c84d6f9-tll5c Started container controller-manager
openshift-route-controller-manager 36m Normal Started pod/route-controller-manager-6594987c6f-dcrpz Started container route-controller-manager
openshift-cloud-controller-manager-operator 36m Normal Started pod/cluster-cloud-controller-manager-operator-5dcbbcf757-zfxcs Started container config-sync-controllers
openshift-machine-config-operator 36m Normal Started pod/machine-config-controller-7f488c778d-vjl7t Started container oauth-proxy
openshift-cluster-version 36m Normal LeaderElection configmap/version ip-10-0-239-132_49047d51-78cb-4b67-b0ad-35e9288f916e became leader
openshift-cluster-csi-drivers 36m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Created container provisioner-kube-rbac-proxy
openshift-cluster-csi-drivers 36m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Started container provisioner-kube-rbac-proxy
openshift-cluster-samples-operator 36m Normal FileChangeWatchdogStarted deployment/cluster-samples-operator Started watching files for process cluster-samples-operator[7]
openshift-kube-controller-manager 36m Normal AddedInterface pod/installer-8-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.31/23] from ovn-kubernetes
openshift-cluster-csi-drivers 36m Normal Pulling pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:726ed98ed8df6da72ea0aaecf62714470ad60d9e5665b65286271e92e4f46c1d"
openshift-kube-controller-manager 36m Normal Pulled pod/installer-8-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine
openshift-console-operator 36m Normal Started pod/console-operator-57cbc6b88f-b2ttj Started container conversion-webhook-server
openshift-apiserver 36m Normal Started pod/apiserver-5f568869f-8zhkc Started container fix-audit-permissions
openshift-console 36m Normal Started pod/console-65cc7f8b45-4xp2z Started container console
openshift-kube-controller-manager 36m Normal Started pod/installer-8-ip-10-0-239-132.ec2.internal Started container installer
openshift-kube-apiserver-operator 36m Normal SecretCreated deployment/kube-apiserver-operator Created Secret/webhook-authenticator-12 -n openshift-kube-apiserver because it was missing
openshift-kube-apiserver-operator 36m Normal RevisionCreate deployment/kube-apiserver-operator Revision 11 created because required secret/localhost-recovery-client-token has changed
openshift-route-controller-manager 36m Normal Killing pod/route-controller-manager-9b45479c5-69h2c Stopping container route-controller-manager
openshift-route-controller-manager 36m Normal ScalingReplicaSet deployment/route-controller-manager Scaled up replica set route-controller-manager-6594987c6f to 2 from 1
openshift-route-controller-manager 36m Normal ScalingReplicaSet deployment/route-controller-manager Scaled down replica set route-controller-manager-9b45479c5 to 1 from 2
openshift-cluster-version 36m Normal RetrievePayload clusterversion/version Retrieving and verifying payload version="4.13.0-rc.0" image="quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11"
openshift-cluster-version 36m Normal LoadPayload clusterversion/version Loading payload version="4.13.0-rc.0" image="quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11"
openshift-route-controller-manager 36m Normal SuccessfulDelete replicaset/route-controller-manager-9b45479c5 Deleted pod: route-controller-manager-9b45479c5-69h2c
openshift-route-controller-manager 36m Normal SuccessfulCreate replicaset/route-controller-manager-6594987c6f Created pod: route-controller-manager-6594987c6f-qfkcc
openshift-controller-manager-operator 36m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 1, desired replicas is 3" to "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 2, desired replicas is 3"
openshift-kube-controller-manager 36m Normal Created pod/installer-8-ip-10-0-239-132.ec2.internal Created container installer
openshift-kube-apiserver-operator 36m Normal RevisionTriggered deployment/kube-apiserver-operator new revision 12 triggered by "required secret/localhost-recovery-client-token has changed"
openshift-oauth-apiserver 36m Normal Pulled pod/apiserver-74455c7c5-tqs7k Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine
openshift-console 36m Normal Killing pod/console-65cc7f8b45-md5n8 Stopping container console
openshift-apiserver 36m Normal Pulled pod/apiserver-5f568869f-8zhkc Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine
openshift-console 36m Warning Unhealthy pod/downloads-fcdb597fd-vfqwm Readiness probe failed: Get "http://10.129.0.19:8080/": dial tcp 10.129.0.19:8080: connect: connection refused
openshift-marketplace 36m Normal Killing pod/redhat-operators-jzt5b Stopping container registry-server
openshift-cluster-version 36m Normal PayloadLoaded clusterversion/version Payload loaded version="4.13.0-rc.0" image="quay.io/openshift-release-dev/ocp-release@sha256:e686d3cd173d9848fc304da0ebe4d348c6e3be902989f500c5382590e2e41a11" architecture="amd64"
openshift-console 36m Warning ProbeError pod/downloads-fcdb597fd-vfqwm Readiness probe error: Get "http://10.129.0.19:8080/": dial tcp 10.129.0.19:8080: connect: connection refused...
openshift-kube-apiserver-operator 36m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 12"
openshift-apiserver 36m Normal Started pod/apiserver-5f568869f-8zhkc Started container openshift-apiserver
openshift-machine-api 36m Normal Pulling pod/machine-api-controllers-674d9f54f6-h4xz6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9f622d5408462011492d823946b98c1043c08d2ecf2a264dc9d90f48084a9c8"
openshift-machine-api 36m Normal Started pod/machine-api-controllers-674d9f54f6-h4xz6 Started container machineset-controller
openshift-machine-api 36m Normal Pulled pod/machine-api-controllers-674d9f54f6-h4xz6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fae409a0e6467f2d4e5e1cd0974a33f71fddf6f3b567c278b3a9aad56aa0f089" in 8.535147965s (8.535157897s including waiting)
openshift-apiserver 36m Normal Started pod/apiserver-5f568869f-8zhkc Started container openshift-apiserver-check-endpoints
openshift-apiserver 36m Normal Created pod/apiserver-5f568869f-8zhkc Created container openshift-apiserver-check-endpoints
openshift-route-controller-manager 36m Normal AddedInterface pod/route-controller-manager-6594987c6f-qfkcc Add eth0 [10.130.0.37/23] from ovn-kubernetes
openshift-apiserver 36m Normal Created pod/apiserver-5f568869f-8zhkc Created container openshift-apiserver
openshift-route-controller-manager 36m Normal Pulled pod/route-controller-manager-6594987c6f-qfkcc Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" already present on machine
openshift-apiserver 36m Normal Pulled pod/apiserver-5f568869f-8zhkc Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
openshift-route-controller-manager 36m Normal Created pod/route-controller-manager-6594987c6f-qfkcc Created container route-controller-manager
openshift-oauth-apiserver 36m Normal Created pod/apiserver-74455c7c5-tqs7k Created container oauth-apiserver
default 36m Warning ResolutionFailed namespace/openshift-managed-node-metadata-operator constraints not satisfiable: no operators found from catalog managed-node-metadata-operator-registry in namespace openshift-managed-node-metadata-operator referenced by subscription managed-node-metadata-operator, subscription managed-node-metadata-operator exists
openshift-kube-apiserver-operator 36m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 12" to "NodeControllerDegraded: All master nodes are ready"
openshift-oauth-apiserver 36m Normal Started pod/apiserver-74455c7c5-tqs7k Started container oauth-apiserver
openshift-machine-api 36m Normal Created pod/machine-api-controllers-674d9f54f6-h4xz6 Created container machineset-controller
openshift-apiserver 36m Warning FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling
openshift-cluster-csi-drivers 36m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Created container csi-attacher
openshift-apiserver 36m Warning FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling
openshift-cluster-csi-drivers 36m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:726ed98ed8df6da72ea0aaecf62714470ad60d9e5665b65286271e92e4f46c1d" in 3.852360763s (3.852373157s including waiting)
openshift-kube-apiserver-operator 36m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-11-ip-10-0-140-6.ec2.internal -n openshift-kube-apiserver because it was missing
openshift-cluster-csi-drivers 36m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Started container csi-attacher
openshift-cluster-csi-drivers 36m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-route-controller-manager 36m Normal Started pod/route-controller-manager-6594987c6f-qfkcc Started container route-controller-manager
openshift-cluster-csi-drivers 36m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Created container attacher-kube-rbac-proxy
openshift-cluster-csi-drivers 36m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Started container attacher-kube-rbac-proxy
default 36m Warning ResolutionFailed namespace/openshift-osd-metrics constraints not satisfiable: no operators found from catalog osd-metrics-exporter-registry in namespace openshift-osd-metrics referenced by subscription osd-metrics-exporter, subscription osd-metrics-exporter exists
openshift-cluster-csi-drivers 36m Normal Pulling pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66daa08f96501fa939342eafe2de7be5307656a3ff3ec9bde82664905c695bb6"
openshift-route-controller-manager 36m Normal ScalingReplicaSet deployment/route-controller-manager Scaled down replica set route-controller-manager-9b45479c5 to 0 from 1
openshift-route-controller-manager 36m Normal SuccessfulDelete replicaset/route-controller-manager-9b45479c5 Deleted pod: route-controller-manager-9b45479c5-q5nh8
openshift-route-controller-manager 36m Normal ScalingReplicaSet deployment/route-controller-manager Scaled up replica set route-controller-manager-6594987c6f to 3 from 2
openshift-kube-apiserver 36m Normal Started pod/revision-pruner-11-ip-10-0-140-6.ec2.internal Started container pruner
openshift-route-controller-manager 36m Normal SuccessfulCreate replicaset/route-controller-manager-6594987c6f Created pod: route-controller-manager-6594987c6f-246st
openshift-kube-apiserver 36m Normal Created pod/revision-pruner-11-ip-10-0-140-6.ec2.internal Created container pruner
openshift-apiserver 36m Warning Unhealthy pod/apiserver-5f568869f-8zhkc Startup probe failed: HTTP probe failed with statuscode: 500
openshift-apiserver 36m Warning ProbeError pod/apiserver-5f568869f-8zhkc Startup probe error: HTTP probe failed with statuscode: 500...
openshift-kube-apiserver 36m Normal Pulled pod/revision-pruner-11-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
openshift-kube-apiserver 36m Normal AddedInterface pod/revision-pruner-11-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.23/23] from ovn-kubernetes
openshift-route-controller-manager 36m Normal Killing pod/route-controller-manager-9b45479c5-q5nh8 Stopping container route-controller-manager
openshift-marketplace 36m Normal Killing pod/redhat-operators-rwpx4 Stopping container registry-server
openshift-machine-api 36m Normal Started pod/machine-api-controllers-674d9f54f6-h4xz6 Started container machine-controller
openshift-service-ca 36m Normal SuccessfulCreate replicaset/service-ca-57bb877df5 Created pod: service-ca-57bb877df5-7tzmh
openshift-marketplace 36m Warning Unhealthy pod/redhat-marketplace-crqrm Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of cb29cdf134ffac64ea33ca27760ea05843d65f4dc6bfa67ab08d9ce5bab925f3 is running failed: open /proc/10832/stat: no such file or directory: container process not found
openshift-machine-api 36m Normal Created pod/machine-api-controllers-674d9f54f6-h4xz6 Created container machine-controller
openshift-machine-api 36m Normal Pulled pod/machine-api-controllers-674d9f54f6-h4xz6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9f622d5408462011492d823946b98c1043c08d2ecf2a264dc9d90f48084a9c8" in 3.811003186s (3.811014016s including waiting)
openshift-marketplace 36m Normal Killing pod/community-operators-7jr7c Stopping container registry-server
openshift-marketplace 36m Warning Unhealthy pod/certified-operators-77trp Liveness probe failed:
openshift-marketplace 36m Warning Unhealthy pod/certified-operators-77trp Readiness probe failed:
openshift-operator-lifecycle-manager 36m Normal SuccessfulCreate replicaset/packageserver-7c998868c6 Created pod: packageserver-7c998868c6-ctgf5
openshift-cluster-csi-drivers 36m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66daa08f96501fa939342eafe2de7be5307656a3ff3ec9bde82664905c695bb6" in 2.167569331s (2.167578823s including waiting)
openshift-operator-lifecycle-manager 36m Normal Killing pod/packageserver-7c998868c6-vtkkk Stopping container packageserver
openshift-marketplace 36m Normal Killing pod/certified-operators-77trp Stopping container registry-server
openshift-marketplace 36m Normal Killing pod/redhat-marketplace-crqrm Stopping container registry-server
openshift-cluster-csi-drivers 36m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Created container csi-resizer
openshift-cluster-csi-drivers 36m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Started container csi-resizer
openshift-network-operator 36m Normal Killing pod/network-operator-6c9d58d76b-m2fjb Stopping container network-operator
openshift-service-ca 36m Normal Killing pod/service-ca-57bb877df5-24vfr Stopping container service-ca-controller
openshift-service-ca-operator 36m Normal OperatorStatusChanged deployment/service-ca-operator Status for clusteroperator/service-ca changed: Progressing changed from False to True ("Progressing: \nProgressing: service-ca does not have available replicas")
openshift-multus 36m Normal SuccessfulCreate replicaset/multus-admission-controller-757b6fbf74 Created pod: multus-admission-controller-757b6fbf74-hl64m
openshift-multus 36m Normal Killing pod/multus-admission-controller-757b6fbf74-mz54v Stopping container kube-rbac-proxy
openshift-multus 36m Normal Killing pod/multus-admission-controller-757b6fbf74-mz54v Stopping container multus-admission-controller
openshift-network-operator 36m Normal SuccessfulCreate replicaset/network-operator-6c9d58d76b Created pod: network-operator-6c9d58d76b-b79jx
openshift-authentication-operator 36m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-74455c7c5-tqs7k pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-74455c7c5-tqs7k pod)"
openshift-machine-api 36m Normal Pulled pod/machine-api-controllers-674d9f54f6-h4xz6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-machine-api 36m Normal Started pod/machine-api-controllers-674d9f54f6-h4xz6 Started container machine-healthcheck-controller
openshift-machine-api 36m Normal Pulled pod/machine-api-controllers-674d9f54f6-h4xz6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fae409a0e6467f2d4e5e1cd0974a33f71fddf6f3b567c278b3a9aad56aa0f089" already present on machine
openshift-machine-api 36m Normal Created pod/machine-api-controllers-674d9f54f6-h4xz6 Created container nodelink-controller
openshift-machine-api 36m Normal Created pod/machine-api-controllers-674d9f54f6-h4xz6 Created container machine-healthcheck-controller
openshift-marketplace 36m Warning Unhealthy pod/community-operators-7jr7c Readiness probe errored: rpc error: code = NotFound desc = container is not created or running: checking if PID of b7f81be442956248fe4901fe9ebc93f16af5a2aa6327148fbc386ea9b6c7f492 is running failed: open /proc/10841/stat: no such file or directory: container process not found
openshift-machine-api 36m Normal Started pod/machine-api-controllers-674d9f54f6-h4xz6 Started container nodelink-controller
openshift-machine-api 36m Normal Pulled pod/machine-api-controllers-674d9f54f6-h4xz6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fae409a0e6467f2d4e5e1cd0974a33f71fddf6f3b567c278b3a9aad56aa0f089" already present on machine
openshift-machine-api 36m Normal Created pod/machine-api-controllers-674d9f54f6-h4xz6 Created container kube-rbac-proxy-machineset-mtrc
openshift-cluster-csi-drivers 36m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-bg92z Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-etcd 36m Normal Killing pod/etcd-guard-ip-10-0-140-6.ec2.internal Stopping container guard
openshift-kube-apiserver-operator 36m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-12-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing
openshift-machine-api 36m Normal Created pod/machine-api-controllers-674d9f54f6-h4xz6 Created container kube-rbac-proxy-mhc-mtrc
openshift-kube-scheduler 36m Normal Killing pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal Stopping container guard
openshift-machine-api 36m Normal Started pod/machine-api-controllers-674d9f54f6-h4xz6 Started container kube-rbac-proxy-mhc-mtrc
default 36m Warning ResolutionFailed namespace/openshift-route-monitor-operator constraints not satisfiable: no operators found from catalog route-monitor-operator-registry in namespace openshift-route-monitor-operator referenced by subscription route-monitor-operator, subscription route-monitor-operator exists
openshift-network-operator 36m Normal Started pod/network-operator-6c9d58d76b-b79jx Started container network-operator
openshift-network-operator 36m Normal Created pod/network-operator-6c9d58d76b-b79jx Created container network-operator
openshift-operator-lifecycle-manager 36m Normal Pulling pod/packageserver-7c998868c6-ctgf5 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7"
openshift-machine-api 36m Normal Started pod/machine-api-controllers-674d9f54f6-h4xz6 Started container kube-rbac-proxy-machineset-mtrc
openshift-machine-api 36m Normal Pulled pod/machine-api-controllers-674d9f54f6-h4xz6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-machine-api 36m Normal Created pod/machine-api-controllers-674d9f54f6-h4xz6 Created container kube-rbac-proxy-machine-mtrc
openshift-kube-scheduler 36m Normal Killing pod/installer-9-ip-10-0-140-6.ec2.internal Stopping container installer
openshift-service-ca 36m Normal AddedInterface pod/service-ca-57bb877df5-7tzmh Add eth0 [10.129.0.37/23] from ovn-kubernetes
openshift-service-ca 36m Normal Pulling pod/service-ca-57bb877df5-7tzmh Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7f7cb6554c1dc9b5b3b58f162f592062e5c63bf24c5ed90a62074e117be3f743"
openshift-machine-api 36m Normal Pulled pod/machine-api-controllers-674d9f54f6-h4xz6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-multus 36m Normal Pulling pod/multus-admission-controller-757b6fbf74-hl64m Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c3cca6e2da92a6cd38e7f20f77bffc675895bd800157fdb50261b7f7ea9fc90"
openshift-machine-api 36m Normal Started pod/machine-api-controllers-674d9f54f6-h4xz6 Started container kube-rbac-proxy-machine-mtrc
openshift-network-operator 36m Normal Pulled pod/network-operator-6c9d58d76b-b79jx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" already present on machine
openshift-multus 36m Normal AddedInterface pod/multus-admission-controller-757b6fbf74-hl64m Add eth0 [10.129.0.36/23] from ovn-kubernetes
openshift-operator-lifecycle-manager 36m Normal AddedInterface pod/packageserver-7c998868c6-ctgf5 Add eth0 [10.129.0.34/23] from ovn-kubernetes
openshift-network-operator 36m Warning FastControllerResync deployment/network-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling
openshift-machine-api 36m Warning ProbeError pod/machine-api-controllers-674d9f54f6-h4xz6 Readiness probe error: Get "http://10.129.0.27:9442/healthz": dial tcp 10.129.0.27:9442: connect: connection refused...
openshift-machine-api 36m Warning Unhealthy pod/machine-api-controllers-674d9f54f6-h4xz6 Readiness probe failed: Get "http://10.129.0.27:9442/healthz": dial tcp 10.129.0.27:9442: connect: connection refused
openshift-network-operator 36m Normal LeaderElection configmap/network-operator-lock ip-10-0-239-132_c55995ab-2cab-476b-9751-bea75bcab8cb became leader
openshift-network-operator 36m Normal LeaderElection lease/network-operator-lock ip-10-0-239-132_c55995ab-2cab-476b-9751-bea75bcab8cb became leader
openshift-kube-apiserver 36m Normal AddedInterface pod/revision-pruner-12-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.40/23] from ovn-kubernetes
openshift-kube-apiserver 36m Normal Pulled pod/revision-pruner-12-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
default 36m Warning ResolutionFailed namespace/openshift-rbac-permissions constraints not satisfiable: no operators found from catalog rbac-permissions-operator-registry in namespace openshift-rbac-permissions referenced by subscription rbac-permissions-operator, subscription rbac-permissions-operator exists
openshift-apiserver 36m Normal Killing pod/apiserver-7475f65d84-whqlh Stopping container openshift-apiserver-check-endpoints
openshift-kube-apiserver 36m Normal Started pod/revision-pruner-12-ip-10-0-197-197.ec2.internal Started container pruner
openshift-multus 36m Normal Pulled pod/multus-admission-controller-757b6fbf74-hl64m Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c3cca6e2da92a6cd38e7f20f77bffc675895bd800157fdb50261b7f7ea9fc90" in 2.405851672s (2.405858615s including waiting)
openshift-apiserver 36m Normal Killing pod/apiserver-7475f65d84-whqlh Stopping container openshift-apiserver
openshift-marketplace 36m Normal Killing pod/community-operators-kp7pr Stopping container registry-server
openshift-apiserver 36m Normal SuccessfulDelete replicaset/apiserver-7475f65d84 Deleted pod: apiserver-7475f65d84-whqlh
openshift-apiserver 36m Normal ScalingReplicaSet deployment/apiserver Scaled down replica set apiserver-7475f65d84 to 0 from 1
openshift-kube-apiserver 36m Normal Created pod/revision-pruner-12-ip-10-0-197-197.ec2.internal Created container pruner
openshift-apiserver 36m Normal SuccessfulCreate replicaset/apiserver-5f568869f Created pod: apiserver-5f568869f-b9bw5
openshift-apiserver 36m Normal ScalingReplicaSet deployment/apiserver Scaled up replica set apiserver-5f568869f to 3 from 2
default 36m Normal Reboot node/ip-10-0-232-8.ec2.internal Node will reboot into config rendered-worker-c37c7a9e551f049d382df8406f11fe9b
default 36m Warning ResolutionFailed namespace/openshift-observability-operator failed to populate resolver cache from source redhat-operators/openshift-marketplace: failed to list bundles: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial tcp 172.30.93.226:50051: connect: connection refused"
default 36m Normal OSUpdateStarted node/ip-10-0-232-8.ec2.internal
openshift-multus 36m Normal Started pod/multus-admission-controller-757b6fbf74-hl64m Started container kube-rbac-proxy
openshift-multus 36m Normal Created pod/multus-admission-controller-757b6fbf74-hl64m Created container kube-rbac-proxy
openshift-apiserver-operator 36m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well")
default 36m Normal OSUpdateStaged node/ip-10-0-232-8.ec2.internal Changes to OS staged
openshift-multus 36m Normal Started pod/multus-admission-controller-757b6fbf74-hl64m Started container multus-admission-controller
openshift-multus 36m Normal Created pod/multus-admission-controller-757b6fbf74-hl64m Created container multus-admission-controller
default 36m Normal PendingConfig node/ip-10-0-232-8.ec2.internal Written pending config rendered-worker-c37c7a9e551f049d382df8406f11fe9b
openshift-multus 36m Normal Pulled pod/multus-admission-controller-757b6fbf74-hl64m Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-kube-apiserver-operator 36m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 9; 1 nodes are at revision 11" to "NodeInstallerProgressing: 2 nodes are at revision 9; 1 nodes are at revision 11; 0 nodes have achieved new revision 12",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 9; 1 nodes are at revision 11" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 9; 1 nodes are at revision 11; 0 nodes have achieved new revision 12"
openshift-oauth-apiserver 36m Normal SuccessfulCreate replicaset/apiserver-74455c7c5 Created pod: apiserver-74455c7c5-m45v9
openshift-authentication-operator 36m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-74455c7c5-tqs7k pod)" to "All is well"
openshift-service-ca 36m Normal Started pod/service-ca-57bb877df5-7tzmh Started container service-ca-controller
openshift-service-ca 36m Normal Created pod/service-ca-57bb877df5-7tzmh Created container service-ca-controller
openshift-kube-apiserver-operator 36m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-12-ip-10-0-239-132.ec2.internal -n openshift-kube-apiserver because it was missing
openshift-oauth-apiserver 36m Normal Killing pod/apiserver-74455c7c5-h9ck5 Stopping container oauth-apiserver
openshift-service-ca 36m Normal Pulled pod/service-ca-57bb877df5-7tzmh Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7f7cb6554c1dc9b5b3b58f162f592062e5c63bf24c5ed90a62074e117be3f743" in 3.373035691s (3.373043517s including waiting)
openshift-kube-apiserver 36m Normal AddedInterface pod/revision-pruner-12-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.38/23] from ovn-kubernetes
openshift-service-ca 36m Warning FastControllerResync deployment/service-ca Controller "CRDCABundleInjector" resync interval is set to 0s which might lead to client request throttling
openshift-service-ca 36m Warning FastControllerResync deployment/service-ca Controller "LegacyVulnerableConfigMapCABundleInjector" resync interval is set to 0s which might lead to client request throttling
openshift-service-ca 36m Warning FastControllerResync deployment/service-ca Controller "ServiceServingCertController" resync interval is set to 0s which might lead to client request throttling
openshift-service-ca 36m Warning FastControllerResync deployment/service-ca Controller "ServiceServingCertUpdateController" resync interval is set to 0s which might lead to client request throttling
openshift-service-ca 36m Warning FastControllerResync deployment/service-ca Controller "MutatingWebhookCABundleInjector" resync interval is set to 0s which might lead to client request throttling
openshift-service-ca 36m Normal LeaderElection lease/service-ca-controller-lock service-ca-57bb877df5-7tzmh_ef8b6d12-bc88-4acf-a365-0bb54ed6d8a3 became leader
openshift-service-ca 36m Warning FastControllerResync deployment/service-ca Controller "ValidatingWebhookCABundleInjector" resync interval is set to 0s which might lead to client request throttling
openshift-service-ca 36m Normal LeaderElection configmap/service-ca-controller-lock service-ca-57bb877df5-7tzmh_ef8b6d12-bc88-4acf-a365-0bb54ed6d8a3 became leader
openshift-service-ca-operator 36m Normal OperatorStatusChanged deployment/service-ca-operator Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated")
openshift-service-ca 36m Warning FastControllerResync deployment/service-ca Controller "APIServiceCABundleInjector" resync interval is set to 0s which might lead to client request throttling
openshift-service-ca 36m Warning FastControllerResync deployment/service-ca Controller "ConfigMapCABundleInjector" resync interval is set to 0s which might lead to client request throttling
openshift-marketplace 36m Normal Pulling pod/redhat-operators-pcjm7 Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.12"
openshift-marketplace 36m Normal AddedInterface pod/redhat-operators-pcjm7 Add eth0 [10.129.0.39/23] from ovn-kubernetes
openshift-kube-apiserver 36m Normal Created pod/revision-pruner-12-ip-10-0-239-132.ec2.internal Created container pruner
openshift-kube-apiserver 36m Normal Started pod/revision-pruner-12-ip-10-0-239-132.ec2.internal Started container pruner
openshift-operator-lifecycle-manager 36m Normal Pulled pod/packageserver-7c998868c6-ctgf5 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" in 6.979063521s (6.979075679s including waiting)
openshift-authentication-operator 36m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()"
openshift-kube-apiserver 36m Normal Pulled pod/revision-pruner-12-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
openshift-operator-lifecycle-manager 36m Normal Started pod/packageserver-7c998868c6-ctgf5 Started container packageserver
openshift-operator-lifecycle-manager 36m Normal Created pod/packageserver-7c998868c6-ctgf5 Created container packageserver
openshift-marketplace 36m Normal Pulling pod/redhat-marketplace-p4zxh Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12"
openshift-marketplace 36m Normal Pulling pod/redhat-marketplace-vj67h Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12"
openshift-marketplace 36m Normal Pulling pod/certified-operators-dplkw Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.12"
openshift-marketplace 36m Normal AddedInterface pod/certified-operators-dplkw Add eth0 [10.129.0.40/23] from ovn-kubernetes
openshift-kube-apiserver-operator 36m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:36:47 +0000 UTC is still not ready"
openshift-marketplace 36m Normal AddedInterface pod/redhat-marketplace-p4zxh Add eth0 [10.129.0.42/23] from ovn-kubernetes
openshift-marketplace 36m Normal AddedInterface pod/redhat-marketplace-vj67h Add eth0 [10.129.0.41/23] from ovn-kubernetes
openshift-kube-apiserver 36m Normal Started pod/revision-pruner-12-ip-10-0-140-6.ec2.internal Started container pruner
openshift-kube-apiserver 36m Normal Pulled pod/revision-pruner-12-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
openshift-kube-apiserver 36m Normal Created pod/revision-pruner-12-ip-10-0-140-6.ec2.internal Created container pruner
openshift-kube-apiserver 36m Normal AddedInterface pod/revision-pruner-12-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.24/23] from ovn-kubernetes
openshift-marketplace 36m Normal Pulling pod/community-operators-wgn28 Pulling image "registry.redhat.io/redhat/community-operator-index:v4.12"
openshift-marketplace 36m Normal AddedInterface pod/community-operators-wgn28 Add eth0 [10.129.0.43/23] from ovn-kubernetes
openshift-kube-apiserver 36m Normal LeaderElection lease/cert-regeneration-controller-lock ip-10-0-197-197_20f2c0c9-a7cc-43c4-b7c6-fc5e1a6a972d became leader
openshift-marketplace 36m Normal Pulled pod/community-operators-wgn28 Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.12" in 634.709023ms (634.717212ms including waiting)
openshift-marketplace 36m Normal Created pod/community-operators-wgn28 Created container registry-server
openshift-marketplace 36m Normal Started pod/community-operators-wgn28 Started container registry-server
openshift-apiserver 36m Warning ProbeError pod/apiserver-7475f65d84-whqlh Readiness probe error: HTTP probe failed with statuscode: 500...
openshift-apiserver 36m Warning Unhealthy pod/apiserver-7475f65d84-whqlh Readiness probe failed: HTTP probe failed with statuscode: 500
openshift-controller-manager-operator 36m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3\nProgressing: deployment/route-controller-manager: updated replicas is 2, desired replicas is 3" to "Progressing: deployment/controller-manager: updated replicas is 1, desired replicas is 3"
openshift-kube-apiserver-operator 36m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:36:47 +0000 UTC is still not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:36:16 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:36:47 +0000 UTC is still not ready"
openshift-apiserver 36m Warning Unhealthy pod/apiserver-7475f65d84-whqlh Readiness probe failed: Get "https://10.130.0.50:8443/readyz": dial tcp 10.130.0.50:8443: connect: connection refused
openshift-kube-scheduler-operator 36m Warning InstallerPodFailed deployment/openshift-kube-scheduler-operator installer errors: installer: ing) (len=1) "9",...
openshift-kube-scheduler-operator 36m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready"
openshift-kube-apiserver-operator 36m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-12-ip-10-0-239-132.ec2.internal -n openshift-kube-apiserver because it was missing
openshift-monitoring 36m Normal Pulled pod/osd-cluster-ready-pzbtd Container image "quay.io/app-sre/osd-cluster-ready@sha256:f70aa8033565fc73c006acb9199845242b1f729cb5a407b5174cf22656b4e2d5" already present on machine
openshift-monitoring 36m Normal Created pod/osd-cluster-ready-pzbtd Created container osd-cluster-ready
openshift-monitoring 36m Normal Started pod/osd-cluster-ready-pzbtd Started container osd-cluster-ready
openshift-apiserver 36m Warning ProbeError pod/apiserver-7475f65d84-whqlh Readiness probe error: Get "https://10.130.0.50:8443/readyz": dial tcp 10.130.0.50:8443: connect: connection refused...
openshift-kube-scheduler 36m Normal Pulled pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine
openshift-kube-scheduler 36m Normal AddedInterface pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.25/23] from ovn-kubernetes
openshift-kube-scheduler 36m Normal Started pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Started container pruner
openshift-kube-scheduler 36m Normal Created pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Created container pruner
openshift-kube-controller-manager 36m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container kube-controller-manager-recovery-controller
openshift-kube-controller-manager 36m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container kube-controller-manager
openshift-kube-controller-manager 36m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container kube-controller-manager-cert-syncer
openshift-kube-controller-manager 36m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Stopping container cluster-policy-controller
openshift-kube-controller-manager 36m Normal StaticPodInstallerCompleted pod/installer-8-ip-10-0-239-132.ec2.internal Successfully installed revision 8
default 36m Normal NodeAllocatableEnforced node/ip-10-0-232-8.ec2.internal Updated Node Allocatable limit across pods
default 36m Warning Rebooted node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal has been rebooted, boot id: 3f801cd6-e406-4a86-9bee-bf9a2af92ee6
default 36m Normal NodeNotReady node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal status is now: NodeNotReady
default 36m Normal NodeHasNoDiskPressure node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal status is now: NodeHasNoDiskPressure
default 36m Normal NodeReady node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal status is now: NodeReady
default 36m Normal NodeHasSufficientMemory node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal status is now: NodeHasSufficientMemory
default 36m Normal Starting node/ip-10-0-232-8.ec2.internal Starting kubelet.
default 36m Normal NodeNotSchedulable node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal status is now: NodeNotSchedulable
default 36m Normal NodeHasSufficientPID node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal status is now: NodeHasSufficientPID
openshift-route-controller-manager 36m Normal LeaderElection lease/openshift-route-controllers route-controller-manager-6594987c6f-dcrpz became leader
openshift-kube-apiserver 36m Warning Unhealthy pod/kube-apiserver-guard-ip-10-0-239-132.ec2.internal Readiness probe failed: HTTP probe failed with statuscode: 500
openshift-kube-controller-manager 36m Warning ProbeError pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Readiness probe error: Get "https://10.0.239.132:10357/healthz": dial tcp 10.0.239.132:10357: connect: connection refused...
openshift-kube-controller-manager 36m Warning Unhealthy pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Readiness probe failed: Get "https://10.0.239.132:10357/healthz": dial tcp 10.0.239.132:10357: connect: connection refused
openshift-kube-apiserver 36m Warning ProbeError pod/kube-apiserver-guard-ip-10-0-239-132.ec2.internal Readiness probe error: HTTP probe failed with statuscode: 500...
openshift-marketplace 36m Normal Started pod/redhat-operators-pcjm7 Started container registry-server
openshift-marketplace 36m Normal Started pod/redhat-marketplace-p4zxh Started container registry-server
openshift-marketplace 36m Normal Pulled pod/redhat-operators-pcjm7 Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.12" in 23.44640034s (23.446410829s including waiting)
openshift-kube-apiserver 36m Normal Started pod/installer-12-ip-10-0-239-132.ec2.internal Started container installer
openshift-marketplace 36m Normal Pulled pod/redhat-marketplace-vj67h Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" in 22.598661002s (22.598676357s including waiting)
openshift-marketplace 36m Normal Created pod/redhat-marketplace-p4zxh Created container registry-server
openshift-marketplace 36m Normal Created pod/redhat-marketplace-vj67h Created container registry-server
openshift-marketplace 36m Normal Created pod/certified-operators-dplkw Created container registry-server
openshift-marketplace 36m Normal Created pod/redhat-operators-pcjm7 Created container registry-server
openshift-marketplace 36m Normal Started pod/certified-operators-dplkw Started container registry-server
openshift-marketplace 36m Normal Started pod/redhat-marketplace-vj67h Started container registry-server
openshift-kube-apiserver 36m Normal Created pod/installer-12-ip-10-0-239-132.ec2.internal Created container installer
openshift-kube-apiserver 36m Normal AddedInterface pod/installer-12-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.44/23] from ovn-kubernetes
openshift-kube-apiserver 36m Normal Pulled pod/installer-12-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
openshift-marketplace 36m Normal Pulled pod/redhat-marketplace-p4zxh Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" in 22.46572044s (22.465733521s including waiting)
openshift-marketplace 36m Normal Pulled pod/certified-operators-dplkw Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.12" in 23.049157675s (23.049168036s including waiting)
openshift-kube-controller-manager-operator 35m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: 6353 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:11.027810 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:37:12.224147 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:12.224509 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:37:13.417891 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:13.418182 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:37:14.132537 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:14.132862 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:37:14.218515 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:14.218867 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:37:32.822874 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:32.823140 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:37:40.158585 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:40.158875 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready"
openshift-kube-scheduler 35m Normal Pulled pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine
openshift-kube-scheduler 35m Normal AddedInterface pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.26/23] from ovn-kubernetes
openshift-kube-scheduler 35m Normal Started pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Started container installer
openshift-kube-scheduler 35m Normal Created pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Created container installer
openshift-kube-controller-manager 35m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager-cert-syncer
openshift-kube-controller-manager 35m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling
openshift-kube-controller-manager 35m Normal LeaderElection lease/cert-recovery-controller-lock ip-10-0-197-197_3c6d5355-10aa-4dbf-b07d-3cf69b45a2d3 became leader
openshift-kube-controller-manager 35m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager-recovery-controller
openshift-kube-controller-manager 35m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine
openshift-kube-controller-manager 35m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager-cert-syncer
openshift-kube-controller-manager 35m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine
openshift-kube-controller-manager 35m Normal LeaderElection configmap/cert-recovery-controller-lock ip-10-0-197-197_3c6d5355-10aa-4dbf-b07d-3cf69b45a2d3 became leader
openshift-kube-controller-manager 35m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager
openshift-kube-controller-manager 35m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container kube-controller-manager-recovery-controller
openshift-marketplace 35m Normal Killing pod/redhat-marketplace-p4zxh Stopping container registry-server
openshift-kube-controller-manager-operator 35m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: 6353 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:11.027810 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:37:12.224147 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:12.224509 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:37:13.417891 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:13.418182 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:37:14.132537 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:14.132862 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:37:14.218515 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:14.218867 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:37:32.822874 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:32.823140 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:37:40.158585 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:37:40.158875 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready"
openshift-image-registry 35m Normal Pulling pod/node-ca-sfbnk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7"
openshift-multus 35m Normal Pulling pod/multus-ztsxl Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a"
openshift-dns 35m Normal Pulling pod/node-resolver-vfr6q Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe"
openshift-machine-config-operator 35m Normal Pulling pod/machine-config-daemon-drlvb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6"
openshift-monitoring 35m Normal Pulling pod/node-exporter-58wsk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e"
openshift-multus 35m Normal Pulling pod/multus-additional-cni-plugins-l7zm7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633"
openshift-ovn-kubernetes 35m Normal Pulling pod/ovnkube-node-x4z8l Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967"
openshift-cluster-csi-drivers 35m Normal Pulling pod/aws-ebs-csi-driver-node-8w5jv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140"
openshift-cluster-node-tuning-operator 35m Normal Pulling pod/tuned-5mn5s Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106"
openshift-kube-controller-manager 35m Warning ProbeError pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Startup probe error: Get "https://10.0.239.132:10257/healthz": dial tcp 10.0.239.132:10257: connect: connection refused...
openshift-kube-controller-manager 35m Warning Unhealthy pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Startup probe failed: Get "https://10.0.239.132:10257/healthz": dial tcp 10.0.239.132:10257: connect: connection refused
openshift-monitoring 35m Normal Pulled pod/node-exporter-58wsk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" in 13.429441287s (13.429454882s including waiting)
openshift-multus 35m Normal Created pod/multus-ztsxl Created container kube-multus
openshift-ovn-kubernetes 35m Normal Pulled pod/ovnkube-node-x4z8l Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 17.285201517s (17.285207399s including waiting)
openshift-cluster-node-tuning-operator 35m Normal Started pod/tuned-5mn5s Started container tuned
openshift-monitoring 35m Normal Started pod/node-exporter-58wsk Started container init-textfile
openshift-cluster-node-tuning-operator 35m Normal Created pod/tuned-5mn5s Created container tuned
openshift-dns 35m Normal Pulled pod/node-resolver-vfr6q Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" in 17.194223358s (17.194228821s including waiting)
openshift-machine-config-operator 35m Normal Pulled pod/machine-config-daemon-drlvb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" in 17.173003473s (17.173008507s including waiting)
openshift-image-registry 35m Normal Pulled pod/node-ca-sfbnk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 17.172241941s (17.172246115s including waiting)
openshift-machine-config-operator 35m Normal Pulling pod/machine-config-daemon-drlvb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154"
openshift-multus 35m Normal Pulled pod/multus-ztsxl Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" in 17.184384889s (17.184391691s including waiting)
openshift-cluster-node-tuning-operator 35m Normal Pulled pod/tuned-5mn5s Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" in 17.174497441s (17.174512446s including waiting)
openshift-machine-config-operator 35m Normal Created pod/machine-config-daemon-drlvb Created container machine-config-daemon
openshift-monitoring 35m Normal Created pod/node-exporter-58wsk Created container init-textfile
openshift-machine-config-operator 35m Normal Started pod/machine-config-daemon-drlvb Started container machine-config-daemon
openshift-cluster-csi-drivers 35m Normal Pulled pod/aws-ebs-csi-driver-node-8w5jv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" in 17.180861355s (17.180867781s including waiting)
openshift-multus 35m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" in 17.194384087s (17.194394045s including waiting)
openshift-multus 35m Normal Pulling pod/multus-additional-cni-plugins-l7zm7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df"
openshift-multus 35m Normal Started pod/multus-ztsxl Started container kube-multus
openshift-ovn-kubernetes 35m Normal Pulling pod/ovnkube-node-x4z8l Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d"
openshift-ovn-kubernetes 35m Normal Started pod/ovnkube-node-x4z8l Started container ovn-acl-logging
openshift-monitoring 35m Normal Pulled pod/node-exporter-58wsk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine
openshift-ovn-kubernetes 35m Normal Created pod/ovnkube-node-x4z8l Created container ovn-acl-logging
openshift-ovn-kubernetes 35m Normal Pulled pod/ovnkube-node-x4z8l Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine
openshift-image-registry 35m Normal Created pod/node-ca-sfbnk Created container node-ca
openshift-cluster-csi-drivers 35m Normal Pulling pod/aws-ebs-csi-driver-node-8w5jv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821"
openshift-multus 35m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container egress-router-binary-copy
openshift-multus 35m Normal Started pod/multus-additional-cni-plugins-l7zm7 Started container egress-router-binary-copy
openshift-dns 35m Normal Started pod/node-resolver-vfr6q Started container dns-node-resolver
openshift-dns 35m Normal Created pod/node-resolver-vfr6q Created container dns-node-resolver
openshift-cluster-csi-drivers 35m Normal Created pod/aws-ebs-csi-driver-node-8w5jv Created container csi-driver
openshift-image-registry 35m Normal Started pod/node-ca-sfbnk Started container node-ca
openshift-cluster-csi-drivers 35m Normal Started pod/aws-ebs-csi-driver-node-8w5jv Started container csi-driver
openshift-machine-config-operator 35m Normal Pulled pod/machine-config-daemon-drlvb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" in 1.876490506s (1.876505754s including waiting)
openshift-monitoring 35m Normal Created pod/node-exporter-58wsk Created container node-exporter
openshift-cluster-csi-drivers 35m Normal Pulled pod/aws-ebs-csi-driver-node-8w5jv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" in 1.493887202s (1.493917653s including waiting)
openshift-monitoring 35m Normal Pulling pod/node-exporter-58wsk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d"
openshift-monitoring 35m Normal
Started pod/node-exporter-58wsk Started container node-exporter openshift-kube-scheduler 35m Normal StaticPodInstallerCompleted pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Successfully installed revision 9 openshift-kube-scheduler-operator 35m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) 
\"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:37:03.842426 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:37:03.842538 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:37:05.278545 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:37:05.278574 1 certsync_controller.go:170] Syncing secrets: 
[{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:37:14.144985 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:37:14.145012 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:37:19.520836 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:37:19.520858 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:37:40.254303 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:37:40.254325 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:37:45.569599 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:37:45.569622 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:38:06.272854 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:38:06.272954 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:38:11.647957 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:38:11.648047 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:38:32.290546 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:38:32.290570 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:38:37.676055 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:38:37.676091 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-kube-scheduler 35m Normal Killing pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Stopping container kube-scheduler openshift-kube-scheduler 35m Normal Killing pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Stopping container kube-scheduler-cert-syncer openshift-kube-scheduler 35m Normal Killing pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Stopping container kube-scheduler-recovery-controller openshift-kube-controller-manager 35m Warning ProbeError pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Startup probe error: Get "https://10.0.239.132:10357/healthz": dial tcp 10.0.239.132:10357: connect: connection refused... 
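To pull out only the warning-level entries (such as the ProbeError and Unhealthy startup-probe events above) from this stream, one option, assuming oc access to the cluster, is to filter on the event type:

$ oc get events -n openshift-kube-controller-manager --field-selector type=Warning --sort-by=.lastTimestamp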
openshift-kube-apiserver 35m Normal StaticPodInstallerCompleted pod/installer-12-ip-10-0-239-132.ec2.internal Successfully installed revision 12 openshift-kube-controller-manager 35m Warning Unhealthy pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Startup probe failed: Get "https://10.0.239.132:10357/healthz": dial tcp 10.0.239.132:10357: connect: connection refused openshift-kube-controller-manager 35m Normal Killing pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container cluster-policy-controller failed startup probe, will be restarted openshift-kube-controller-manager 35m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-239-132.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-apiserver 35m Normal Created pod/apiserver-5f568869f-b9bw5 Created container fix-audit-permissions openshift-apiserver 35m Normal Pulled pod/apiserver-5f568869f-b9bw5 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-apiserver 35m Normal AddedInterface pod/apiserver-5f568869f-b9bw5 Add eth0 [10.130.0.42/23] from ovn-kubernetes openshift-apiserver-operator 35m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-5f568869f-b9bw5 pod)" openshift-apiserver 35m Normal Started pod/apiserver-5f568869f-b9bw5 Started container fix-audit-permissions openshift-kube-scheduler-operator 35m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) 
\"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:37:03.842426 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:37:03.842538 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:37:05.278545 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:37:05.278574 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:37:14.144985 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:37:14.145012 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:37:19.520836 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:37:19.520858 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:37:40.254303 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:37:40.254325 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:37:45.569599 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:37:45.569622 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:38:06.272854 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:38:06.272954 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key 
false}]\nStaticPodsDegraded: I0321 12:38:11.647957 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:38:11.648047 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:38:32.290546 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:38:32.290570 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:38:37.676055 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:38:37.676091 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT 
signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" openshift-apiserver 35m Normal Started pod/apiserver-5f568869f-b9bw5 Started container openshift-apiserver openshift-apiserver 35m Normal Pulled pod/apiserver-5f568869f-b9bw5 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-apiserver 35m Normal Pulled pod/apiserver-5f568869f-b9bw5 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-apiserver 35m Normal Created pod/apiserver-5f568869f-b9bw5 Created container openshift-apiserver openshift-kube-scheduler 35m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container kube-scheduler-recovery-controller openshift-kube-scheduler 35m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container kube-scheduler-recovery-controller openshift-apiserver 35m Normal Created pod/apiserver-5f568869f-b9bw5 Created container openshift-apiserver-check-endpoints openshift-apiserver 35m Normal Started pod/apiserver-5f568869f-b9bw5 Started container openshift-apiserver-check-endpoints openshift-kube-scheduler 35m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 35m Warning FastControllerResync pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-apiserver 35m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-apiserver 35m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 35m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-5f568869f-b9bw5 pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-5f568869f-b9bw5 pod)" openshift-apiserver-operator 35m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-5f568869f-b9bw5 pod)" to "All is well" openshift-monitoring 35m Normal Created pod/node-exporter-58wsk Created container kube-rbac-proxy openshift-monitoring 35m Normal Started pod/node-exporter-58wsk Started container 
kube-rbac-proxy openshift-cluster-csi-drivers 35m Normal Pulling pod/aws-ebs-csi-driver-node-8w5jv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" openshift-multus 35m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container cni-plugins openshift-machine-config-operator 35m Normal Started pod/machine-config-daemon-drlvb Started container oauth-proxy openshift-machine-config-operator 35m Normal Created pod/machine-config-daemon-drlvb Created container oauth-proxy openshift-cluster-csi-drivers 35m Normal Started pod/aws-ebs-csi-driver-node-8w5jv Started container csi-node-driver-registrar openshift-multus 35m Normal Started pod/multus-additional-cni-plugins-l7zm7 Started container cni-plugins openshift-multus 35m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" in 24.989179954s (24.989193173s including waiting) openshift-monitoring 35m Normal Pulled pod/node-exporter-58wsk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 24.342780369s (24.342795574s including waiting) openshift-ovn-kubernetes 35m Normal Pulled pod/ovnkube-node-x4z8l Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 25.475209111s (25.475223568s including waiting) openshift-ovn-kubernetes 35m Normal Created pod/ovnkube-node-x4z8l Created container kube-rbac-proxy openshift-ovn-kubernetes 35m Normal Started pod/ovnkube-node-x4z8l Started container kube-rbac-proxy openshift-cluster-csi-drivers 35m Normal Created pod/aws-ebs-csi-driver-node-8w5jv Created container csi-node-driver-registrar openshift-ovn-kubernetes 35m Normal Started pod/ovnkube-node-x4z8l Started container ovnkube-node openshift-ovn-kubernetes 35m Normal Started pod/ovnkube-node-x4z8l Started container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 35m Normal Created pod/ovnkube-node-x4z8l Created container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 35m Normal Pulled pod/ovnkube-node-x4z8l Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ovn-kubernetes 35m Normal Pulled pod/ovnkube-node-x4z8l Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 35m Normal Created pod/ovnkube-node-x4z8l Created container ovnkube-node openshift-cluster-csi-drivers 35m Normal Started pod/aws-ebs-csi-driver-node-8w5jv Started container csi-liveness-probe openshift-ovn-kubernetes 35m Normal Started pod/ovnkube-node-x4z8l Started container ovn-controller openshift-ovn-kubernetes 35m Normal Pulled pod/ovnkube-node-x4z8l Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-cluster-csi-drivers 35m Normal Created pod/aws-ebs-csi-driver-node-8w5jv Created container csi-liveness-probe openshift-cluster-csi-drivers 35m Normal Pulled pod/aws-ebs-csi-driver-node-8w5jv Successfully pulled image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" in 1.006282628s (1.006297056s including waiting) openshift-ovn-kubernetes 35m Normal Created pod/ovnkube-node-x4z8l Created container ovn-controller openshift-multus 35m Normal Pulling pod/multus-additional-cni-plugins-l7zm7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" openshift-dns 35m Normal Pulling pod/dns-default-f7bt7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" openshift-ingress-canary 35m Normal AddedInterface pod/ingress-canary-2zk7z Add eth0 [10.128.2.6/23] from ovn-kubernetes openshift-multus 35m Normal Pulling pod/network-metrics-daemon-f6tv8 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" openshift-monitoring 35m Normal Pulling pod/sre-dns-latency-exporter-snmkd Pulling image "quay.io/app-sre/managed-prometheus-exporter-base:latest" openshift-multus 35m Normal AddedInterface pod/network-metrics-daemon-f6tv8 Add eth0 [10.128.2.4/23] from ovn-kubernetes openshift-ingress-canary 35m Normal Pulling pod/ingress-canary-2zk7z Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" openshift-multus 35m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" in 789.532024ms (789.542222ms including waiting) openshift-dns 35m Normal AddedInterface pod/dns-default-f7bt7 Add eth0 [10.128.2.7/23] from ovn-kubernetes openshift-network-diagnostics 35m Normal AddedInterface pod/network-check-target-2799t Add eth0 [10.128.2.5/23] from ovn-kubernetes openshift-multus 35m Normal Started pod/multus-additional-cni-plugins-l7zm7 Started container bond-cni-plugin openshift-network-diagnostics 35m Normal Pulling pod/network-check-target-2799t Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" openshift-multus 35m Normal Pulling pod/multus-additional-cni-plugins-l7zm7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" openshift-monitoring 35m Normal AddedInterface pod/sre-dns-latency-exporter-snmkd Add eth0 [10.128.2.20/23] from ovn-kubernetes openshift-multus 35m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container bond-cni-plugin openshift-apiserver 35m Normal SuccessfulCreate replicaset/apiserver-5f568869f Created pod: apiserver-5f568869f-wdslz openshift-apiserver 35m Normal Killing pod/apiserver-5f568869f-mpswm Stopping container openshift-apiserver openshift-apiserver 35m Normal Killing pod/apiserver-5f568869f-mpswm Stopping container openshift-apiserver-check-endpoints openshift-multus 34m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" in 4.175323266s (4.175336657s including waiting) openshift-apiserver-operator 34m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message 
changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" openshift-dns 34m Normal Pulled pod/dns-default-f7bt7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" in 5.025808814s (5.025816503s including waiting) openshift-ingress-canary 34m Normal Pulled pod/ingress-canary-2zk7z Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" in 5.04055957s (5.040574686s including waiting) openshift-multus 34m Normal Pulled pod/network-metrics-daemon-f6tv8 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" in 4.942527256s (4.942540266s including waiting) openshift-monitoring 34m Normal Started pod/sre-dns-latency-exporter-snmkd Started container main openshift-kube-scheduler 34m Normal AddedInterface pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.27/23] from ovn-kubernetes openshift-monitoring 34m Normal Pulled pod/sre-dns-latency-exporter-snmkd Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-base:latest" in 5.898733667s (5.89874229s including waiting) openshift-kube-scheduler 34m Normal Created pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Created container installer openshift-kube-scheduler 34m Normal Pulled pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-multus 34m Normal Created pod/network-metrics-daemon-f6tv8 Created container network-metrics-daemon openshift-dns 34m Normal Created pod/dns-default-f7bt7 Created container dns openshift-multus 34m Normal Started pod/network-metrics-daemon-f6tv8 Started container network-metrics-daemon openshift-multus 34m Normal Pulled pod/network-metrics-daemon-f6tv8 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-multus 34m Normal Created pod/network-metrics-daemon-f6tv8 Created container kube-rbac-proxy openshift-multus 34m Normal Started pod/network-metrics-daemon-f6tv8 Started container kube-rbac-proxy openshift-monitoring 34m Normal Created pod/sre-dns-latency-exporter-snmkd Created container main openshift-network-diagnostics 34m Normal Started pod/network-check-target-2799t Started container network-check-target-container openshift-ingress-canary 34m Normal Created pod/ingress-canary-2zk7z Created container serve-healthcheck-canary openshift-network-diagnostics 34m Normal Created pod/network-check-target-2799t Created container network-check-target-container openshift-ingress-canary 34m Normal Started pod/ingress-canary-2zk7z Started container serve-healthcheck-canary openshift-dns 34m Normal Started pod/dns-default-f7bt7 Started container dns openshift-dns 34m Normal Pulled pod/dns-default-f7bt7 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-multus 34m Normal Pulling pod/multus-additional-cni-plugins-l7zm7 Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" openshift-multus 34m Normal Started pod/multus-additional-cni-plugins-l7zm7 Started container routeoverride-cni openshift-dns 34m Normal Created pod/dns-default-f7bt7 Created container kube-rbac-proxy openshift-network-diagnostics 34m Normal Pulled pod/network-check-target-2799t Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" in 5.825450433s (5.825461822s including waiting) openshift-dns 34m Normal Started pod/dns-default-f7bt7 Started container kube-rbac-proxy openshift-multus 34m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container routeoverride-cni openshift-kube-scheduler 34m Normal Started pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Started container installer default 34m Normal Uncordon node/ip-10-0-232-8.ec2.internal Update completed for config rendered-worker-c37c7a9e551f049d382df8406f11fe9b and node has been uncordoned default 34m Normal SetDesiredConfig machineconfigpool/worker Targeted node ip-10-0-187-75.ec2.internal to config rendered-worker-c37c7a9e551f049d382df8406f11fe9b default 34m Normal ConfigDriftMonitorStarted node/ip-10-0-232-8.ec2.internal Config Drift Monitor started, watching against rendered-worker-c37c7a9e551f049d382df8406f11fe9b default 34m Normal NodeDone node/ip-10-0-232-8.ec2.internal Setting node ip-10-0-232-8.ec2.internal, currentConfig rendered-worker-c37c7a9e551f049d382df8406f11fe9b to Done openshift-apiserver-operator 34m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-5f568869f-mpswm pod)" openshift-multus 34m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container whereabouts-cni-bincopy openshift-multus 34m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" in 1.754129668s (1.754145573s including waiting) openshift-multus 34m Normal Started pod/multus-additional-cni-plugins-l7zm7 Started container whereabouts-cni-bincopy openshift-multus 34m Normal Started pod/multus-additional-cni-plugins-l7zm7 Started container whereabouts-cni openshift-multus 34m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container whereabouts-cni openshift-multus 34m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine default 34m Normal Drain node/ip-10-0-187-75.ec2.internal Draining node to update config. 
default 34m Normal Cordon node/ip-10-0-187-75.ec2.internal Cordoned node to apply update openshift-multus 34m Normal Created pod/multus-additional-cni-plugins-l7zm7 Created container kube-multus-additional-cni-plugins default 34m Normal ConfigDriftMonitorStopped node/ip-10-0-187-75.ec2.internal Config Drift Monitor stopped openshift-multus 34m Normal Pulled pod/multus-additional-cni-plugins-l7zm7 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine default 34m Normal NodeSchedulable node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal status is now: NodeSchedulable openshift-kube-controller-manager 34m Normal Started pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Started container cluster-policy-controller openshift-kube-controller-manager 34m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container cluster-policy-controller openshift-kube-controller-manager 34m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" already present on machine openshift-user-workload-monitoring 34m Normal AddedInterface pod/prometheus-user-workload-0 Add eth0 [10.128.2.9/23] from ovn-kubernetes openshift-kube-controller-manager 34m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-239-132.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-user-workload-monitoring 34m Normal Pulling pod/thanos-ruler-user-workload-0 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" openshift-user-workload-monitoring 34m Normal AddedInterface pod/thanos-ruler-user-workload-0 Add eth0 [10.128.2.10/23] from ovn-kubernetes openshift-user-workload-monitoring 34m Normal Pulling pod/prometheus-user-workload-0 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" openshift-user-workload-monitoring 34m Normal Started pod/prometheus-user-workload-0 Started container init-config-reloader openshift-user-workload-monitoring 34m Normal Pulled pod/prometheus-user-workload-0 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" in 817.275381ms (817.289029ms including waiting) openshift-user-workload-monitoring 34m Normal Pulling pod/prometheus-user-workload-0 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" openshift-user-workload-monitoring 34m Normal Created pod/prometheus-user-workload-0 Created container init-config-reloader openshift-image-registry 34m Normal SuccessfulCreate replicaset/image-registry-55b7d998b9 Created pod: image-registry-55b7d998b9-pf4xh openshift-ingress 34m Normal Killing pod/router-default-7cf4c94d4-s4mh5 Stopping container router openshift-monitoring 34m Normal Killing pod/prometheus-adapter-8467ff79fd-rl8p7 Stopping container prometheus-adapter openshift-user-workload-monitoring 34m Normal 
Pulled pod/thanos-ruler-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-user-workload-monitoring 34m Normal Created pod/thanos-ruler-user-workload-0 Created container config-reloader openshift-user-workload-monitoring 34m Normal Started pod/thanos-ruler-user-workload-0 Started container config-reloader openshift-user-workload-monitoring 34m Normal Pulled pod/thanos-ruler-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-user-workload-monitoring 34m Normal Started pod/thanos-ruler-user-workload-0 Started container thanos-ruler openshift-user-workload-monitoring 34m Normal Started pod/thanos-ruler-user-workload-0 Started container thanos-ruler-proxy openshift-user-workload-monitoring 34m Normal Pulled pod/thanos-ruler-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ingress 34m Normal SuccessfulCreate replicaset/router-default-7cf4c94d4 Created pod: router-default-7cf4c94d4-klqtt openshift-monitoring 34m Normal Killing pod/openshift-state-metrics-8757cbbb4-whgf4 Stopping container kube-rbac-proxy-main openshift-monitoring 34m Normal Killing pod/prometheus-operator-7f64545d8-cxj25 Stopping container prometheus-operator openshift-monitoring 34m Normal Killing pod/prometheus-operator-7f64545d8-cxj25 Stopping container kube-rbac-proxy openshift-monitoring 34m Normal SuccessfulCreate replicaset/prometheus-operator-admission-webhook-5c9b9d98cc Created pod: prometheus-operator-admission-webhook-5c9b9d98cc-dvsqk openshift-user-workload-monitoring 34m Normal Created pod/thanos-ruler-user-workload-0 Created container kube-rbac-proxy-metrics openshift-user-workload-monitoring 34m Normal Started pod/thanos-ruler-user-workload-0 Started container kube-rbac-proxy-metrics openshift-monitoring 34m Normal SuccessfulCreate replicaset/prometheus-operator-7f64545d8 Created pod: prometheus-operator-7f64545d8-j6vlm openshift-monitoring 34m Normal Killing pod/openshift-state-metrics-8757cbbb4-whgf4 Stopping container openshift-state-metrics openshift-monitoring 34m Normal Killing pod/openshift-state-metrics-8757cbbb4-whgf4 Stopping container kube-rbac-proxy-self openshift-monitoring 34m Normal Killing pod/prometheus-k8s-0 Stopping container prometheus openshift-monitoring 34m Normal SuccessfulCreate replicaset/openshift-state-metrics-8757cbbb4 Created pod: openshift-state-metrics-8757cbbb4-lk7sd openshift-monitoring 34m Normal Killing pod/alertmanager-main-1 Stopping container alertmanager openshift-monitoring 34m Normal Killing pod/alertmanager-main-1 Stopping container config-reloader openshift-monitoring 34m Normal Killing pod/thanos-querier-6566ccfdd9-jmz7s Stopping container thanos-query openshift-monitoring 34m Normal Killing pod/thanos-querier-6566ccfdd9-jmz7s Stopping container kube-rbac-proxy-rules openshift-monitoring 34m Normal Killing pod/thanos-querier-6566ccfdd9-jmz7s Stopping container prom-label-proxy openshift-user-workload-monitoring 34m Normal Created pod/thanos-ruler-user-workload-0 Created container thanos-ruler openshift-monitoring 34m Normal Killing pod/thanos-querier-6566ccfdd9-jmz7s Stopping container kube-rbac-proxy openshift-monitoring 34m Normal Killing 
pod/thanos-querier-6566ccfdd9-jmz7s Stopping container kube-rbac-proxy-metrics openshift-monitoring 34m Normal Killing pod/prometheus-k8s-0 Stopping container prometheus-proxy openshift-monitoring 34m Normal Killing pod/prometheus-operator-admission-webhook-5c9b9d98cc-nznt8 Stopping container prometheus-operator-admission-webhook openshift-monitoring 34m Normal SuccessfulCreate replicaset/thanos-querier-6566ccfdd9 Created pod: thanos-querier-6566ccfdd9-vppqt openshift-user-workload-monitoring 34m Normal Pulled pod/thanos-ruler-user-workload-0 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" in 1.759619503s (1.759632982s including waiting) openshift-user-workload-monitoring 34m Normal Created pod/thanos-ruler-user-workload-0 Created container thanos-ruler-proxy openshift-monitoring 34m Normal SuccessfulCreate replicaset/prometheus-adapter-8467ff79fd Created pod: prometheus-adapter-8467ff79fd-xg97t openshift-monitoring 34m Normal Killing pod/prometheus-k8s-0 Stopping container kube-rbac-proxy openshift-monitoring 34m Normal Pulled pod/openshift-state-metrics-8757cbbb4-lk7sd Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-image-registry 34m Normal Started pod/image-registry-55b7d998b9-pf4xh Started container registry openshift-image-registry 34m Normal Created pod/image-registry-55b7d998b9-pf4xh Created container registry openshift-monitoring 34m Normal Pulling pod/openshift-state-metrics-8757cbbb4-lk7sd Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:907363827442bc34c33be580ea3ac30198ca65f46a95eb80b2c5255e24d173f3" openshift-monitoring 34m Normal Started pod/openshift-state-metrics-8757cbbb4-lk7sd Started container kube-rbac-proxy-self openshift-monitoring 34m Normal Created pod/openshift-state-metrics-8757cbbb4-lk7sd Created container kube-rbac-proxy-self openshift-image-registry 34m Normal Pulled pod/image-registry-55b7d998b9-pf4xh Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" already present on machine openshift-image-registry 34m Normal AddedInterface pod/image-registry-55b7d998b9-pf4xh Add eth0 [10.128.2.11/23] from ovn-kubernetes openshift-monitoring 34m Normal Pulled pod/openshift-state-metrics-8757cbbb4-lk7sd Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine default 34m Normal NodeNotSchedulable node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal status is now: NodeNotSchedulable openshift-monitoring 34m Normal Pulling pod/prometheus-operator-7f64545d8-j6vlm Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0c9dc9888697e244d61cd89f8fe5a61dcb09dc100889be738db21b2fc5bbf7" openshift-monitoring 34m Normal AddedInterface pod/openshift-state-metrics-8757cbbb4-lk7sd Add eth0 [10.130.2.16/23] from ovn-kubernetes openshift-monitoring 34m Normal AddedInterface pod/prometheus-operator-7f64545d8-j6vlm Add eth0 [10.130.2.17/23] from ovn-kubernetes openshift-monitoring 34m Normal Started pod/openshift-state-metrics-8757cbbb4-lk7sd Started container kube-rbac-proxy-main openshift-monitoring 34m Normal Created pod/openshift-state-metrics-8757cbbb4-lk7sd Created container kube-rbac-proxy-main openshift-monitoring 34m Normal 
SuccessfulCreate statefulset/alertmanager-main create Pod alertmanager-main-1 in StatefulSet alertmanager-main successful openshift-monitoring 34m Normal SuccessfulCreate statefulset/prometheus-k8s create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful openshift-user-workload-monitoring 34m Normal Pulled pod/prometheus-user-workload-0 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" in 2.781514012s (2.781523423s including waiting) openshift-monitoring 34m Normal Pulled pod/prometheus-operator-7f64545d8-j6vlm Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0c9dc9888697e244d61cd89f8fe5a61dcb09dc100889be738db21b2fc5bbf7" in 1.753485204s (1.753501841s including waiting) openshift-user-workload-monitoring 34m Normal Created pod/prometheus-user-workload-0 Created container prometheus openshift-user-workload-monitoring 34m Normal Started pod/prometheus-user-workload-0 Started container prometheus openshift-user-workload-monitoring 34m Normal Pulled pod/prometheus-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 34m Normal Pulled pod/openshift-state-metrics-8757cbbb4-lk7sd Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:907363827442bc34c33be580ea3ac30198ca65f46a95eb80b2c5255e24d173f3" in 1.47605967s (1.476067904s including waiting) openshift-user-workload-monitoring 34m Normal Created pod/prometheus-user-workload-0 Created container config-reloader openshift-user-workload-monitoring 34m Normal Started pod/prometheus-user-workload-0 Started container config-reloader openshift-user-workload-monitoring 34m Normal Pulled pod/prometheus-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" already present on machine openshift-user-workload-monitoring 34m Normal Started pod/prometheus-user-workload-0 Started container kube-rbac-proxy-metrics openshift-apiserver 34m Warning ProbeError pod/apiserver-5f568869f-mpswm Readiness probe error: HTTP probe failed with statuscode: 500... 
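The repeated OperatorStatusChanged events make the Degraded message hard to follow inline. As a sketch, assuming oc access, the current condition can be read directly from the clusteroperator object instead of reconstructing it from events (the jsonpath filter is standard kubectl syntax):

$ oc get clusteroperators
$ oc get clusteroperator openshift-apiserver -o jsonpath='{.status.conditions[?(@.type=="Degraded")].message}'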
openshift-monitoring 34m Normal Pulled pod/prometheus-operator-7f64545d8-j6vlm Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-user-workload-monitoring 34m Normal Started pod/prometheus-user-workload-0 Started container kube-rbac-proxy-thanos openshift-user-workload-monitoring 34m Normal Created pod/prometheus-user-workload-0 Created container kube-rbac-proxy-thanos openshift-monitoring 34m Normal Created pod/prometheus-operator-7f64545d8-j6vlm Created container prometheus-operator openshift-user-workload-monitoring 34m Normal Created pod/prometheus-user-workload-0 Created container thanos-sidecar openshift-user-workload-monitoring 34m Normal Started pod/prometheus-user-workload-0 Started container thanos-sidecar openshift-apiserver 34m Warning Unhealthy pod/apiserver-5f568869f-mpswm Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-user-workload-monitoring 34m Normal Pulled pod/prometheus-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-user-workload-monitoring 34m Normal Created pod/prometheus-user-workload-0 Created container kube-rbac-proxy-federate openshift-user-workload-monitoring 34m Normal Started pod/prometheus-user-workload-0 Started container kube-rbac-proxy-federate openshift-user-workload-monitoring 34m Normal Pulled pod/prometheus-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 34m Normal Started pod/prometheus-operator-7f64545d8-j6vlm Started container prometheus-operator openshift-monitoring 34m Normal Started pod/openshift-state-metrics-8757cbbb4-lk7sd Started container openshift-state-metrics openshift-monitoring 34m Normal Created pod/openshift-state-metrics-8757cbbb4-lk7sd Created container openshift-state-metrics openshift-user-workload-monitoring 34m Normal Created pod/prometheus-user-workload-0 Created container kube-rbac-proxy-metrics openshift-user-workload-monitoring 34m Normal Pulled pod/prometheus-user-workload-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 34m Normal Created pod/prometheus-operator-7f64545d8-j6vlm Created container kube-rbac-proxy openshift-monitoring 34m Normal Started pod/prometheus-operator-7f64545d8-j6vlm Started container kube-rbac-proxy openshift-dns 34m Warning TopologyAwareHintsDisabled service/dns-default Insufficient Node information: allocatable CPU or zone not specified on one or more nodes, addressType: IPv4 openshift-kube-controller-manager-operator 34m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 3 nodes are at revision 7; 0 nodes have achieved new revision 8" to "NodeInstallerProgressing: 2 nodes are at revision 7; 1 nodes are at revision 8",Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7; 0 nodes have achieved new revision 8" to "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 7; 1 nodes are at revision 8" 
openshift-kube-controller-manager-operator 34m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 7 to 8 because static pod is ready openshift-kube-apiserver-operator 34m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:36:16 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:36:47 +0000 UTC is still not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: rue} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: I0321 12:37:16.452653 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: I0321 12:37:16.452904 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: I0321 12:37:17.655942 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: I0321 12:37:17.656239 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Completed: \nStaticPodsDegraded: 
pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " openshift-kube-controller-manager-operator 34m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-140-6.ec2.internal" from revision 7 to 8 because node ip-10-0-140-6.ec2.internal with revision 7 is the oldest openshift-apiserver 34m Warning Unhealthy pod/apiserver-5f568869f-mpswm Readiness probe failed: Get "https://10.128.0.57:8443/readyz": dial tcp 10.128.0.57:8443: connect: connection refused openshift-kube-controller-manager 34m Normal AddedInterface pod/installer-8-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.28/23] from ovn-kubernetes openshift-kube-controller-manager 34m Normal Pulled pod/installer-8-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-apiserver 34m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container setup openshift-kube-apiserver 34m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 34m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container setup openshift-kube-controller-manager 34m Normal Created pod/installer-8-ip-10-0-140-6.ec2.internal Created container installer openshift-kube-controller-manager 34m Normal Started pod/installer-8-ip-10-0-140-6.ec2.internal Started container installer openshift-kube-apiserver 34m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 34m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 34m Normal Created pod/revision-pruner-12-ip-10-0-140-6.ec2.internal Created container pruner openshift-kube-apiserver 34m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver openshift-kube-apiserver 34m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver openshift-kube-apiserver 34m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 34m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver-cert-syncer openshift-kube-apiserver 34m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 34m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 34m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver-insecure-readyz openshift-kube-apiserver 34m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 34m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver-insecure-readyz openshift-kube-apiserver 34m Normal AddedInterface pod/revision-pruner-12-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.29/23] from ovn-kubernetes openshift-kube-apiserver 34m Normal Pulled pod/kube-apiserver-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 34m Normal Started pod/revision-pruner-12-ip-10-0-140-6.ec2.internal Started container pruner openshift-kube-apiserver 34m Normal Pulled pod/revision-pruner-12-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 34m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver-cert-syncer openshift-kube-apiserver 34m Normal Started pod/kube-apiserver-ip-10-0-239-132.ec2.internal Started container kube-apiserver-check-endpoints openshift-kube-apiserver 34m Normal Created pod/kube-apiserver-ip-10-0-239-132.ec2.internal Created container kube-apiserver-check-endpoints openshift-kube-apiserver 34m Warning FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver 34m Warning ClusterInfrastructureStatus namespace/openshift-kube-apiserver unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:serviceaccount:openshift-kube-apiserver:localhost-recovery-client" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-kube-apiserver 34m Warning FastControllerResync node/ip-10-0-239-132.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 34m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: rue} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: I0321 12:37:16.452653 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: I0321 12:37:16.452904 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} 
{bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: I0321 12:37:17.655942 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: I0321 12:37:17.656239 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-239-132.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " to "NodeControllerDegraded: All master nodes are ready" openshift-apiserver 34m Warning ProbeError pod/apiserver-5f568869f-mpswm Readiness probe error: Get "https://10.128.0.57:8443/readyz": dial tcp 10.128.0.57:8443: connect: connection refused... openshift-kube-scheduler 34m Normal Killing pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Stopping container kube-scheduler-cert-syncer openshift-kube-scheduler 34m Normal Killing pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Stopping container kube-scheduler-recovery-controller openshift-kube-scheduler 34m Normal Killing pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Stopping container kube-scheduler openshift-machine-api 34m Normal LeaderElection lease/cluster-api-provider-machineset-leader machine-api-controllers-674d9f54f6-h4xz6_e386050e-b916-47f4-b30d-97e70d90772d became leader openshift-kube-scheduler 34m Normal StaticPodInstallerCompleted pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Successfully installed revision 9 openshift-kube-scheduler 34m Normal SandboxChanged pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Pod sandbox changed, it will be killed and re-created. 
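The NodeCurrentRevisionChanged and NodeTargetRevisionChanged events above show how the static-pod rollout advances one master at a time: ip-10-0-239-132 is marked current at revision 8 once its pod is ready, and ip-10-0-140-6 is chosen next "because node ip-10-0-140-6.ec2.internal with revision 7 is the oldest". A minimal sketch of that selection rule follows; it is illustrative only, not the operator's actual code:

# Hypothetical sketch of "update the node with the oldest revision next".
from dataclasses import dataclass

@dataclass
class NodeStatus:
    name: str
    current_revision: int

def next_node_to_update(nodes, target_revision):
    pending = [n for n in nodes if n.current_revision < target_revision]
    if not pending:
        return None                  # every node is already at the target revision
    # Pick the node with the oldest (lowest) revision; ties here fall back to
    # list order, while the real operator applies its own tie-breaking.
    return min(pending, key=lambda n: n.current_revision)

nodes = [
    NodeStatus("ip-10-0-239-132.ec2.internal", 8),   # already updated to revision 8
    NodeStatus("ip-10-0-140-6.ec2.internal", 7),
    NodeStatus("ip-10-0-197-197.ec2.internal", 7),
]
print(next_node_to_update(nodes, target_revision=8).name)   # ip-10-0-140-6.ec2.internal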
openshift-kube-scheduler 34m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-machine-api 34m Normal LeaderElection lease/cluster-api-provider-nodelink-leader machine-api-controllers-674d9f54f6-h4xz6_ad201704-d7eb-4fad-b1ff-63c13da45025 became leader openshift-kube-scheduler 34m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container wait-for-host-port openshift-kube-scheduler 34m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container wait-for-host-port openshift-machine-api 34m Normal LeaderElection lease/cluster-api-provider-aws-leader machine-api-controllers-674d9f54f6-h4xz6_f1164c56-ba69-4a0f-84b5-3649f67e0a65 became leader openshift-kube-scheduler-operator 34m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting 
additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container 
\"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: I0321 12:38:55.818462 1 base_controller.go:67] Waiting for caches to sync for CertSyncController\nStaticPodsDegraded: I0321 12:38:55.819159 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: I0321 12:38:55.819428 1 event.go:285] Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-scheduler\", Name:\"openshift-kube-scheduler-ip-10-0-140-6.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Warning' reason: 'FastControllerResync' Controller \"CertSyncController\" resync interval is set to 0s which might lead to client request throttling\nStaticPodsDegraded: I0321 12:38:55.920800 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:38:55.920826 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:38:55.920885 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:38:55.920891 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:38:58.309407 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:38:58.309432 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:39:03.702379 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:39:03.702402 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:39:24.326064 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:39:24.326096 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:39:29.727686 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:39:29.727806 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-kube-scheduler 34m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 34m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container kube-scheduler openshift-kube-scheduler 34m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container kube-scheduler openshift-machine-api 34m Normal Update machine/qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2 Updated Machine qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2 openshift-kube-scheduler 34m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 34m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container kube-scheduler-cert-syncer openshift-kube-scheduler 34m Warning FastControllerResync 
pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler 34m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container kube-scheduler-cert-syncer openshift-image-registry 34m Normal Killing pod/image-registry-55b7d998b9-4mbwh Stopping container registry openshift-kube-apiserver 34m Normal Killing pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal Stopping container guard openshift-kube-scheduler-operator 34m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 
12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: I0321 12:38:55.818462 1 base_controller.go:67] Waiting for caches to sync for CertSyncController\nStaticPodsDegraded: I0321 12:38:55.819159 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: I0321 12:38:55.819428 1 event.go:285] Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-scheduler\", Name:\"openshift-kube-scheduler-ip-10-0-140-6.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Warning' reason: 'FastControllerResync' Controller \"CertSyncController\" resync interval is set to 0s which might lead to client request throttling\nStaticPodsDegraded: I0321 12:38:55.920800 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:38:55.920826 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:38:55.920885 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:38:55.920891 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:38:58.309407 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:38:58.309432 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:39:03.702379 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:39:03.702402 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:39:24.326064 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:39:24.326096 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:39:29.727686 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:39:29.727806 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) 
\"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: \nNodeControllerDegraded: All master nodes are ready" openshift-kube-scheduler 34m Normal AddedInterface pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.30/23] from ovn-kubernetes openshift-kube-scheduler 34m Normal Pulled pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 34m Normal Created pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Created container pruner openshift-kube-scheduler 34m Normal Started pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Started container pruner openshift-ingress 34m Warning ProbeError pod/router-default-7cf4c94d4-s4mh5 Readiness probe error: HTTP probe failed with statuscode: 500... 
openshift-kube-scheduler-operator 34m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: ") openshift-cloud-controller-manager-operator 34m Normal LeaderElection lease/cluster-cloud-config-sync-leader ip-10-0-239-132_22e89179-7924-4211-8839-58f92a0245ac became leader openshift-console-operator 34m Warning FastControllerResync deployment/console-operator Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling openshift-console-operator 34m Warning 
FastControllerResync deployment/console-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-console-operator 34m Normal LeaderElection lease/console-operator-lock console-operator-57cbc6b88f-b2ttj_3694f1b7-0b39-4677-8e5b-16394c172ac2 became leader openshift-console-operator 34m Normal LeaderElection configmap/console-operator-lock console-operator-57cbc6b88f-b2ttj_3694f1b7-0b39-4677-8e5b-16394c172ac2 became leader openshift-console-operator 34m Warning FastControllerResync deployment/console-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-ingress 34m Warning ProbeError pod/router-default-7cf4c94d4-s4mh5 Readiness probe error: HTTP probe failed with statuscode: 500... openshift-ingress 34m Warning Unhealthy pod/router-default-7cf4c94d4-s4mh5 Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-kube-controller-manager 33m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container kube-controller-manager openshift-kube-controller-manager 33m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container kube-controller-manager-recovery-controller openshift-kube-controller-manager 33m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container kube-controller-manager-cert-syncer openshift-kube-controller-manager 33m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container cluster-policy-controller openshift-kube-controller-manager 33m Normal StaticPodInstallerCompleted pod/installer-8-ip-10-0-140-6.ec2.internal Successfully installed revision 8 openshift-machine-api 33m Normal LeaderElection lease/cluster-api-provider-healthcheck-leader machine-api-controllers-674d9f54f6-h4xz6_fe2ba16f-4467-435a-93d6-dda2f0b43cb7 became leader openshift-kube-controller-manager-operator 33m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: 8885 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:38:38.379169 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:38:58.212609 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:38:58.212905 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:39:04.396923 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:39:04.398579 1 certsync_controller.go:170] Syncing secrets: 
[{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:39:24.228930 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:39:24.229283 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:39:30.414316 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:39:30.414613 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:39:50.248017 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:39:50.248380 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:39:56.433676 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:39:56.433951 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-cluster-storage-operator 33m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing changed from True to False ("AWSEBSCSIDriverOperatorCRProgressing: All is well") openshift-cluster-csi-drivers 33m Normal LeaderElection configmap/aws-ebs-csi-driver-operator-lock aws-ebs-csi-driver-operator-667bfc499d-7fmff_2fb34eed-d9e1-47b0-af04-daf9b9eb86e4 became leader openshift-cluster-csi-drivers 33m Warning FastControllerResync deployment/aws-ebs-csi-driver-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-cluster-csi-drivers 33m Normal LeaderElection lease/aws-ebs-csi-driver-operator-lock aws-ebs-csi-driver-operator-667bfc499d-7fmff_2fb34eed-d9e1-47b0-af04-daf9b9eb86e4 became leader openshift-kube-controller-manager 33m Normal LeaderElection configmap/cluster-policy-controller-lock ip-10-0-239-132_ee6dd9ad-6182-4439-a053-7ec0a1ad80a0 became leader openshift-kube-controller-manager 33m Normal LeaderElection lease/cluster-policy-controller-lock ip-10-0-239-132_ee6dd9ad-6182-4439-a053-7ec0a1ad80a0 became leader default 33m Normal Reboot node/ip-10-0-187-75.ec2.internal Node will reboot into config rendered-worker-c37c7a9e551f049d382df8406f11fe9b default 33m Normal OSUpdateStaged node/ip-10-0-187-75.ec2.internal Changes to OS staged default 33m Normal OSUpdateStarted node/ip-10-0-187-75.ec2.internal openshift-kube-controller-manager 33m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling default 33m Normal PendingConfig node/ip-10-0-187-75.ec2.internal Written pending config rendered-worker-c37c7a9e551f049d382df8406f11fe9b openshift-kube-controller-manager 33m Warning FastControllerResync 
pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 33m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 33m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 33m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager-cert-syncer openshift-kube-controller-manager 33m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager-cert-syncer openshift-kube-controller-manager 33m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 33m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container cluster-policy-controller openshift-kube-controller-manager 33m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager openshift-kube-controller-manager 33m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container cluster-policy-controller openshift-kube-controller-manager 33m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager-recovery-controller openshift-kube-controller-manager 33m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager-recovery-controller openshift-kube-controller-manager 33m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" already present on machine openshift-kube-controller-manager 33m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-controller-manager 33m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager openshift-kube-controller-manager 33m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-140-6.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-kube-controller-manager-operator 33m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: 
\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: 8885 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:38:38.379169 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:38:58.212609 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:38:58.212905 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:39:04.396923 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:39:04.398579 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:39:24.228930 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:39:24.229283 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:39:30.414316 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:39:30.414613 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:39:50.248017 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:39:50.248380 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:39:56.433676 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:39:56.433951 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-cloud-controller-manager-operator 33m Normal LeaderElection lease/cluster-cloud-controller-manager-leader ip-10-0-239-132_b2141000-9d9b-417f-99cb-12377a614964 became leader openshift-apiserver-operator 33m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-5f568869f-mpswm pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" default 33m Normal RegisteredNode node/ip-10-0-232-8.ec2.internal Node 
ip-10-0-232-8.ec2.internal event: Registered Node ip-10-0-232-8.ec2.internal in Controller default 33m Normal RegisteredNode node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal event: Registered Node ip-10-0-140-6.ec2.internal in Controller default 33m Normal RegisteredNode node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal event: Registered Node ip-10-0-195-121.ec2.internal in Controller openshift-ingress 33m Normal EnsuringLoadBalancer service/router-default Ensuring load balancer default 33m Normal RegisteredNode node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal event: Registered Node ip-10-0-187-75.ec2.internal in Controller default 33m Normal RegisteredNode node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal event: Registered Node ip-10-0-197-197.ec2.internal in Controller default 33m Normal RegisteredNode node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal event: Registered Node ip-10-0-160-152.ec2.internal in Controller kube-system 33m Normal LeaderElection lease/kube-controller-manager ip-10-0-197-197_d34c2e3f-f130-4e2a-8904-9b0b93483e12 became leader default 33m Normal RegisteredNode node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal event: Registered Node ip-10-0-239-132.ec2.internal in Controller kube-system 33m Normal LeaderElection configmap/kube-controller-manager ip-10-0-197-197_d34c2e3f-f130-4e2a-8904-9b0b93483e12 became leader openshift-kube-controller-manager-operator 33m Warning InstallerPodDisappeared deployment/kube-controller-manager-operator pods "installer-8-ip-10-0-140-6.ec2.internal" not found openshift-kube-scheduler-operator 33m Warning InstallerPodDisappeared deployment/openshift-kube-scheduler-operator pods "installer-9-retry-1-ip-10-0-140-6.ec2.internal" not found openshift-ingress 33m Normal EnsuredLoadBalancer service/router-default Ensured load balancer openshift-kube-scheduler-operator 33m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal -n openshift-kube-scheduler because it was missing default 33m Normal OSUpdateStaged node/ip-10-0-140-6.ec2.internal Changes to OS staged default 33m Normal OSUpdateStarted node/ip-10-0-140-6.ec2.internal default 33m Normal PendingConfig node/ip-10-0-140-6.ec2.internal Written pending config rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 openshift-kube-controller-manager-operator 33m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-8-ip-10-0-140-6.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-authentication-operator 33m Normal SecretUpdated deployment/authentication-operator Updated Secret/v4-0-config-system-session -n openshift-authentication because it changed openshift-kube-apiserver-operator 33m Normal NodeCurrentRevisionChanged deployment/kube-apiserver-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 9 to 12 because static pod is ready openshift-kube-apiserver-operator 33m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 9; 1 nodes are at revision 11; 0 nodes have achieved new revision 12" to "NodeInstallerProgressing: 1 nodes are at revision 9; 1 nodes are at revision 11; 1 nodes are at revision 12",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 9; 1 nodes are at 
revision 11; 0 nodes have achieved new revision 12" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 9; 1 nodes are at revision 11; 1 nodes are at revision 12" openshift-etcd-operator 33m Normal PodCreated deployment/etcd-operator Created Pod/revision-pruner-7-ip-10-0-140-6.ec2.internal -n openshift-etcd because it was missing openshift-authentication 33m Normal SuccessfulDelete replicaset/oauth-openshift-5c9d8ccbcc Deleted pod: oauth-openshift-5c9d8ccbcc-vkchb openshift-authentication 33m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-85644d984b to 1 from 0 openshift-authentication-operator 33m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 6, desired generation is 7." openshift-authentication-operator 33m Normal DeploymentUpdated deployment/authentication-operator Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed openshift-authentication 33m Normal SuccessfulCreate replicaset/oauth-openshift-85644d984b Created pod: oauth-openshift-85644d984b-qhpfp openshift-authentication 33m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-5c9d8ccbcc to 1 from 2 openshift-kube-scheduler-operator 33m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/revision-pruner-9-ip-10-0-140-6.ec2.internal -n openshift-kube-scheduler because it was missing openshift-kube-apiserver-operator 33m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-12-ip-10-0-140-6.ec2.internal -n openshift-kube-apiserver because it was missing openshift-authentication-operator 33m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 6, desired generation is 7." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" default 33m Normal Starting node/ip-10-0-187-75.ec2.internal Starting kubelet. 
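The etcd-operator status changes that follow report "etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant" while ip-10-0-140-6 reboots. The arithmetic behind that wording, sketched here purely for illustration: a 3-member cluster needs a majority of 2 for quorum, so with only 2 healthy members no further failure can be tolerated.

# Illustrative quorum arithmetic, not etcd-operator code.
def quorum(members):
    return members // 2 + 1          # majority of the member count

def tolerable_failures(healthy, members):
    return max(healthy - quorum(members), 0)

members, healthy = 3, 2              # figures from the EtcdEndpointsDegraded message below
print(f"quorum={quorum(members)}, healthy={healthy}, "
      f"additional failures tolerated={tolerable_failures(healthy, members)}")
# quorum=2, healthy=2, additional failures tolerated=0 -> not fault tolerant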
default 33m Normal NodeHasSufficientMemory node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal status is now: NodeHasSufficientMemory default 33m Normal NodeHasNoDiskPressure node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal status is now: NodeHasNoDiskPressure default 33m Normal NodeHasSufficientPID node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal status is now: NodeHasSufficientPID default 33m Normal NodeNotReady node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal status is now: NodeNotReady default 33m Normal NodeAllocatableEnforced node/ip-10-0-187-75.ec2.internal Updated Node Allocatable limit across pods default 33m Normal NodeNotSchedulable node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal status is now: NodeNotSchedulable default 33m Warning Rebooted node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal has been rebooted, boot id: 7e0c23d9-936f-4e3e-9a87-38ba3a7bb3f3 default 33m Normal NodeReady node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal status is now: NodeReady openshift-kube-apiserver-operator 33m Normal NodeTargetRevisionChanged deployment/kube-apiserver-operator Updating node "ip-10-0-140-6.ec2.internal" from revision 9 to 12 because node ip-10-0-140-6.ec2.internal with revision 9 is the oldest openshift-machine-api 33m Normal DetectedUnhealthy machine/qeaisrhods-c13-28wr5-infra-us-east-1a-qww78 Machine openshift-machine-api/srep-infra-healthcheck/qeaisrhods-c13-28wr5-infra-us-east-1a-qww78/ip-10-0-187-75.ec2.internal has unhealthy node ip-10-0-187-75.ec2.internal openshift-cluster-storage-operator 33m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing changed from False to True ("AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods") openshift-etcd-operator 33m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 33m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.424411ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" 
clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:943.399µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-apiserver-operator 33m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-12-ip-10-0-140-6.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver-operator 33m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-140-6.ec2.internal\" not ready since 2023-03-21 12:40:55 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])" openshift-kube-controller-manager-operator 33m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-140-6.ec2.internal\" not ready since 2023-03-21 12:40:55 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])" default 33m Normal NodeHasNoDiskPressure node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal status is now: NodeHasNoDiskPressure openshift-etcd-operator 33m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-140-6.ec2.internal\" not ready since 2023-03-21 12:40:55 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.424411ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:943.399µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 
name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.424411ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:943.399µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-apiserver 33m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver-operator 33m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-140-6.ec2.internal\" not ready since 2023-03-21 12:40:55 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])" to "NodeControllerDegraded: All master nodes are ready" default 33m Normal NodeNotReady node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal status is now: NodeNotReady default 33m Normal NodeReady node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal status is now: NodeReady default 33m Normal NodeHasSufficientPID node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal status is now: NodeHasSufficientPID openshift-kube-controller-manager-operator 33m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-140-6.ec2.internal\" not ready since 2023-03-21 12:40:55 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])" to "NodeControllerDegraded: All master nodes are ready" openshift-kube-scheduler-operator 33m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-140-6.ec2.internal\" not ready since 2023-03-21 12:40:55 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])\nNodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) 
\"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: " to "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) 
,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: " openshift-etcd-operator 33m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.424411ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:943.399µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-140-6.ec2.internal\" not ready since 2023-03-21 12:40:55 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.424411ms Error:} 
{Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:943.399µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" default 33m Normal NodeHasSufficientMemory node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal status is now: NodeHasSufficientMemory default 33m Normal NodeNotSchedulable node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal status is now: NodeNotSchedulable default 33m Normal Starting node/ip-10-0-140-6.ec2.internal Starting kubelet. openshift-kube-scheduler-operator 33m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the 
process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: " to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-140-6.ec2.internal\" not ready since 2023-03-21 12:40:55 +0000 UTC because KubeletNotReady ([container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful])\nNodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: " default 33m Warning Rebooted node/ip-10-0-140-6.ec2.internal Node 
ip-10-0-140-6.ec2.internal has been rebooted, boot id: ad8e8c0e-d175-40c3-9bf0-6374d37ccbd9 default 33m Normal NodeAllocatableEnforced node/ip-10-0-140-6.ec2.internal Updated Node Allocatable limit across pods openshift-etcd 33m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-image-registry 33m Normal Pulled pod/node-ca-92xvd Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" already present on machine openshift-cluster-csi-drivers 33m Normal Pulled pod/aws-ebs-csi-driver-node-zcbkq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" already present on machine openshift-etcd 33m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container setup openshift-etcd 33m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container setup openshift-ovn-kubernetes 33m Normal Pulled pod/ovnkube-master-w7545 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-cluster-node-tuning-operator 33m Normal Pulled pod/tuned-zxj2p Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" already present on machine openshift-multus 33m Normal Pulled pod/multus-additional-cni-plugins-b2lhx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" already present on machine openshift-cluster-node-tuning-operator 33m Normal Created pod/tuned-zxj2p Created container tuned openshift-dns 33m Normal Created pod/node-resolver-ndpz5 Created container dns-node-resolver openshift-multus 33m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container egress-router-binary-copy openshift-multus 33m Normal Started pod/multus-additional-cni-plugins-b2lhx Started container egress-router-binary-copy openshift-image-registry 33m Normal Created pod/node-ca-92xvd Created container node-ca openshift-machine-config-operator 33m Normal Pulled pod/machine-config-server-9k88t Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-machine-config-operator 33m Normal Created pod/machine-config-server-9k88t Created container machine-config-server openshift-machine-config-operator 33m Normal Started pod/machine-config-server-9k88t Started container machine-config-server openshift-cluster-node-tuning-operator 33m Normal Started pod/tuned-zxj2p Started container tuned openshift-dns 33m Normal Started pod/node-resolver-ndpz5 Started container dns-node-resolver openshift-machine-config-operator 33m Normal Created pod/machine-config-daemon-s6f62 Created container oauth-proxy openshift-kube-apiserver 33m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container setup openshift-machine-config-operator 33m Normal Pulled pod/machine-config-daemon-s6f62 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-kube-apiserver 33m Normal Created 
pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container setup openshift-machine-config-operator 33m Normal Started pod/machine-config-daemon-s6f62 Started container machine-config-daemon openshift-machine-config-operator 33m Normal Created pod/machine-config-daemon-s6f62 Created container machine-config-daemon openshift-ovn-kubernetes 33m Normal Pulled pod/ovnkube-node-8qw6d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-machine-config-operator 33m Normal Pulled pod/machine-config-daemon-s6f62 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-multus 33m Normal Pulled pod/multus-7x2mr Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine openshift-multus 33m Normal Created pod/multus-7x2mr Created container kube-multus openshift-multus 33m Normal Started pod/multus-7x2mr Started container kube-multus openshift-dns 33m Normal Pulled pod/node-resolver-ndpz5 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" already present on machine openshift-ovn-kubernetes 33m Normal Created pod/ovnkube-master-w7545 Created container northd openshift-ovn-kubernetes 33m Normal Started pod/ovnkube-master-w7545 Started container northd openshift-ovn-kubernetes 33m Normal Pulled pod/ovnkube-master-w7545 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-monitoring 33m Normal Started pod/node-exporter-cghbq Started container init-textfile openshift-monitoring 33m Normal Created pod/node-exporter-cghbq Created container init-textfile openshift-monitoring 33m Normal Pulled pod/node-exporter-cghbq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine openshift-cluster-csi-drivers 33m Normal Created pod/aws-ebs-csi-driver-node-zcbkq Created container csi-driver openshift-cluster-csi-drivers 33m Normal Started pod/aws-ebs-csi-driver-node-zcbkq Started container csi-driver openshift-ovn-kubernetes 33m Normal Pulled pod/ovnkube-master-w7545 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-monitoring 33m Normal Started pod/node-exporter-cghbq Started container kube-rbac-proxy openshift-image-registry 33m Normal Started pod/node-ca-92xvd Started container node-ca openshift-multus 33m Normal Started pod/multus-additional-cni-plugins-b2lhx Started container cni-plugins openshift-ovn-kubernetes 33m Normal Created pod/ovnkube-master-w7545 Created container kube-rbac-proxy openshift-multus 33m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container cni-plugins openshift-kube-scheduler 33m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 33m Normal Created 
pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container kube-scheduler openshift-kube-scheduler 33m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container kube-scheduler openshift-ovn-kubernetes 33m Normal Pulled pod/ovnkube-master-w7545 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ovn-kubernetes 33m Normal Started pod/ovnkube-master-w7545 Started container nbdb openshift-ovn-kubernetes 33m Normal Created pod/ovnkube-master-w7545 Created container nbdb openshift-monitoring 33m Normal Pulled pod/node-exporter-cghbq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine openshift-monitoring 33m Normal Created pod/node-exporter-cghbq Created container node-exporter openshift-kube-scheduler 33m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-monitoring 33m Normal Started pod/node-exporter-cghbq Started container node-exporter openshift-multus 33m Normal Pulled pod/multus-additional-cni-plugins-b2lhx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" already present on machine openshift-monitoring 33m Normal Pulled pod/node-exporter-cghbq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-etcd 33m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-ensure-env-vars openshift-ovn-kubernetes 33m Normal Started pod/ovnkube-node-8qw6d Started container ovn-acl-logging openshift-monitoring 33m Normal Created pod/node-exporter-cghbq Created container kube-rbac-proxy openshift-machine-config-operator 33m Normal Started pod/machine-config-daemon-s6f62 Started container oauth-proxy openshift-ovn-kubernetes 33m Normal Created pod/ovnkube-node-8qw6d Created container ovn-acl-logging openshift-etcd 33m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-ensure-env-vars openshift-ovn-kubernetes 33m Normal Pulled pod/ovnkube-node-8qw6d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-apiserver 33m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-etcd 33m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-ovn-kubernetes 33m Normal Created pod/ovnkube-node-8qw6d Created container kube-rbac-proxy openshift-ovn-kubernetes 33m Normal Started pod/ovnkube-node-8qw6d Started container kube-rbac-proxy openshift-ovn-kubernetes 33m Normal Pulled pod/ovnkube-node-8qw6d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already 
present on machine openshift-ovn-kubernetes 33m Normal Created pod/ovnkube-node-8qw6d Created container kube-rbac-proxy-ovn-metrics openshift-kube-apiserver 33m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver openshift-kube-apiserver 33m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver openshift-kube-apiserver 33m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-cluster-csi-drivers 33m Normal Started pod/aws-ebs-csi-driver-node-zcbkq Started container csi-liveness-probe openshift-cluster-csi-drivers 33m Normal Created pod/aws-ebs-csi-driver-node-zcbkq Created container csi-liveness-probe openshift-cluster-csi-drivers 33m Normal Pulled pod/aws-ebs-csi-driver-node-zcbkq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" already present on machine openshift-cluster-csi-drivers 33m Normal Started pod/aws-ebs-csi-driver-node-zcbkq Started container csi-node-driver-registrar openshift-ovn-kubernetes 33m Normal Started pod/ovnkube-node-8qw6d Started container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 33m Normal Pulled pod/ovnkube-node-8qw6d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-cluster-csi-drivers 33m Normal Created pod/aws-ebs-csi-driver-node-zcbkq Created container csi-node-driver-registrar openshift-cluster-csi-drivers 33m Normal Pulled pod/aws-ebs-csi-driver-node-zcbkq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" already present on machine openshift-ovn-kubernetes 33m Normal Created pod/ovnkube-node-8qw6d Created container ovnkube-node openshift-ovn-kubernetes 33m Normal Started pod/ovnkube-master-w7545 Started container kube-rbac-proxy openshift-ovn-kubernetes 33m Normal Started pod/ovnkube-node-8qw6d Started container ovnkube-node openshift-ovn-kubernetes 33m Normal Pulled pod/ovnkube-master-w7545 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-kube-controller-manager 33m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager-recovery-controller openshift-kube-controller-manager 33m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager-recovery-controller openshift-kube-controller-manager-operator 33m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: 
\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" openshift-kube-controller-manager 33m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-apiserver 33m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver-cert-syncer openshift-kube-apiserver 33m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver-cert-syncer openshift-ovn-kubernetes 33m Normal Created pod/ovnkube-master-w7545 Created container ovnkube-master openshift-ovn-kubernetes 33m Normal Started pod/ovnkube-master-w7545 Started container sbdb openshift-kube-apiserver 33m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-ovn-kubernetes 33m Normal Pulled pod/ovnkube-master-w7545 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 33m Normal Started pod/ovnkube-master-w7545 Started container ovnkube-master openshift-kube-apiserver 33m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver-cert-regeneration-controller openshift-etcd 33m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 33m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-resources-copy openshift-kube-apiserver 33m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 33m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-etcd 33m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-resources-copy openshift-ovn-kubernetes 33m Normal Created pod/ovnkube-master-w7545 Created container sbdb openshift-kube-controller-manager 33m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager-cert-syncer openshift-kube-scheduler 33m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container kube-scheduler-recovery-controller openshift-kube-scheduler 33m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container kube-scheduler-recovery-controller openshift-kube-scheduler 33m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 33m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container kube-scheduler-cert-syncer openshift-kube-scheduler 33m Normal 
Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container kube-scheduler-cert-syncer openshift-kube-apiserver 33m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver-insecure-readyz openshift-ovn-kubernetes 33m Normal Started pod/ovnkube-master-w7545 Started container ovn-dbchecker openshift-ovn-kubernetes 33m Normal Created pod/ovnkube-master-w7545 Created container ovn-dbchecker openshift-authentication-operator 33m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: failed to GET kube-apiserver oauth endpoint https://10.0.140.6:6443/.well-known/oauth-authorization-server: dial tcp 10.0.140.6:6443: i/o timeout" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2" openshift-kube-apiserver 33m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver-insecure-readyz openshift-authentication-operator 33m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: failed to GET kube-apiserver oauth endpoint https://10.0.140.6:6443/.well-known/oauth-authorization-server: dial tcp 10.0.140.6:6443: i/o timeout" to "WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 2" openshift-etcd 33m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 33m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcdctl openshift-authentication-operator 33m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: failed to GET kube-apiserver oauth endpoint https://10.0.140.6:6443/.well-known/oauth-authorization-server: dial tcp 10.0.140.6:6443: i/o timeout" openshift-authentication-operator 33m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available changed from True to False ("WellKnownAvailable: The well-known endpoint is not yet available: failed to GET kube-apiserver oauth endpoint https://10.0.140.6:6443/.well-known/oauth-authorization-server: dial tcp 10.0.140.6:6443: i/o timeout") openshift-etcd 33m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 33m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcdctl openshift-etcd-operator 33m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd 
changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.424411ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:943.399µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.408104ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:2.250751ms Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd 33m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-readyz openshift-etcd 33m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 33m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd openshift-etcd 33m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd openshift-etcd 33m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-metrics openshift-etcd 33m Normal Pulled pod/etcd-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 33m Normal Created pod/etcd-ip-10-0-140-6.ec2.internal Created container etcd-metrics openshift-etcd 33m Normal Started pod/etcd-ip-10-0-140-6.ec2.internal Started container etcd-readyz openshift-multus 33m Normal Pulled pod/multus-additional-cni-plugins-b2lhx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" already present on machine openshift-multus 33m Normal Started pod/multus-additional-cni-plugins-b2lhx Started container bond-cni-plugin openshift-kube-controller-manager-operator 33m Normal OperatorStatusChanged 
deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-multus 33m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container bond-cni-plugin openshift-multus 33m Normal Started pod/multus-additional-cni-plugins-b2lhx Started container routeoverride-cni openshift-multus 33m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container routeoverride-cni openshift-multus 33m Normal Pulled pod/multus-additional-cni-plugins-b2lhx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" already present on machine openshift-multus 33m Normal Pulled pod/multus-additional-cni-plugins-b2lhx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine openshift-multus 33m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container whereabouts-cni-bincopy openshift-multus 33m Normal Started pod/multus-additional-cni-plugins-b2lhx Started container whereabouts-cni-bincopy openshift-kube-apiserver 33m Warning FastControllerResync pod/kube-apiserver-ip-10-0-140-6.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 33m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-node-tuning-operator 32m Normal Pulling pod/tuned-9gtgt Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" openshift-ovn-kubernetes 32m Normal Pulling pod/ovnkube-node-zzdfn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-multus 32m Normal Pulling pod/multus-additional-cni-plugins-4qmk6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" openshift-multus 32m Normal Pulling pod/multus-xqcfd Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" openshift-monitoring 32m Normal Pulling pod/node-exporter-4g9rl Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" openshift-kube-controller-manager 32m Warning ClusterInfrastructureStatus namespace/openshift-kube-controller-manager unable to get cluster infrastructure status, using HA cluster values for 
leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused
openshift-cluster-csi-drivers 32m Normal Pulling pod/aws-ebs-csi-driver-node-s4chb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140"
openshift-kube-scheduler 32m Warning ClusterInfrastructureStatus namespace/openshift-kube-scheduler unable to get cluster infrastructure status, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused
openshift-ovn-kubernetes 32m Normal Created pod/ovnkube-node-8qw6d Created container ovn-controller
openshift-kube-apiserver 32m Warning ClusterInfrastructureStatus namespace/openshift-kube-apiserver unable to get cluster infrastructure status, using HA cluster values for leader election: Get "https://localhost:6443/apis/config.openshift.io/v1/infrastructures/cluster": dial tcp [::1]:6443: connect: connection refused
openshift-kube-controller-manager 32m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-140-6.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope
openshift-ovn-kubernetes 32m Normal Pulled pod/ovnkube-node-8qw6d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine
openshift-ovn-kubernetes 32m Normal Started pod/ovnkube-node-8qw6d Started container ovn-controller
openshift-machine-config-operator 32m Normal Pulling pod/machine-config-daemon-vlfmm Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6"
openshift-dns 32m Normal Pulling pod/node-resolver-qqhl6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe"
openshift-image-registry 32m Normal Pulling pod/node-ca-5ldj8 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7"
openshift-network-diagnostics 32m Normal AddedInterface pod/network-check-target-tmbg6 Add eth0 [10.128.0.3/23] from ovn-kubernetes
openshift-etcd 32m Normal AddedInterface pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.33/23] from ovn-kubernetes
openshift-network-diagnostics 32m Normal Pulled pod/network-check-target-tmbg6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" already present on machine
openshift-kube-scheduler 32m Normal AddedInterface pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.31/23] from ovn-kubernetes
openshift-kube-scheduler 32m Normal AddedInterface pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.34/23] from ovn-kubernetes
openshift-multus 32m Normal AddedInterface pod/network-metrics-daemon-v6lsv Add eth0 [10.128.0.4/23] from ovn-kubernetes
openshift-dns 32m Normal AddedInterface pod/dns-default-wnmv8 Add eth0 [10.128.0.14/23] from ovn-kubernetes
openshift-kube-apiserver 32m Normal AddedInterface pod/installer-12-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.36/23] from ovn-kubernetes
openshift-monitoring 32m Normal AddedInterface pod/sre-dns-latency-exporter-t9jjt Add eth0 [10.128.0.47/23] from ovn-kubernetes
openshift-kube-controller-manager 32m Normal Pulled pod/installer-8-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine
openshift-kube-controller-manager 32m Normal AddedInterface pod/installer-8-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.32/23] from ovn-kubernetes
openshift-monitoring 32m Normal Pulled pod/sre-dns-latency-exporter-t9jjt Container image "quay.io/app-sre/managed-prometheus-exporter-base:latest" already present on machine
openshift-validation-webhook 32m Normal AddedInterface pod/validation-webhook-j7r6j Add eth0 [10.128.0.46/23] from ovn-kubernetes
openshift-security 32m Normal AddedInterface pod/audit-exporter-th592 Add eth0 [10.128.0.48/23] from ovn-kubernetes
openshift-kube-apiserver 32m Normal AddedInterface pod/revision-pruner-12-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.35/23] from ovn-kubernetes
openshift-kube-apiserver 32m Warning ProbeError pod/kube-apiserver-ip-10-0-140-6.ec2.internal Readiness probe error: Get "https://10.0.140.6:6443/readyz": dial tcp 10.0.140.6:6443: connect: connection refused...
openshift-kube-apiserver 32m Warning Unhealthy pod/kube-apiserver-ip-10-0-140-6.ec2.internal Readiness probe failed: Get "https://10.0.140.6:6443/readyz": dial tcp 10.0.140.6:6443: connect: connection refused
openshift-kube-apiserver-operator 32m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Error: W0321 12:41:05.641847 1 cmd.go:213] Using insecure, self-signed certificates\nStaticPodsDegraded: I0321 12:41:05.642242 1 crypto.go:601] Generating new CA for check-endpoints-signer@1679402465 cert, and key in /tmp/serving-cert-1972252469/serving-signer.crt, /tmp/serving-cert-1972252469/serving-signer.key\nStaticPodsDegraded: I0321 12:41:06.313475 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0321 12:41:06.330844 1 builder.go:230] unable to get owner reference (falling back to namespace): Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-140-6.ec2.internal\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0321 12:41:06.331218 1 builder.go:262] check-endpoints version 4.13.0-202303180002.p0.g5cec361.assembly.stream-5cec361-5cec361179f3658986890a87d0b51f40a1da89ad\nStaticPodsDegraded: I0321 12:41:06.347828 1 dynamic_serving_content.go:113] \"Loaded a new cert/key pair\" name=\"serving-cert::/tmp/serving-cert-1972252469/tls.crt::/tmp/serving-cert-1972252469/tls.key\"\nStaticPodsDegraded: F0321 12:41:06.726689 1 cmd.go:138] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: "
openshift-monitoring 32m Normal Created pod/node-exporter-4g9rl Created container init-textfile
openshift-monitoring 32m Normal Pulled pod/node-exporter-4g9rl Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" in 3.078435684s (3.078443379s including waiting)
openshift-kube-scheduler 32m Normal Pulled pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine
openshift-validation-webhook 32m Normal Created pod/validation-webhook-j7r6j Created container webhooks
openshift-monitoring 32m Normal Started pod/sre-dns-latency-exporter-t9jjt Started container main
openshift-multus 32m Normal Pulled pod/network-metrics-daemon-v6lsv Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" already present on machine
openshift-etcd 32m Normal Pulled pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine
openshift-kube-apiserver 32m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
openshift-security 32m Normal Pulling pod/audit-exporter-th592 Pulling image "quay.io/app-sre/splunk-audit-exporter@sha256:bbca8dfd77d15c6dde3495985c1a75354ad79339ecba6820e7ceef2282422964"
openshift-security 32m Normal Pulled pod/audit-exporter-th592 Successfully pulled image "quay.io/app-sre/splunk-audit-exporter@sha256:bbca8dfd77d15c6dde3495985c1a75354ad79339ecba6820e7ceef2282422964" in 326.272668ms (326.279904ms including waiting)
openshift-dns 32m Normal Pulled pod/dns-default-wnmv8 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" already present on machine
openshift-kube-scheduler 32m Normal Pulled pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine
openshift-network-diagnostics 32m Normal Created pod/network-check-target-tmbg6 Created container network-check-target-container
openshift-validation-webhook 32m Normal Pulled pod/validation-webhook-j7r6j Container image "quay.io/app-sre/managed-cluster-validating-webhooks@sha256:3b13c3a89da30c5fbfaf7529ec3175dd43053c508d4bd09c79ef369d53ecc023" already present on machine
openshift-kube-apiserver 32m Normal Pulled pod/installer-12-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
openshift-monitoring 32m Normal Created pod/sre-dns-latency-exporter-t9jjt Created container main
openshift-monitoring 32m Normal Started pod/node-exporter-4g9rl Started container init-textfile
openshift-kube-apiserver 32m Normal Pulled pod/revision-pruner-12-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
openshift-multus 32m Normal Pulled pod/multus-additional-cni-plugins-b2lhx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine
openshift-etcd 32m Normal Created pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Created container pruner
openshift-multus 32m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container whereabouts-cni
openshift-multus 32m Normal Started pod/multus-additional-cni-plugins-b2lhx Started container whereabouts-cni
openshift-dns 32m Normal Started pod/dns-default-wnmv8 Started container kube-rbac-proxy
openshift-dns 32m Normal Started pod/dns-default-wnmv8 Started container dns
openshift-dns 32m Normal Created pod/dns-default-wnmv8 Created container dns
openshift-kube-apiserver 32m Normal Started pod/installer-12-ip-10-0-140-6.ec2.internal Started container installer
openshift-network-diagnostics 32m Normal Started pod/network-check-target-tmbg6 Started container network-check-target-container
openshift-kube-apiserver 32m Normal Created pod/revision-pruner-12-ip-10-0-140-6.ec2.internal Created container pruner
openshift-kube-scheduler 32m Normal Started pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Started container installer
openshift-kube-scheduler 32m Normal Created pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Created container installer
openshift-kube-apiserver 32m Normal Created pod/installer-12-ip-10-0-140-6.ec2.internal Created container installer
openshift-kube-apiserver 32m Normal Started pod/revision-pruner-12-ip-10-0-140-6.ec2.internal Started container pruner
openshift-kube-apiserver 32m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver-check-endpoints
openshift-kube-scheduler 32m Normal Started pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Started container pruner
openshift-kube-scheduler 32m Normal Created pod/revision-pruner-9-ip-10-0-140-6.ec2.internal Created container pruner
openshift-kube-apiserver 32m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver-check-endpoints
openshift-multus 32m Normal Created pod/network-metrics-daemon-v6lsv Created container kube-rbac-proxy
openshift-multus 32m Normal Pulled pod/network-metrics-daemon-v6lsv Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-multus 32m Normal Started pod/network-metrics-daemon-v6lsv Started container network-metrics-daemon
openshift-kube-controller-manager 32m Normal Created pod/installer-8-ip-10-0-140-6.ec2.internal Created container installer
openshift-kube-controller-manager 32m Normal Started pod/installer-8-ip-10-0-140-6.ec2.internal Started container installer
openshift-dns 32m Normal Pulled pod/dns-default-wnmv8 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine
openshift-dns 32m Normal Created pod/dns-default-wnmv8 Created container kube-rbac-proxy
openshift-validation-webhook 32m Normal Started pod/validation-webhook-j7r6j Started container webhooks
openshift-etcd 32m Normal Started pod/revision-pruner-7-ip-10-0-140-6.ec2.internal Started container pruner
openshift-multus 32m Normal Created pod/network-metrics-daemon-v6lsv Created container network-metrics-daemon
openshift-multus 32m Normal Started pod/network-metrics-daemon-v6lsv Started container kube-rbac-proxy
openshift-multus 32m Normal Pulled
pod/multus-additional-cni-plugins-b2lhx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine openshift-multus 32m Normal Started pod/multus-additional-cni-plugins-b2lhx Started container kube-multus-additional-cni-plugins openshift-multus 32m Normal Created pod/multus-additional-cni-plugins-b2lhx Created container kube-multus-additional-cni-plugins openshift-security 32m Normal Created pod/audit-exporter-th592 Created container audit-exporter openshift-security 32m Normal Started pod/audit-exporter-th592 Started container audit-exporter openshift-kube-apiserver 32m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 32m Warning UnhealthyEtcdMember deployment/etcd-operator unhealthy members: ip-10-0-140-6.ec2.internal openshift-etcd-operator 32m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.408104ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:2.250751ms Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.064807ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:820.927µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-apiserver 32m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 32m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes 
are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" openshift-etcd-operator 32m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.064807ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:820.927µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.064807ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:820.927µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" default 32m Warning OperatorDegraded: RequiredPoolsFailed /machine-config Failed to resync 4.13.0-rc.0 because: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, error required pool master is not ready, retrying. 
Status: (total: 3, ready 1, updated: 1, unavailable: 1, degraded: 0)] openshift-authentication-operator 32m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Available changed from False to True ("All is well") openshift-authentication-operator 32m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nWellKnownReadyControllerDegraded: need at least 3 kube-apiservers, got 2" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()" openshift-kube-apiserver-operator 32m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Error: W0321 12:41:05.641847 1 cmd.go:213] Using insecure, self-signed certificates\nStaticPodsDegraded: I0321 12:41:05.642242 1 crypto.go:601] Generating new CA for check-endpoints-signer@1679402465 cert, and key in /tmp/serving-cert-1972252469/serving-signer.crt, /tmp/serving-cert-1972252469/serving-signer.key\nStaticPodsDegraded: I0321 12:41:06.313475 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: W0321 12:41:06.330844 1 builder.go:230] unable to get owner reference (falling back to namespace): Get \"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-140-6.ec2.internal\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: I0321 12:41:06.331218 1 builder.go:262] check-endpoints version 4.13.0-202303180002.p0.g5cec361.assembly.stream-5cec361-5cec361179f3658986890a87d0b51f40a1da89ad\nStaticPodsDegraded: I0321 12:41:06.347828 1 dynamic_serving_content.go:113] \"Loaded a new cert/key pair\" name=\"serving-cert::/tmp/serving-cert-1972252469/tls.crt::/tmp/serving-cert-1972252469/tls.key\"\nStaticPodsDegraded: F0321 12:41:06.726689 1 cmd.go:138] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\": dial tcp [::1]:6443: connect: connection refused\nStaticPodsDegraded: " to "NodeControllerDegraded: All master nodes are ready" openshift-monitoring 32m Normal Pulled pod/node-exporter-4g9rl Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine openshift-etcd-operator 32m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 
name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.064807ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:820.927µs Error:}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" openshift-etcd-operator 32m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.064807ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:820.927µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.140.6:2379]: context deadline exceeded} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:true Took:1.064807ms Error:} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:820.927µs Error:}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" openshift-authentication 32m Normal Pulled pod/oauth-openshift-85644d984b-qhpfp Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" already present on machine openshift-authentication 32m Normal AddedInterface pod/oauth-openshift-85644d984b-qhpfp Add eth0 [10.128.0.37/23] from ovn-kubernetes openshift-authentication 32m Normal SuccessfulCreate replicaset/oauth-openshift-85644d984b Created pod: oauth-openshift-85644d984b-5jmpn openshift-authentication 32m Normal Killing pod/oauth-openshift-86966797f8-b47q9 Stopping container oauth-openshift openshift-authentication 32m Normal SuccessfulDelete replicaset/oauth-openshift-86966797f8 Deleted pod: oauth-openshift-86966797f8-b47q9 
openshift-authentication 32m Normal Created pod/oauth-openshift-85644d984b-qhpfp Created container oauth-openshift
openshift-authentication 32m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-86966797f8 to 0 from 1
openshift-authentication 32m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-85644d984b to 2 from 1
openshift-authentication 32m Normal Started pod/oauth-openshift-85644d984b-qhpfp Started container oauth-openshift
openshift-kube-scheduler-operator 32m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal -n openshift-kube-scheduler because it was missing
openshift-authentication-operator 32m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 1/3 pods have been updated to the latest generation" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: 2/3 pods have been updated to the latest generation"
openshift-kube-scheduler 32m Normal AddedInterface pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.38/23] from ovn-kubernetes
openshift-kube-scheduler 32m Normal Pulled pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine
openshift-kube-scheduler 32m Normal Started pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal Started container guard
openshift-kube-scheduler 32m Normal Created pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal Created container guard
openshift-kube-controller-manager-operator 32m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/kube-controller-manager-guard-ip-10-0-140-6.ec2.internal -n openshift-kube-controller-manager because it was missing
default 32m Normal AnnotationChange machineconfigpool/master Node ip-10-0-197-197.ec2.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-d273453f5fe4894c22cd393f5c0dbfa3
default 32m Normal NodeDone node/ip-10-0-140-6.ec2.internal Setting node ip-10-0-140-6.ec2.internal, currentConfig rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 to Done
default 32m Warning ResolutionFailed namespace/openshift-custom-domains-operator constraints not satisfiable: no operators found from catalog custom-domains-operator-registry in namespace openshift-custom-domains-operator referenced by subscription custom-domains-operator, subscription custom-domains-operator exists
openshift-kube-controller-manager 32m Normal Pulled pod/kube-controller-manager-guard-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine
default 32m Normal ConfigDriftMonitorStarted node/ip-10-0-140-6.ec2.internal Config Drift Monitor started, watching against rendered-master-d273453f5fe4894c22cd393f5c0dbfa3
default 32m Normal Uncordon node/ip-10-0-140-6.ec2.internal Update completed for config rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 and node has been uncordoned
openshift-kube-controller-manager 32m Normal AddedInterface pod/kube-controller-manager-guard-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.39/23] from ovn-kubernetes
default 32m Normal SetDesiredConfig machineconfigpool/master Targeted node ip-10-0-197-197.ec2.internal to config rendered-master-d273453f5fe4894c22cd393f5c0dbfa3
default 32m Warning ResolutionFailed namespace/openshift-deployment-validation-operator constraints not satisfiable: no operators found from catalog deployment-validation-operator-catalog in namespace openshift-deployment-validation-operator referenced by subscription deployment-validation-operator, subscription deployment-validation-operator exists
openshift-kube-controller-manager 32m Normal Started pod/kube-controller-manager-guard-ip-10-0-140-6.ec2.internal Started container guard
openshift-kube-controller-manager 32m Normal Created pod/kube-controller-manager-guard-ip-10-0-140-6.ec2.internal Created container guard
default 32m Normal ConfigDriftMonitorStopped node/ip-10-0-197-197.ec2.internal Config Drift Monitor stopped
default 32m Normal Drain node/ip-10-0-197-197.ec2.internal Draining node to update config.
default 32m Normal Cordon node/ip-10-0-197-197.ec2.internal Cordoned node to apply update
openshift-dns 32m Warning TopologyAwareHintsDisabled service/dns-default Insufficient Node information: allocatable CPU or zone not specified on one or more nodes, addressType: IPv4
openshift-kube-apiserver-operator 32m Normal PodCreated deployment/kube-apiserver-operator Created Pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal -n openshift-kube-apiserver because it was missing
default 32m Normal NodeSchedulable node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal status is now: NodeSchedulable
openshift-image-registry 32m Normal Pulled pod/node-ca-5ldj8 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 18.61374889s (18.613762087s including waiting)
default 32m Warning ResolutionFailed namespace/openshift-addon-operator constraints not satisfiable: no operators found from catalog addon-operator-catalog in namespace openshift-addon-operator referenced by subscription addon-operator, subscription addon-operator exists
default 32m Warning ResolutionFailed namespace/openshift-cloud-ingress-operator constraints not satisfiable: no operators found from catalog cloud-ingress-operator-registry in namespace openshift-cloud-ingress-operator referenced by subscription cloud-ingress-operator, subscription cloud-ingress-operator exists
openshift-etcd-operator 32m Normal PodCreated deployment/etcd-operator Created Pod/etcd-guard-ip-10-0-140-6.ec2.internal -n openshift-etcd because it was missing
openshift-dns 32m Normal Pulled pod/node-resolver-qqhl6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" in 19.386309069s (19.386325354s including waiting)
openshift-controller-manager 32m Normal AddedInterface pod/controller-manager-66b447958d-6mqfl Add eth0 [10.128.0.41/23] from ovn-kubernetes
openshift-apiserver 32m Normal Pulled pod/apiserver-5f568869f-wdslz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine
openshift-controller-manager 32m Normal Pulled pod/controller-manager-66b447958d-6mqfl Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" already present on machine
openshift-multus 32m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" in 19.18116188s (19.181167757s including waiting)
openshift-route-controller-manager 32m Normal AddedInterface pod/route-controller-manager-6594987c6f-246st Add eth0 [10.128.0.42/23] from ovn-kubernetes
openshift-route-controller-manager 32m Normal Pulled pod/route-controller-manager-6594987c6f-246st Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" already present on machine
openshift-oauth-apiserver 32m Normal AddedInterface pod/apiserver-74455c7c5-m45v9 Add eth0 [10.128.0.44/23] from ovn-kubernetes
openshift-oauth-apiserver 32m Normal Pulled pod/apiserver-74455c7c5-m45v9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine
openshift-oauth-apiserver 32m Normal Created pod/apiserver-74455c7c5-m45v9 Created container fix-audit-permissions
openshift-oauth-apiserver 32m Normal Started pod/apiserver-74455c7c5-m45v9 Started container fix-audit-permissions
openshift-apiserver 32m Normal AddedInterface pod/apiserver-5f568869f-wdslz Add eth0 [10.128.0.43/23] from ovn-kubernetes
openshift-route-controller-manager 32m Normal Created pod/route-controller-manager-6594987c6f-246st Created container route-controller-manager
openshift-apiserver 32m Normal Started pod/apiserver-5f568869f-wdslz Started container fix-audit-permissions
openshift-route-controller-manager 32m Normal Started pod/route-controller-manager-6594987c6f-246st Started container route-controller-manager
openshift-kube-apiserver 32m Normal Started pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal Started container guard
openshift-apiserver 32m Normal Created pod/apiserver-5f568869f-wdslz Created container fix-audit-permissions
openshift-kube-apiserver 32m Normal Created pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal Created container guard
openshift-kube-apiserver 32m Normal AddedInterface pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.40/23] from ovn-kubernetes
openshift-kube-apiserver 32m Normal Pulled pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
openshift-controller-manager 32m Normal Started pod/controller-manager-66b447958d-6mqfl Started container controller-manager
openshift-etcd 32m Normal Pulled pod/etcd-guard-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine
openshift-etcd 32m Normal AddedInterface pod/etcd-guard-ip-10-0-140-6.ec2.internal Add eth0 [10.128.0.45/23] from ovn-kubernetes
openshift-monitoring 32m Normal Pulling pod/node-exporter-4g9rl Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d"
openshift-monitoring 32m Normal Started pod/node-exporter-4g9rl Started container node-exporter
openshift-etcd 32m Normal Created pod/etcd-guard-ip-10-0-140-6.ec2.internal Created container guard
openshift-controller-manager 32m Normal SuccessfulDelete replicaset/controller-manager-c5c84d6f9 Deleted pod: controller-manager-c5c84d6f9-tll5c
openshift-controller-manager 32m Normal ScalingReplicaSet deployment/controller-manager Scaled down replica set controller-manager-c5c84d6f9 to 1 from 2
openshift-monitoring 32m Normal Created pod/node-exporter-4g9rl Created container node-exporter
openshift-controller-manager 32m Normal ScalingReplicaSet deployment/controller-manager Scaled up replica set controller-manager-66b447958d to 2 from 1
openshift-etcd 32m Normal Started pod/etcd-guard-ip-10-0-140-6.ec2.internal Started container guard
openshift-authentication-operator 32m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in pending apiserver-74455c7c5-m45v9 pod)"
openshift-controller-manager 32m Normal SuccessfulCreate replicaset/controller-manager-66b447958d Created pod: controller-manager-66b447958d-6gldq
openshift-oauth-apiserver 32m Normal Started pod/apiserver-74455c7c5-m45v9 Started container oauth-apiserver
openshift-oauth-apiserver 32m Normal Pulled pod/apiserver-74455c7c5-m45v9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine
openshift-cluster-csi-drivers 32m Normal Pulled pod/aws-ebs-csi-driver-node-s4chb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" in 20.134649118s (20.13465587s including waiting)
openshift-controller-manager 32m Normal Created pod/controller-manager-66b447958d-6mqfl Created container controller-manager
openshift-oauth-apiserver 32m Normal Created pod/apiserver-74455c7c5-m45v9 Created container oauth-apiserver
openshift-apiserver 32m Normal Pulled pod/apiserver-5f568869f-wdslz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine
openshift-apiserver 32m Normal Created pod/apiserver-5f568869f-wdslz Created container openshift-apiserver
openshift-apiserver-operator 32m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-5f568869f-wdslz pod)"
openshift-apiserver 32m Normal Started pod/apiserver-5f568869f-wdslz Started container openshift-apiserver
openshift-apiserver 32m Normal Pulled pod/apiserver-5f568869f-wdslz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine
openshift-machine-config-operator 32m Normal Pulled pod/machine-config-daemon-vlfmm Successfully pulled image
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" in 20.149246084s (20.149254368s including waiting) openshift-controller-manager 32m Normal Killing pod/controller-manager-c5c84d6f9-tll5c Stopping container controller-manager openshift-apiserver 32m Normal Started pod/apiserver-5f568869f-wdslz Started container openshift-apiserver-check-endpoints openshift-apiserver 32m Normal Created pod/apiserver-5f568869f-wdslz Created container openshift-apiserver-check-endpoints openshift-cloud-credential-operator 32m Normal SuccessfulCreate replicaset/pod-identity-webhook-b645775d7 Created pod: pod-identity-webhook-b645775d7-cmgdm openshift-apiserver 32m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-service-ca-operator 32m Normal SuccessfulCreate replicaset/service-ca-operator-7988896c96 Created pod: service-ca-operator-7988896c96-9vpq6 openshift-apiserver-operator 32m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "image.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = client rate limiter Wait returned an error: context canceled openshift-cloud-network-config-controller 32m Normal AddedInterface pod/cloud-network-config-controller-7cc55b87d4-7wlrt Add eth0 [10.128.0.49/23] from ovn-kubernetes openshift-cloud-network-config-controller 32m Normal Pulling pod/cloud-network-config-controller-7cc55b87d4-7wlrt Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737fbb45ea282de2eba6ed7c7e0112d62d31a74ed0dc6b9d0b1ad01975227945" openshift-apiserver-operator 32m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "quota.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = client rate limiter Wait returned an error: context canceled openshift-apiserver-operator 32m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "route.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = client rate limiter Wait returned an error: context canceled openshift-ingress-operator 32m Normal Killing pod/ingress-operator-6486794b49-42ddh Stopping container ingress-operator openshift-ingress-operator 32m Normal Killing pod/ingress-operator-6486794b49-42ddh Stopping container kube-rbac-proxy openshift-apiserver 32m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 32m Warning OpenShiftAPICheckFailed deployment/openshift-apiserver-operator "project.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = client rate limiter Wait returned an error: context canceled openshift-authentication-operator 32m Normal Killing pod/authentication-operator-dbb89644b-tbxcm Stopping container authentication-operator openshift-authentication-operator 32m Normal SuccessfulCreate replicaset/authentication-operator-dbb89644b Created pod: authentication-operator-dbb89644b-4b786 openshift-cluster-csi-drivers 32m Normal Killing pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Stopping container csi-driver openshift-kube-apiserver-operator 32m Warning InstallerPodFailed deployment/kube-apiserver-operator Failed to create installer pod for revision 12 count 0 on node "ip-10-0-140-6.ec2.internal": client rate limiter Wait returned an error: 
context canceled openshift-cloud-network-config-controller 32m Normal Killing pod/cloud-network-config-controller-7cc55b87d4-drl56 Stopping container controller openshift-service-ca-operator 32m Normal Killing pod/service-ca-operator-7988896c96-5q667 Stopping container service-ca-operator openshift-ingress-operator 32m Normal SuccessfulCreate replicaset/ingress-operator-6486794b49 Created pod: ingress-operator-6486794b49-9zv9g openshift-cluster-csi-drivers 32m Normal Killing pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Stopping container csi-snapshotter openshift-cluster-csi-drivers 32m Normal Killing pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Stopping container resizer-kube-rbac-proxy openshift-apiserver-operator 32m Normal SuccessfulCreate replicaset/openshift-apiserver-operator-67fd94b9d7 Created pod: openshift-apiserver-operator-67fd94b9d7-m22hm openshift-apiserver-operator 32m Normal Killing pod/openshift-apiserver-operator-67fd94b9d7-nvg29 Stopping container openshift-apiserver-operator openshift-cluster-csi-drivers 32m Normal SuccessfulCreate replicaset/aws-ebs-csi-driver-controller-5ff7cf9694 Created pod: aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk openshift-cluster-csi-drivers 32m Normal Killing pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Stopping container snapshotter-kube-rbac-proxy openshift-cluster-storage-operator 32m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" to "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" openshift-controller-manager 32m Normal AddedInterface pod/controller-manager-66b447958d-6gldq Add eth0 [10.129.0.45/23] from ovn-kubernetes openshift-controller-manager 32m Normal Pulled pod/controller-manager-66b447958d-6gldq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" already present on machine openshift-kube-apiserver-operator 32m Normal Killing pod/kube-apiserver-operator-79b598d5b4-dqp95 Stopping container kube-apiserver-operator openshift-kube-apiserver-operator 32m Normal SuccessfulCreate replicaset/kube-apiserver-operator-79b598d5b4 Created pod: kube-apiserver-operator-79b598d5b4-rm6pd openshift-cloud-credential-operator 32m Normal SuccessfulCreate replicaset/cloud-credential-operator-7fffc6cb67 Created pod: cloud-credential-operator-7fffc6cb67-29lts openshift-cloud-credential-operator 32m Normal Killing pod/cloud-credential-operator-7fffc6cb67-gkvnc Stopping container cloud-credential-operator openshift-cloud-credential-operator 32m Normal Killing pod/cloud-credential-operator-7fffc6cb67-gkvnc Stopping container kube-rbac-proxy openshift-cluster-csi-drivers 32m Normal Killing pod/aws-ebs-csi-driver-controller-5ff7cf9694-z9xxp Stopping container csi-liveness-probe openshift-cloud-network-config-controller 32m Normal SuccessfulCreate replicaset/cloud-network-config-controller-7cc55b87d4 Created pod: cloud-network-config-controller-7cc55b87d4-7wlrt openshift-cluster-csi-drivers 32m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Created container driver-kube-rbac-proxy openshift-kube-apiserver-operator 
32m Normal Created pod/kube-apiserver-operator-79b598d5b4-rm6pd Created container kube-apiserver-operator openshift-multus 32m Normal Pulled pod/multus-xqcfd Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" in 23.174246373s (23.174255802s including waiting) openshift-authentication-operator 32m Normal Pulling pod/authentication-operator-dbb89644b-4b786 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8b9deb101306eca89fb04662fd5266a3704ad19d6e54cae5ae79e373c0ec62d" openshift-service-ca-operator 32m Normal AddedInterface pod/service-ca-operator-7988896c96-9vpq6 Add eth0 [10.128.0.54/23] from ovn-kubernetes openshift-service-ca-operator 32m Normal Pulled pod/service-ca-operator-7988896c96-9vpq6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7f7cb6554c1dc9b5b3b58f162f592062e5c63bf24c5ed90a62074e117be3f743" already present on machine openshift-cluster-csi-drivers 32m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ovn-kubernetes 32m Normal Pulled pod/ovnkube-node-zzdfn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 23.344703242s (23.344709283s including waiting) openshift-cloud-credential-operator 32m Normal AddedInterface pod/cloud-credential-operator-7fffc6cb67-29lts Add eth0 [10.128.0.53/23] from ovn-kubernetes openshift-service-ca-operator 32m Normal Created pod/service-ca-operator-7988896c96-9vpq6 Created container service-ca-operator openshift-service-ca-operator 32m Normal Started pod/service-ca-operator-7988896c96-9vpq6 Started container service-ca-operator openshift-kube-apiserver-operator 32m Normal AddedInterface pod/kube-apiserver-operator-79b598d5b4-rm6pd Add eth0 [10.128.0.57/23] from ovn-kubernetes openshift-cloud-credential-operator 32m Normal Pulled pod/cloud-credential-operator-7fffc6cb67-29lts Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-authentication-operator 32m Normal AddedInterface pod/authentication-operator-dbb89644b-4b786 Add eth0 [10.128.0.52/23] from ovn-kubernetes openshift-controller-manager 32m Normal Started pod/controller-manager-66b447958d-6gldq Started container controller-manager openshift-controller-manager 32m Normal Created pod/controller-manager-66b447958d-6gldq Created container controller-manager openshift-cluster-machine-approver 32m Normal Killing pod/machine-approver-5cd47987c9-96cvq Stopping container kube-rbac-proxy openshift-controller-manager 32m Normal SuccessfulCreate replicaset/controller-manager-66b447958d Created pod: controller-manager-66b447958d-w97xv openshift-controller-manager 32m Normal Killing pod/controller-manager-c5c84d6f9-qxhsq Stopping container controller-manager openshift-cluster-machine-approver 32m Normal Killing pod/machine-approver-5cd47987c9-96cvq Stopping container machine-approver-controller openshift-kube-apiserver-operator 32m Normal Started pod/kube-apiserver-operator-79b598d5b4-rm6pd Started container kube-apiserver-operator openshift-machine-config-operator 32m Normal Pulling pod/machine-config-daemon-vlfmm Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" openshift-machine-config-operator 32m Normal Started pod/machine-config-daemon-vlfmm Started container machine-config-daemon openshift-ingress-operator 32m Normal AddedInterface pod/ingress-operator-6486794b49-9zv9g Add eth0 [10.128.0.56/23] from ovn-kubernetes openshift-kube-controller-manager-operator 32m Normal SuccessfulCreate replicaset/kube-controller-manager-operator-655bd6977c Created pod: kube-controller-manager-operator-655bd6977c-27c5p openshift-cloud-credential-operator 32m Normal Created pod/cloud-credential-operator-7fffc6cb67-29lts Created container kube-rbac-proxy openshift-cloud-credential-operator 32m Normal Started pod/cloud-credential-operator-7fffc6cb67-29lts Started container kube-rbac-proxy openshift-controller-manager 32m Normal SuccessfulDelete replicaset/controller-manager-c5c84d6f9 Deleted pod: controller-manager-c5c84d6f9-qxhsq openshift-cluster-machine-approver 32m Normal Pulled pod/machine-approver-5cd47987c9-xkqd2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-node-tuning-operator 32m Normal Started pod/tuned-9gtgt Started container tuned openshift-cluster-node-tuning-operator 32m Normal Created pod/tuned-9gtgt Created container tuned openshift-cloud-credential-operator 32m Normal Pulling pod/cloud-credential-operator-7fffc6cb67-29lts Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:023392c216b04a82b69315c210827b2776d95583bee16754f55577573553cad4" openshift-cluster-node-tuning-operator 32m Normal Pulled pod/tuned-9gtgt Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" in 23.187408511s (23.187416272s including waiting) openshift-controller-manager 32m Normal ScalingReplicaSet deployment/controller-manager Scaled down replica set controller-manager-c5c84d6f9 to 0 from 1 openshift-controller-manager 32m Normal ScalingReplicaSet deployment/controller-manager Scaled up replica set controller-manager-66b447958d to 3 from 2 openshift-machine-config-operator 32m Normal Created pod/machine-config-daemon-vlfmm Created container machine-config-daemon openshift-kube-controller-manager-operator 32m Normal Killing pod/kube-controller-manager-operator-655bd6977c-z9mb9 Stopping container kube-controller-manager-operator openshift-apiserver-operator 32m Normal AddedInterface pod/openshift-apiserver-operator-67fd94b9d7-m22hm Add eth0 [10.128.0.50/23] from ovn-kubernetes openshift-apiserver-operator 32m Normal Pulling pod/openshift-apiserver-operator-67fd94b9d7-m22hm Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:55b8c96568666d4340d71558c31742bd8b5c02ab0cca7913fa41586d5f2de697" openshift-cloud-credential-operator 32m Normal AddedInterface pod/pod-identity-webhook-b645775d7-cmgdm Add eth0 [10.128.0.51/23] from ovn-kubernetes openshift-multus 32m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container egress-router-binary-copy openshift-cluster-csi-drivers 32m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Created container csi-provisioner openshift-cloud-credential-operator 32m Normal Pulled pod/pod-identity-webhook-b645775d7-cmgdm Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e248571068c87bc5b2f69bd4fc2bc3934d8bcd2b2a7fecadc754a30e06ac592" already present on machine openshift-cluster-csi-drivers 32m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77941761aca0cba770d56fcf4d213512b4dd959aa49d3f50c9da02a7aee8d62e" already present on machine openshift-cloud-credential-operator 32m Normal Created pod/pod-identity-webhook-b645775d7-cmgdm Created container pod-identity-webhook openshift-cloud-credential-operator 32m Normal Started pod/pod-identity-webhook-b645775d7-cmgdm Started container pod-identity-webhook openshift-kube-apiserver-operator 32m Normal Pulled pod/kube-apiserver-operator-79b598d5b4-rm6pd Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-cluster-machine-approver 32m Normal Created pod/machine-approver-5cd47987c9-xkqd2 Created container kube-rbac-proxy openshift-cluster-machine-approver 32m Normal Started pod/machine-approver-5cd47987c9-xkqd2 Started container kube-rbac-proxy openshift-cluster-machine-approver 32m Normal Pulling pod/machine-approver-5cd47987c9-xkqd2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:90fd3983343e366cb4df6f35efa1527e4b5da93e90558f23aa416cb9c453375e" openshift-cluster-csi-drivers 32m Normal AddedInterface pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Add eth0 [10.128.0.55/23] from ovn-kubernetes openshift-cluster-machine-approver 32m Normal SuccessfulCreate replicaset/machine-approver-5cd47987c9 Created pod: machine-approver-5cd47987c9-xkqd2 openshift-cluster-csi-drivers 32m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Started container driver-kube-rbac-proxy openshift-ingress-operator 32m Normal Pulling pod/ingress-operator-6486794b49-9zv9g Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" openshift-cluster-csi-drivers 32m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" already present on machine openshift-cluster-csi-drivers 32m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Created container csi-driver openshift-cluster-csi-drivers 32m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Started container csi-driver openshift-cluster-csi-drivers 32m Normal Started pod/aws-ebs-csi-driver-node-s4chb Started container csi-driver openshift-kube-apiserver-operator 32m Warning FastControllerResync deployment/kube-apiserver-operator Controller "GuardController" resync interval is set to 0s which might lead to client request throttling openshift-dns 32m Normal Started pod/node-resolver-qqhl6 Started container dns-node-resolver openshift-dns 32m Normal Created pod/node-resolver-qqhl6 Created container dns-node-resolver openshift-multus 32m Normal Pulling pod/multus-additional-cni-plugins-4qmk6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" openshift-service-ca-operator 32m Warning FastControllerResync deployment/service-ca-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-multus 32m Normal Started 
pod/multus-additional-cni-plugins-4qmk6 Started container egress-router-binary-copy openshift-ovn-kubernetes 32m Normal Pulling pod/ovnkube-node-zzdfn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-cluster-csi-drivers 32m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Started container csi-provisioner openshift-cluster-csi-drivers 32m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-image-registry 32m Normal Created pod/node-ca-5ldj8 Created container node-ca openshift-image-registry 32m Normal Started pod/node-ca-5ldj8 Started container node-ca openshift-service-ca-operator 32m Normal LeaderElection configmap/service-ca-operator-lock service-ca-operator-7988896c96-9vpq6_10cc9ba1-f127-4fe3-bc06-bb5eff0da430 became leader openshift-cluster-csi-drivers 32m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Created container provisioner-kube-rbac-proxy openshift-ovn-kubernetes 32m Normal Started pod/ovnkube-node-zzdfn Started container ovn-acl-logging openshift-ovn-kubernetes 32m Normal Created pod/ovnkube-node-zzdfn Created container ovn-acl-logging openshift-console 32m Normal Killing pod/console-65cc7f8b45-drq2q Stopping container console openshift-kube-apiserver-operator 32m Warning FastControllerResync deployment/kube-apiserver-operator Controller "EventWatchController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 32m Normal LeaderElection lease/kube-apiserver-operator-lock kube-apiserver-operator-79b598d5b4-rm6pd_2d57a171-df4d-4f74-8175-86645c0b24cb became leader openshift-kube-apiserver-operator 32m Warning FastControllerResync deployment/kube-apiserver-operator Controller "ConnectivityCheckController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 32m Warning FastControllerResync deployment/kube-apiserver-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 32m Normal AddedInterface pod/kube-controller-manager-operator-655bd6977c-27c5p Add eth0 [10.128.0.58/23] from ovn-kubernetes openshift-console 32m Normal SuccessfulCreate replicaset/console-65cc7f8b45 Created pod: console-65cc7f8b45-mbjm9 openshift-service-ca-operator 32m Normal LeaderElection lease/service-ca-operator-lock service-ca-operator-7988896c96-9vpq6_10cc9ba1-f127-4fe3-bc06-bb5eff0da430 became leader openshift-service-ca-operator 32m Warning FastControllerResync deployment/service-ca-operator Controller "ServiceCAOperator" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 32m Warning FastControllerResync deployment/kube-apiserver-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-csi-drivers 32m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Started container provisioner-kube-rbac-proxy openshift-cluster-csi-drivers 32m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:726ed98ed8df6da72ea0aaecf62714470ad60d9e5665b65286271e92e4f46c1d" already present on machine openshift-monitoring 32m Normal Created pod/node-exporter-4g9rl Created container kube-rbac-proxy openshift-monitoring 32m Normal Pulled pod/node-exporter-4g9rl Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 4.702198299s (4.702211342s including waiting) openshift-cluster-csi-drivers 32m Normal Created pod/aws-ebs-csi-driver-node-s4chb Created container csi-driver openshift-cluster-csi-drivers 32m Normal Pulling pod/aws-ebs-csi-driver-node-s4chb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" openshift-kube-controller-manager-operator 32m Normal Created pod/kube-controller-manager-operator-655bd6977c-27c5p Created container kube-controller-manager-operator openshift-ovn-kubernetes 32m Normal Pulled pod/ovnkube-node-zzdfn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-kube-apiserver-operator 32m Warning FastControllerResync deployment/kube-apiserver-operator Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 32m Warning FastControllerResync deployment/kube-apiserver-operator Controller "FeatureUpgradeableController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 32m Warning FastControllerResync deployment/kube-apiserver-operator Controller "PruneController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 32m Warning FastControllerResync deployment/kube-apiserver-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-multus 32m Normal Started pod/multus-xqcfd Started container kube-multus openshift-kube-apiserver-operator 32m Warning FastControllerResync deployment/kube-apiserver-operator Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling openshift-kube-apiserver-operator 32m Warning FastControllerResync deployment/kube-apiserver-operator Controller "KubeletVersionSkewController" resync interval is set to 0s which might lead to client request throttling openshift-multus 32m Normal Created pod/multus-xqcfd Created container kube-multus openshift-kube-apiserver-operator 32m Warning FastControllerResync deployment/kube-apiserver-operator Controller "webhookSupportabilityController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 32m Warning FastControllerResync deployment/kube-apiserver-operator Controller "NodeController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 32m Normal Pulled pod/kube-controller-manager-operator-655bd6977c-27c5p Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager-operator 32m Normal Started pod/kube-controller-manager-operator-655bd6977c-27c5p Started container kube-controller-manager-operator 
openshift-kube-apiserver 32m Normal Killing pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Stopping container guard openshift-cluster-csi-drivers 32m Normal Pulled pod/aws-ebs-csi-driver-node-s4chb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" in 1.260792111s (1.260802516s including waiting) openshift-kube-controller-manager-operator 32m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "GuardController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-machine-approver 32m Normal LeaderElection lease/cluster-machine-approver-leader ip-10-0-239-132_1623a866-8b47-4916-8825-6530f1dae19c became leader openshift-console 32m Normal AddedInterface pod/console-65cc7f8b45-mbjm9 Add eth0 [10.128.0.59/23] from ovn-kubernetes openshift-kube-controller-manager-operator 32m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-monitoring 32m Normal Started pod/node-exporter-4g9rl Started container kube-rbac-proxy openshift-dns-operator 32m Normal Killing pod/dns-operator-656b9bd9f9-lb9ps Stopping container dns-operator openshift-dns-operator 32m Normal Killing pod/dns-operator-656b9bd9f9-lb9ps Stopping container kube-rbac-proxy openshift-dns-operator 32m Normal AddedInterface pod/dns-operator-656b9bd9f9-rf9q6 Add eth0 [10.129.0.46/23] from ovn-kubernetes openshift-kube-controller-manager-operator 32m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager-operator 32m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "NodeController" resync interval is set to 0s which might lead to client request throttling openshift-dns-operator 32m Normal Pulling pod/dns-operator-656b9bd9f9-rf9q6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ecfd2df94486e0570eeb0f88a5696ecaa0e1e54bc67d342aab3a6167863175fe" openshift-cluster-machine-approver 32m Normal Pulled pod/machine-approver-5cd47987c9-xkqd2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:90fd3983343e366cb4df6f35efa1527e4b5da93e90558f23aa416cb9c453375e" in 1.732590063s (1.732601999s including waiting) openshift-insights 32m Normal SuccessfulCreate replicaset/insights-operator-6fd65c6b65 Created pod: insights-operator-6fd65c6b65-lh6xj openshift-ovn-kubernetes 32m Normal Pulled pod/ovnkube-node-zzdfn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-controller-manager-operator 32m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-ovn-kubernetes 32m Normal Started pod/ovnkube-node-zzdfn Started container kube-rbac-proxy openshift-cluster-machine-approver 32m Normal Created pod/machine-approver-5cd47987c9-xkqd2 Created container machine-approver-controller openshift-kube-controller-manager-operator 32m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "InstallerController" resync interval is set to 
0s which might lead to client request throttling openshift-cluster-machine-approver 32m Normal Started pod/machine-approver-5cd47987c9-xkqd2 Started container machine-approver-controller openshift-kube-controller-manager-operator 32m Normal LeaderElection lease/kube-controller-manager-operator-lock kube-controller-manager-operator-655bd6977c-27c5p_cf9e4d6d-a481-48fb-8761-3df88efd11e3 became leader openshift-kube-controller-manager-operator 32m Normal LeaderElection configmap/kube-controller-manager-operator-lock kube-controller-manager-operator-655bd6977c-27c5p_cf9e4d6d-a481-48fb-8761-3df88efd11e3 became leader openshift-machine-config-operator 32m Normal Pulled pod/machine-config-daemon-vlfmm Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" in 2.054130594s (2.054137229s including waiting) openshift-ovn-kubernetes 32m Normal Created pod/ovnkube-node-zzdfn Created container kube-rbac-proxy openshift-ovn-kubernetes 32m Normal Pulled pod/ovnkube-node-zzdfn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 1.151953804s (1.151960288s including waiting) openshift-dns-operator 32m Normal SuccessfulCreate replicaset/dns-operator-656b9bd9f9 Created pod: dns-operator-656b9bd9f9-rf9q6 openshift-kube-controller-manager-operator 32m Warning FastControllerResync deployment/kube-controller-manager-operator Controller "PruneController" resync interval is set to 0s which might lead to client request throttling openshift-console 32m Normal Pulled pod/console-65cc7f8b45-mbjm9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2f8ed86b29b0df00f0cfb8b6d170e5fa8d9b0092ee92140788ec5a0a1eb60a10" already present on machine openshift-controller-manager-operator 32m Normal AddedInterface pod/openshift-controller-manager-operator-6548869cc5-xfpsm Add eth0 [10.129.0.47/23] from ovn-kubernetes openshift-cluster-csi-drivers 32m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Created container csi-attacher openshift-insights 32m Normal Killing pod/insights-operator-6fd65c6b65-vrxhp Stopping container insights-operator openshift-controller-manager-operator 32m Normal Killing pod/openshift-controller-manager-operator-6548869cc5-9kqx5 Stopping container openshift-controller-manager-operator openshift-controller-manager-operator 32m Normal SuccessfulCreate replicaset/openshift-controller-manager-operator-6548869cc5 Created pod: openshift-controller-manager-operator-6548869cc5-xfpsm openshift-cluster-csi-drivers 32m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Started container csi-attacher default 32m Normal NodeNotSchedulable node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal status is now: NodeNotSchedulable openshift-cluster-csi-drivers 32m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-machine-api 32m Normal Killing pod/control-plane-machine-set-operator-77b4c948f8-s7qsh Stopping container control-plane-machine-set-operator openshift-machine-api 32m Normal SuccessfulCreate replicaset/control-plane-machine-set-operator-77b4c948f8 Created pod: control-plane-machine-set-operator-77b4c948f8-7vvdb openshift-dns-operator 32m Normal Created 
pod/dns-operator-656b9bd9f9-rf9q6 Created container kube-rbac-proxy openshift-dns-operator 32m Normal Pulled pod/dns-operator-656b9bd9f9-rf9q6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-dns-operator 32m Normal Started pod/dns-operator-656b9bd9f9-rf9q6 Started container kube-rbac-proxy openshift-etcd-operator 32m Normal Killing pod/etcd-operator-775754ddff-xjxrm Stopping container etcd-operator openshift-dns-operator 32m Normal Started pod/dns-operator-656b9bd9f9-rf9q6 Started container dns-operator openshift-cluster-node-tuning-operator 32m Normal Killing pod/cluster-node-tuning-operator-5886c76fd4-7qpt5 Stopping container cluster-node-tuning-operator openshift-cluster-node-tuning-operator 32m Normal SuccessfulCreate replicaset/cluster-node-tuning-operator-5886c76fd4 Created pod: cluster-node-tuning-operator-5886c76fd4-cntr6 openshift-etcd-operator 32m Normal SuccessfulCreate replicaset/etcd-operator-775754ddff Created pod: etcd-operator-775754ddff-tnxcn openshift-dns-operator 32m Normal Pulled pod/dns-operator-656b9bd9f9-rf9q6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ecfd2df94486e0570eeb0f88a5696ecaa0e1e54bc67d342aab3a6167863175fe" in 1.556600554s (1.556612417s including waiting) openshift-dns-operator 32m Normal Created pod/dns-operator-656b9bd9f9-rf9q6 Created container dns-operator openshift-controller-manager-operator 32m Normal Pulling pod/openshift-controller-manager-operator-6548869cc5-xfpsm Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8066a640500eaaf14c73b769e8792c0b420a927adb8db98ec47d9440a85d32" openshift-machine-api 32m Normal Killing pod/machine-api-operator-564474f8c6-284hs Stopping container machine-api-operator openshift-insights 32m Normal AddedInterface pod/insights-operator-6fd65c6b65-lh6xj Add eth0 [10.128.0.60/23] from ovn-kubernetes openshift-cluster-storage-operator 32m Normal SuccessfulCreate replicaset/cluster-storage-operator-fb5868667 Created pod: cluster-storage-operator-fb5868667-wn4n8 openshift-machine-api 32m Normal Killing pod/machine-api-operator-564474f8c6-284hs Stopping container kube-rbac-proxy openshift-cluster-storage-operator 32m Normal AddedInterface pod/cluster-storage-operator-fb5868667-wn4n8 Add eth0 [10.129.0.48/23] from ovn-kubernetes openshift-kube-controller-manager 32m Normal Killing pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal Stopping container guard openshift-kube-scheduler-operator 32m Normal SuccessfulCreate replicaset/openshift-kube-scheduler-operator-c98d57874 Created pod: openshift-kube-scheduler-operator-c98d57874-t6vzp openshift-machine-config-operator 32m Normal SuccessfulCreate replicaset/machine-config-operator-7fd9cd8968 Created pod: machine-config-operator-7fd9cd8968-sbt2v openshift-machine-api 32m Normal SuccessfulCreate replicaset/machine-api-operator-564474f8c6 Created pod: machine-api-operator-564474f8c6-nlqm9 openshift-cluster-storage-operator 32m Normal Killing pod/cluster-storage-operator-fb5868667-cclnx Stopping container cluster-storage-operator openshift-kube-controller-manager-operator 32m Normal SATokenSignerControllerOK deployment/kube-controller-manager-operator found expected kube-apiserver endpoints openshift-cluster-node-tuning-operator 32m Normal Pulled pod/cluster-node-tuning-operator-5886c76fd4-cntr6 Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" already present on machine openshift-cluster-csi-drivers 32m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Created container csi-resizer openshift-image-registry 32m Normal Killing pod/cluster-image-registry-operator-868788f8c6-frhj8 Stopping container cluster-image-registry-operator openshift-cluster-csi-drivers 32m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Started container csi-resizer openshift-console 32m Normal Started pod/console-65cc7f8b45-mbjm9 Started container console openshift-image-registry 32m Normal SuccessfulCreate replicaset/cluster-image-registry-operator-868788f8c6 Created pod: cluster-image-registry-operator-868788f8c6-9j6mj openshift-machine-config-operator 32m Normal Started pod/machine-config-operator-7fd9cd8968-sbt2v Started container machine-config-operator openshift-ingress-operator 32m Normal Created pod/ingress-operator-6486794b49-9zv9g Created container ingress-operator openshift-cluster-csi-drivers 32m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cloud-network-config-controller 32m Normal Started pod/cloud-network-config-controller-7cc55b87d4-7wlrt Started container controller openshift-cloud-network-config-controller 32m Normal Created pod/cloud-network-config-controller-7cc55b87d4-7wlrt Created container controller openshift-machine-config-operator 32m Normal Created pod/machine-config-operator-7fd9cd8968-sbt2v Created container machine-config-operator openshift-cluster-csi-drivers 32m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Started container attacher-kube-rbac-proxy openshift-machine-config-operator 32m Normal Pulled pod/machine-config-operator-7fd9cd8968-sbt2v Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-machine-config-operator 32m Normal AddedInterface pod/machine-config-operator-7fd9cd8968-sbt2v Add eth0 [10.129.0.49/23] from ovn-kubernetes openshift-cluster-csi-drivers 32m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66daa08f96501fa939342eafe2de7be5307656a3ff3ec9bde82664905c695bb6" already present on machine openshift-cluster-csi-drivers 32m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Created container attacher-kube-rbac-proxy openshift-authentication-operator 32m Normal Started pod/authentication-operator-dbb89644b-4b786 Started container authentication-operator openshift-operator-lifecycle-manager 32m Normal AddedInterface pod/catalog-operator-567d5cdcc9-zvdz6 Add eth0 [10.129.0.50/23] from ovn-kubernetes openshift-operator-lifecycle-manager 32m Normal Pulled pod/catalog-operator-567d5cdcc9-zvdz6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" already present on machine openshift-ingress-operator 32m Normal Started pod/ingress-operator-6486794b49-9zv9g Started container ingress-operator openshift-authentication-operator 32m Normal Created pod/authentication-operator-dbb89644b-4b786 Created container authentication-operator 
openshift-cloud-credential-operator 32m Normal Pulled pod/cloud-credential-operator-7fffc6cb67-29lts Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:023392c216b04a82b69315c210827b2776d95583bee16754f55577573553cad4" in 5.636124911s (5.636133067s including waiting) openshift-authentication-operator 32m Normal Pulled pod/authentication-operator-dbb89644b-4b786 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8b9deb101306eca89fb04662fd5266a3704ad19d6e54cae5ae79e373c0ec62d" in 5.851767973s (5.851775247s including waiting) openshift-console 32m Normal Created pod/console-65cc7f8b45-mbjm9 Created container console openshift-cloud-credential-operator 32m Normal Started pod/cloud-credential-operator-7fffc6cb67-29lts Started container cloud-credential-operator openshift-controller-manager-operator 32m Normal Created pod/openshift-controller-manager-operator-6548869cc5-xfpsm Created container openshift-controller-manager-operator openshift-operator-lifecycle-manager 32m Normal SuccessfulCreate replicaset/catalog-operator-567d5cdcc9 Created pod: catalog-operator-567d5cdcc9-zvdz6 openshift-insights 32m Normal Pulling pod/insights-operator-6fd65c6b65-lh6xj Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7cb4c45f3e100ceddafee4c6ccd57d79f5a6627686484aba625c1486c2ffc1c8" openshift-operator-lifecycle-manager 32m Normal Killing pod/olm-operator-647f89bf4f-rgnx9 Stopping container olm-operator openshift-controller-manager-operator 32m Normal Started pod/openshift-controller-manager-operator-6548869cc5-xfpsm Started container openshift-controller-manager-operator openshift-ingress-operator 32m Normal Pulled pod/ingress-operator-6486794b49-9zv9g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-marketplace 32m Normal Killing pod/marketplace-operator-554c77d6df-2q9k5 Stopping container marketplace-operator openshift-cluster-storage-operator 32m Normal Pulling pod/cluster-storage-operator-fb5868667-wn4n8 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2a4719dd49c67aa02ad187264977e0b64ad2b0d6725e99b1d460567663961ef4" openshift-marketplace 32m Normal SuccessfulCreate replicaset/marketplace-operator-554c77d6df Created pod: marketplace-operator-554c77d6df-pn29n openshift-operator-lifecycle-manager 32m Normal SuccessfulCreate replicaset/olm-operator-647f89bf4f Created pod: olm-operator-647f89bf4f-bl8lz openshift-apiserver-operator 32m Normal Started pod/openshift-apiserver-operator-67fd94b9d7-m22hm Started container openshift-apiserver-operator openshift-apiserver-operator 32m Normal Created pod/openshift-apiserver-operator-67fd94b9d7-m22hm Created container openshift-apiserver-operator openshift-machine-api 32m Normal AddedInterface pod/machine-api-operator-564474f8c6-nlqm9 Add eth0 [10.128.0.64/23] from ovn-kubernetes openshift-cloud-network-config-controller 32m Normal Pulled pod/cloud-network-config-controller-7cc55b87d4-7wlrt Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737fbb45ea282de2eba6ed7c7e0112d62d31a74ed0dc6b9d0b1ad01975227945" in 6.365669716s (6.365679483s including waiting) openshift-controller-manager-operator 32m Normal Pulled pod/openshift-controller-manager-operator-6548869cc5-xfpsm Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa8066a640500eaaf14c73b769e8792c0b420a927adb8db98ec47d9440a85d32" in 
2.073751971s (2.073765385s including waiting) openshift-apiserver-operator 32m Normal Pulled pod/openshift-apiserver-operator-67fd94b9d7-m22hm Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:55b8c96568666d4340d71558c31742bd8b5c02ab0cca7913fa41586d5f2de697" in 6.113004231s (6.113010991s including waiting) openshift-cloud-credential-operator 32m Normal Created pod/cloud-credential-operator-7fffc6cb67-29lts Created container cloud-credential-operator openshift-ingress-operator 32m Normal Pulled pod/ingress-operator-6486794b49-9zv9g Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" in 5.859533931s (5.859541797s including waiting) openshift-cluster-node-tuning-operator 32m Normal AddedInterface pod/cluster-node-tuning-operator-5886c76fd4-cntr6 Add eth0 [10.128.0.61/23] from ovn-kubernetes openshift-authentication-operator 32m Warning FastControllerResync deployment/authentication-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-cluster-node-tuning-operator 32m Normal Created pod/cluster-node-tuning-operator-5886c76fd4-cntr6 Created container cluster-node-tuning-operator openshift-cluster-node-tuning-operator 32m Normal Started pod/cluster-node-tuning-operator-5886c76fd4-cntr6 Started container cluster-node-tuning-operator openshift-etcd 32m Normal Killing pod/etcd-guard-ip-10-0-197-197.ec2.internal Stopping container guard openshift-cluster-storage-operator 32m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from False to True ("CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods") openshift-machine-api 32m Normal Pulled pod/machine-api-operator-564474f8c6-nlqm9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fae409a0e6467f2d4e5e1cd0974a33f71fddf6f3b567c278b3a9aad56aa0f089" already present on machine openshift-image-registry 32m Normal AddedInterface pod/cluster-image-registry-operator-868788f8c6-9j6mj Add eth0 [10.128.0.66/23] from ovn-kubernetes openshift-monitoring 32m Normal SuccessfulCreate replicaset/cluster-monitoring-operator-78777bc588 Created pod: cluster-monitoring-operator-78777bc588-fps2r openshift-kube-scheduler-operator 32m Normal AddedInterface pod/openshift-kube-scheduler-operator-c98d57874-t6vzp Add eth0 [10.128.0.65/23] from ovn-kubernetes openshift-authentication-operator 32m Warning FastControllerResync deployment/authentication-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 32m Normal Created pod/openshift-kube-scheduler-operator-c98d57874-t6vzp Created container kube-scheduler-operator-container openshift-monitoring 32m Normal Killing pod/cluster-monitoring-operator-78777bc588-rhggh Stopping container cluster-monitoring-operator openshift-authentication-operator 32m Warning FastControllerResync deployment/authentication-operator Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling openshift-multus 32m Normal SuccessfulCreate replicaset/multus-admission-controller-757b6fbf74 Created pod: multus-admission-controller-757b6fbf74-g2kdg openshift-apiserver-operator 32m Warning FastControllerResync deployment/openshift-apiserver-operator Controller 
"ConnectivityCheckController" resync interval is set to 0s which might lead to client request throttling openshift-machine-api 32m Normal Pulling pod/control-plane-machine-set-operator-77b4c948f8-7vvdb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:278a7aba8f50daaaa56984563a5ca591493989e3353eda2da9516f45a35ee7ed" openshift-authentication-operator 32m Warning FastControllerResync deployment/authentication-operator Controller "SecretRevisionPruneController" resync interval is set to 0s which might lead to client request throttling openshift-ingress-operator 32m Normal Started pod/ingress-operator-6486794b49-9zv9g Started container kube-rbac-proxy openshift-authentication-operator 32m Warning FastControllerResync deployment/authentication-operator Controller "OAuthAPIServerControllerWorkloadController" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 32m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling openshift-etcd-operator 32m Normal AddedInterface pod/etcd-operator-775754ddff-tnxcn Add eth0 [10.128.0.62/23] from ovn-kubernetes openshift-etcd-operator 32m Normal Pulled pod/etcd-operator-775754ddff-tnxcn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-authentication-operator 32m Normal LeaderElection configmap/cluster-authentication-operator-lock authentication-operator-dbb89644b-4b786_c0dcf5f3-1844-4cdc-81e3-a784e5f8cb55 became leader openshift-authentication-operator 32m Warning FastControllerResync deployment/authentication-operator Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling openshift-apiserver-operator 32m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 32m Normal LeaderElection lease/openshift-apiserver-operator-lock openshift-apiserver-operator-67fd94b9d7-m22hm_d2efce05-ca4c-4a39-a7d9-a9be96b7dfa4 became leader openshift-apiserver-operator 32m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "OpenShiftAPIServerWorkloadController" resync interval is set to 0s which might lead to client request throttling openshift-ingress-operator 32m Normal Created pod/ingress-operator-6486794b49-9zv9g Created container kube-rbac-proxy openshift-machine-api 32m Normal Pulled pod/machine-api-operator-564474f8c6-nlqm9 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-machine-api 32m Normal Created pod/machine-api-operator-564474f8c6-nlqm9 Created container kube-rbac-proxy openshift-marketplace 32m Normal AddedInterface pod/marketplace-operator-554c77d6df-pn29n Add eth0 [10.129.0.52/23] from ovn-kubernetes openshift-operator-lifecycle-manager 32m Normal Started pod/olm-operator-647f89bf4f-bl8lz Started container olm-operator openshift-authentication-operator 32m Warning FastControllerResync deployment/authentication-operator Controller "OAuthServerWorkloadController" resync interval is set to 0s which might lead to client request throttling openshift-operator-lifecycle-manager 32m Normal 
Started pod/catalog-operator-567d5cdcc9-zvdz6 Started container catalog-operator openshift-marketplace 32m Normal Pulling pod/marketplace-operator-554c77d6df-pn29n Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e8bda93aae5c360f971e4532706ab6a95eb260e026a6704f837016cab6525fb" openshift-operator-lifecycle-manager 32m Normal Created pod/catalog-operator-567d5cdcc9-zvdz6 Created container catalog-operator openshift-operator-lifecycle-manager 32m Normal Created pod/olm-operator-647f89bf4f-bl8lz Created container olm-operator openshift-authentication-operator 32m Warning FastControllerResync deployment/authentication-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 32m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-machine-api 32m Normal Started pod/machine-api-operator-564474f8c6-nlqm9 Started container kube-rbac-proxy openshift-cluster-storage-operator 32m Normal Killing pod/csi-snapshot-controller-f58c44499-rnqw9 Stopping container snapshot-controller openshift-operator-lifecycle-manager 32m Normal AddedInterface pod/olm-operator-647f89bf4f-bl8lz Add eth0 [10.129.0.51/23] from ovn-kubernetes openshift-machine-api 32m Normal AddedInterface pod/control-plane-machine-set-operator-77b4c948f8-7vvdb Add eth0 [10.128.0.63/23] from ovn-kubernetes openshift-apiserver-operator 32m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-apiserver-operator 32m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "SecretRevisionPruneController" resync interval is set to 0s which might lead to client request throttling openshift-machine-config-operator 32m Normal Killing pod/machine-config-operator-7fd9cd8968-9vg57 Stopping container machine-config-operator openshift-apiserver-operator 32m Warning FastControllerResync deployment/openshift-apiserver-operator Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling openshift-operator-lifecycle-manager 32m Normal Killing pod/catalog-operator-567d5cdcc9-gwwnx Stopping container catalog-operator openshift-cluster-csi-drivers 32m Normal Pulled pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:281a8f2be0f5afefc956d758928a761a97ac9a5b3e1f4f5785717906d791a5e3" already present on machine openshift-cluster-csi-drivers 32m Normal Started pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Started container resizer-kube-rbac-proxy openshift-kube-scheduler-operator 32m Normal Killing pod/openshift-kube-scheduler-operator-c98d57874-wj7tl Stopping container kube-scheduler-operator-container openshift-apiserver-operator 32m Normal LeaderElection configmap/openshift-apiserver-operator-lock openshift-apiserver-operator-67fd94b9d7-m22hm_d2efce05-ca4c-4a39-a7d9-a9be96b7dfa4 became leader openshift-authentication-operator 32m Normal LeaderElection lease/cluster-authentication-operator-lock authentication-operator-dbb89644b-4b786_c0dcf5f3-1844-4cdc-81e3-a784e5f8cb55 became leader openshift-multus 32m Normal Killing pod/multus-admission-controller-757b6fbf74-5hdn7 Stopping container kube-rbac-proxy openshift-multus 
32m Normal Killing pod/multus-admission-controller-757b6fbf74-5hdn7 Stopping container multus-admission-controller openshift-cluster-csi-drivers 32m Normal Created pod/aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk Created container resizer-kube-rbac-proxy openshift-operator-lifecycle-manager 32m Normal Pulled pod/olm-operator-647f89bf4f-bl8lz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" already present on machine openshift-kube-scheduler-operator 32m Normal Pulled pod/openshift-kube-scheduler-operator-c98d57874-t6vzp Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-cluster-storage-operator 32m Normal SuccessfulCreate replicaset/csi-snapshot-controller-f58c44499 Created pod: csi-snapshot-controller-f58c44499-svdlt openshift-monitoring 32m Normal Pulling pod/cluster-monitoring-operator-78777bc588-fps2r Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9a1e35d8ae26fad862261135aaaa0658befbaccf9ffba55291dc4e8a95c20546" openshift-etcd-operator 32m Warning FastControllerResync deployment/etcd-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 32m Warning FastControllerResync deployment/etcd-operator Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 32m Warning ReportEtcdMembersErrorUpdatingStatus deployment/etcd-operator etcds.operator.openshift.io "cluster" not found openshift-etcd-operator 32m Warning FastControllerResync deployment/etcd-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 32m Normal Created pod/csi-snapshot-controller-f58c44499-svdlt Created container snapshot-controller openshift-etcd-operator 32m Normal Started pod/etcd-operator-775754ddff-tnxcn Started container etcd-operator openshift-etcd-operator 32m Normal Created pod/etcd-operator-775754ddff-tnxcn Created container etcd-operator openshift-etcd-operator 32m Warning FastControllerResync deployment/etcd-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 32m Normal Pulled pod/cluster-storage-operator-fb5868667-wn4n8 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2a4719dd49c67aa02ad187264977e0b64ad2b0d6725e99b1d460567663961ef4" in 2.830755825s (2.830767512s including waiting) openshift-cluster-storage-operator 32m Normal Pulled pod/csi-snapshot-controller-f58c44499-svdlt Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f6985210e2dec2b96cd8cd1dc6965ce2710b23b2c515d9ae67a694245bd41082" already present on machine openshift-etcd-operator 32m Warning FastControllerResync deployment/etcd-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 32m Warning FastControllerResync deployment/etcd-operator Controller "NodeController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 32m Normal Started pod/openshift-kube-scheduler-operator-c98d57874-t6vzp Started container kube-scheduler-operator-container 
openshift-route-controller-manager 32m Normal SuccessfulCreate replicaset/route-controller-manager-6594987c6f Created pod: route-controller-manager-6594987c6f-q7rdv openshift-etcd-operator 32m Warning FastControllerResync deployment/etcd-operator Controller "PruneController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 32m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling openshift-route-controller-manager 32m Normal Killing pod/route-controller-manager-6594987c6f-qfkcc Stopping container route-controller-manager openshift-kube-scheduler-operator 32m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "PruneController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 32m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "NodeController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 32m Normal Killing pod/csi-snapshot-controller-operator-c9586b974-wk85s Stopping container csi-snapshot-controller-operator openshift-kube-scheduler-operator 32m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 32m Normal LeaderElection lease/openshift-cluster-kube-scheduler-operator-lock openshift-kube-scheduler-operator-c98d57874-t6vzp_3c3e7513-5ff4-4bc1-a4d6-f5269375efbe became leader openshift-kube-scheduler-operator 32m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler-operator 32m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "GuardController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 32m Normal SuccessfulCreate replicaset/csi-snapshot-controller-operator-c9586b974 Created pod: csi-snapshot-controller-operator-c9586b974-k2tdv openshift-kube-scheduler-operator 32m Warning FastControllerResync deployment/openshift-kube-scheduler-operator Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling openshift-etcd-operator 32m Normal LeaderElection configmap/openshift-cluster-etcd-operator-lock etcd-operator-775754ddff-tnxcn_927569d5-1fc1-4916-bfd0-8da4c8e1956d became leader openshift-machine-api 32m Normal Created pod/machine-api-operator-564474f8c6-nlqm9 Created container machine-api-operator openshift-cluster-storage-operator 32m Normal Killing pod/csi-snapshot-webhook-75476bf784-sfhhx Stopping container webhook openshift-etcd-operator 32m Normal LeaderElection lease/openshift-cluster-etcd-operator-lock etcd-operator-775754ddff-tnxcn_927569d5-1fc1-4916-bfd0-8da4c8e1956d became leader openshift-machine-api 32m Normal Started pod/machine-api-operator-564474f8c6-nlqm9 Started container machine-api-operator openshift-kube-scheduler-operator 32m Normal LeaderElection configmap/openshift-cluster-kube-scheduler-operator-lock openshift-kube-scheduler-operator-c98d57874-t6vzp_3c3e7513-5ff4-4bc1-a4d6-f5269375efbe became leader openshift-etcd-operator 32m 
Warning FastControllerResync deployment/etcd-operator Controller "GuardController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 32m Normal SuccessfulCreate replicaset/csi-snapshot-webhook-75476bf784 Created pod: csi-snapshot-webhook-75476bf784-bhnwx openshift-monitoring 32m Normal AddedInterface pod/cluster-monitoring-operator-78777bc588-fps2r Add eth0 [10.129.0.53/23] from ovn-kubernetes openshift-multus 32m Normal AddedInterface pod/multus-admission-controller-757b6fbf74-g2kdg Add eth0 [10.128.0.67/23] from ovn-kubernetes openshift-kube-apiserver 32m Normal AddedInterface pod/revision-pruner-12-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.43/23] from ovn-kubernetes openshift-kube-apiserver 32m Normal Pulled pod/revision-pruner-12-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-cluster-storage-operator 32m Normal Started pod/csi-snapshot-controller-f58c44499-svdlt Started container snapshot-controller openshift-multus 32m Normal Pulled pod/multus-admission-controller-757b6fbf74-g2kdg Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3c3cca6e2da92a6cd38e7f20f77bffc675895bd800157fdb50261b7f7ea9fc90" already present on machine openshift-cluster-storage-operator 32m Normal AddedInterface pod/csi-snapshot-controller-f58c44499-svdlt Add eth0 [10.128.0.68/23] from ovn-kubernetes openshift-image-registry 32m Normal Pulling pod/cluster-image-registry-operator-868788f8c6-9j6mj Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d049299956446154ed1d1c21e5d4561bb452b41f6c3bf17a48f3550a2c998cbe" openshift-config-operator 32m Normal SuccessfulCreate replicaset/openshift-config-operator-67bdbffb68 Created pod: openshift-config-operator-67bdbffb68-9f2m6 openshift-oauth-apiserver 32m Normal SuccessfulCreate replicaset/apiserver-74455c7c5 Created pod: apiserver-74455c7c5-6zb4s openshift-operator-lifecycle-manager 32m Normal Killing pod/packageserver-7c998868c6-wnqfz Stopping container packageserver openshift-machine-api 32m Normal SuccessfulCreate replicaset/cluster-autoscaler-operator-7fcffdb7c8 Created pod: cluster-autoscaler-operator-7fcffdb7c8-hswcn openshift-etcd-operator 32m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: etcd-all-certs, secrets: etcd-all-certs-7]\nEtcdMembersDegraded: No unhealthy members found" to "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: etcd-all-certs, secrets: etcd-all-certs-7]\nEtcdMembersDegraded: No unhealthy members found" openshift-cluster-storage-operator 32m Normal Started pod/cluster-storage-operator-fb5868667-wn4n8 Started container cluster-storage-operator openshift-machine-api 32m Normal Killing pod/cluster-autoscaler-operator-7fcffdb7c8-g4w4m Stopping container kube-rbac-proxy openshift-operator-lifecycle-manager 32m Normal SuccessfulCreate replicaset/packageserver-7c998868c6 Created 
pod: packageserver-7c998868c6-fzz2h openshift-cluster-storage-operator 32m Normal Pulling pod/csi-snapshot-controller-operator-c9586b974-k2tdv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85e377fa5f92f13c07ca57eeaa575f7ef80ed954ae231f70ca70bfbe173b070b" openshift-cluster-storage-operator 32m Normal AddedInterface pod/csi-snapshot-controller-operator-c9586b974-k2tdv Add eth0 [10.129.0.54/23] from ovn-kubernetes openshift-etcd-operator 32m Warning RequiredInstallerResourcesMissing deployment/etcd-operator secrets: etcd-all-certs, secrets: etcd-all-certs-7 openshift-etcd-operator 32m Normal OperatorLogLevelChange deployment/etcd-operator Operator log level changed from "Debug" to "Normal" openshift-oauth-apiserver 32m Normal Killing pod/apiserver-74455c7c5-rpzl9 Stopping container oauth-apiserver openshift-etcd-operator 32m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" openshift-etcd-operator 32m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: etcd-all-certs, secrets: etcd-all-certs-7]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" openshift-etcd-operator 32m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: etcd-all-certs, secrets: etcd-all-certs-7]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: etcd-all-certs, secrets: etcd-all-certs-7]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" openshift-etcd-operator 32m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: 
missing required resources: [secrets: etcd-all-certs, secrets: etcd-all-certs-7]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: configmaps lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: etcd-all-certs, secrets: etcd-all-certs-7]\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 32m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-140-6.ec2.internal is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" openshift-machine-api 32m Normal Killing pod/cluster-autoscaler-operator-7fcffdb7c8-g4w4m Stopping container cluster-autoscaler-operator openshift-multus 32m Normal Pulled pod/multus-admission-controller-757b6fbf74-g2kdg Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-multus 32m Normal Started pod/multus-admission-controller-757b6fbf74-g2kdg Started container multus-admission-controller openshift-multus 32m Normal Created pod/multus-admission-controller-757b6fbf74-g2kdg Created container multus-admission-controller openshift-kube-apiserver 32m Normal Started pod/revision-pruner-12-ip-10-0-197-197.ec2.internal Started container pruner openshift-cluster-storage-operator 32m Normal Created pod/cluster-storage-operator-fb5868667-wn4n8 Created container cluster-storage-operator openshift-kube-apiserver 32m Normal Created pod/revision-pruner-12-ip-10-0-197-197.ec2.internal Created container pruner openshift-machine-api 32m Normal Killing pod/cluster-baremetal-operator-cb6794dd9-8bqk2 Stopping container baremetal-kube-rbac-proxy openshift-marketplace 32m Normal Started pod/marketplace-operator-554c77d6df-pn29n Started container marketplace-operator openshift-operator-lifecycle-manager 32m Normal SuccessfulCreate replicaset/package-server-manager-fc98f8f64 Created pod: package-server-manager-fc98f8f64-h9b5w openshift-marketplace 32m Normal Pulled pod/marketplace-operator-554c77d6df-pn29n Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e8bda93aae5c360f971e4532706ab6a95eb260e026a6704f837016cab6525fb" in 2.348566359s (2.34857595s including waiting) openshift-marketplace 32m Normal Created pod/marketplace-operator-554c77d6df-pn29n Created container marketplace-operator openshift-config-operator 32m Normal AddedInterface pod/openshift-config-operator-67bdbffb68-9f2m6 Add eth0 [10.129.0.55/23] from ovn-kubernetes openshift-kube-storage-version-migrator-operator 32m Normal Killing pod/kube-storage-version-migrator-operator-7f8b95cf5f-x5hzl Stopping container kube-storage-version-migrator-operator openshift-kube-storage-version-migrator-operator 32m Normal SuccessfulCreate replicaset/kube-storage-version-migrator-operator-7f8b95cf5f Created pod: kube-storage-version-migrator-operator-7f8b95cf5f-dvvp5 openshift-config-operator 32m Normal Pulling pod/openshift-config-operator-67bdbffb68-9f2m6 Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6eca04bc4045ccf6694e6e0c94453e9c1d8dcbb669a58419603b3c2aab18488b" openshift-machine-api 32m Warning FailedToUpdateEndpoint endpoints/cluster-baremetal-operator-service Failed to update endpoint openshift-machine-api/cluster-baremetal-operator-service: Operation cannot be fulfilled on endpoints "cluster-baremetal-operator-service": the object has been modified; please apply your changes to the latest version and try again openshift-machine-api 32m Normal AddedInterface pod/cluster-autoscaler-operator-7fcffdb7c8-hswcn Add eth0 [10.129.0.56/23] from ovn-kubernetes openshift-operator-lifecycle-manager 32m Normal Killing pod/package-server-manager-fc98f8f64-l2df9 Stopping container package-server-manager openshift-machine-api 32m Normal SuccessfulCreate replicaset/cluster-baremetal-operator-cb6794dd9 Created pod: cluster-baremetal-operator-cb6794dd9-h8ch4 openshift-machine-api 32m Normal Killing pod/cluster-baremetal-operator-cb6794dd9-8bqk2 Stopping container cluster-baremetal-operator openshift-cluster-storage-operator 32m Normal AddedInterface pod/csi-snapshot-webhook-75476bf784-bhnwx Add eth0 [10.128.0.69/23] from ovn-kubernetes openshift-insights 32m Normal Created pod/insights-operator-6fd65c6b65-lh6xj Created container insights-operator openshift-multus 32m Normal Started pod/multus-admission-controller-757b6fbf74-g2kdg Started container kube-rbac-proxy openshift-image-registry 32m Warning FastControllerResync deployment/cluster-image-registry-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-image-registry 32m Normal Started pod/cluster-image-registry-operator-868788f8c6-9j6mj Started container cluster-image-registry-operator openshift-config-operator 32m Normal Killing pod/openshift-config-operator-67bdbffb68-sdgx7 Stopping container openshift-config-operator openshift-image-registry 32m Normal Created pod/cluster-image-registry-operator-868788f8c6-9j6mj Created container cluster-image-registry-operator openshift-etcd-operator 32m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "ScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: etcd-all-certs, secrets: etcd-all-certs-7]\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: etcd-all-certs, secrets: etcd-all-certs-7]\nEtcdMembersDegraded: No unhealthy members found" openshift-operator-lifecycle-manager 32m Warning ProbeError pod/packageserver-7c998868c6-wnqfz Readiness probe error: Get "https://10.130.0.62:5443/healthz": dial tcp 10.130.0.62:5443: connect: connection refused... 
openshift-image-registry 32m Normal Pulled pod/cluster-image-registry-operator-868788f8c6-9j6mj Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d049299956446154ed1d1c21e5d4561bb452b41f6c3bf17a48f3550a2c998cbe" in 3.147120936s (3.147149317s including waiting) openshift-machine-api 32m Normal Started pod/control-plane-machine-set-operator-77b4c948f8-7vvdb Started container control-plane-machine-set-operator openshift-multus 32m Normal Created pod/multus-admission-controller-757b6fbf74-g2kdg Created container kube-rbac-proxy openshift-cluster-storage-operator 32m Normal Pulled pod/csi-snapshot-webhook-75476bf784-bhnwx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e7a32310238d69d56d35be8f7de426bdbedf96ff73edcd198698ac174c6d3c34" already present on machine openshift-image-registry 32m Normal LeaderElection configmap/openshift-master-controllers cluster-image-registry-operator-868788f8c6-9j6mj_01abe253-881e-40f6-949f-1433aef79681 became leader openshift-machine-api 32m Normal Pulled pod/control-plane-machine-set-operator-77b4c948f8-7vvdb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:278a7aba8f50daaaa56984563a5ca591493989e3353eda2da9516f45a35ee7ed" in 3.994511659s (3.994519047s including waiting) openshift-operator-lifecycle-manager 32m Normal AddedInterface pod/packageserver-7c998868c6-fzz2h Add eth0 [10.128.0.70/23] from ovn-kubernetes openshift-operator-lifecycle-manager 32m Normal Pulled pod/packageserver-7c998868c6-fzz2h Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" already present on machine openshift-insights 32m Normal Pulled pod/insights-operator-6fd65c6b65-lh6xj Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7cb4c45f3e100ceddafee4c6ccd57d79f5a6627686484aba625c1486c2ffc1c8" in 5.000327903s (5.000339029s including waiting) openshift-insights 32m Normal Started pod/insights-operator-6fd65c6b65-lh6xj Started container insights-operator openshift-operator-lifecycle-manager 32m Normal Created pod/packageserver-7c998868c6-fzz2h Created container packageserver openshift-apiserver 32m Normal Killing pod/apiserver-5f568869f-b9bw5 Stopping container openshift-apiserver openshift-apiserver 32m Normal Killing pod/apiserver-5f568869f-b9bw5 Stopping container openshift-apiserver-check-endpoints openshift-machine-api 32m Normal Created pod/control-plane-machine-set-operator-77b4c948f8-7vvdb Created container control-plane-machine-set-operator openshift-machine-api 32m Normal Pulled pod/cluster-autoscaler-operator-7fcffdb7c8-hswcn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-scheduler 32m Normal Killing pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal Stopping container guard openshift-apiserver 32m Normal SuccessfulCreate replicaset/apiserver-5f568869f Created pod: apiserver-5f568869f-kw7fx openshift-operator-lifecycle-manager 32m Warning Unhealthy pod/packageserver-7c998868c6-wnqfz Readiness probe failed: Get "https://10.130.0.62:5443/healthz": dial tcp 10.130.0.62:5443: connect: connection refused openshift-image-registry 32m Normal LeaderElection lease/openshift-master-controllers cluster-image-registry-operator-868788f8c6-9j6mj_01abe253-881e-40f6-949f-1433aef79681 became leader openshift-cluster-storage-operator 32m Normal Created 
pod/csi-snapshot-webhook-75476bf784-bhnwx Created container webhook openshift-operator-lifecycle-manager 32m Normal Started pod/packageserver-7c998868c6-fzz2h Started container packageserver openshift-cluster-storage-operator 32m Normal Pulled pod/csi-snapshot-controller-operator-c9586b974-k2tdv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85e377fa5f92f13c07ca57eeaa575f7ef80ed954ae231f70ca70bfbe173b070b" in 3.348515744s (3.348523396s including waiting) openshift-cloud-credential-operator 32m Normal MutatingWebhookConfigurationUpdated deployment/cloud-credential-operator Updated MutatingWebhookConfiguration.admissionregistration.k8s.io/pod-identity-webhook because it changed openshift-operator-lifecycle-manager 32m Normal AddedInterface pod/package-server-manager-fc98f8f64-h9b5w Add eth0 [10.128.0.72/23] from ovn-kubernetes openshift-kube-storage-version-migrator-operator 32m Normal AddedInterface pod/kube-storage-version-migrator-operator-7f8b95cf5f-dvvp5 Add eth0 [10.128.0.71/23] from ovn-kubernetes openshift-kube-storage-version-migrator-operator 32m Normal Pulling pod/kube-storage-version-migrator-operator-7f8b95cf5f-dvvp5 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc8e1a30ec145b1e91f862880b9866d48abe8056fe69edd94d760739137b6d4a" openshift-cloud-credential-operator 32m Normal DeploymentUpdated deployment/cloud-credential-operator Updated Deployment.apps/pod-identity-webhook -n openshift-cloud-credential-operator because it changed openshift-operator-lifecycle-manager 32m Warning ProbeError pod/packageserver-7c998868c6-fzz2h Readiness probe error: Get "https://10.128.0.70:5443/healthz": dial tcp 10.128.0.70:5443: connect: connection refused... openshift-operator-lifecycle-manager 32m Normal Pulled pod/package-server-manager-fc98f8f64-h9b5w Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" already present on machine openshift-operator-lifecycle-manager 32m Normal Started pod/package-server-manager-fc98f8f64-h9b5w Started container package-server-manager openshift-cluster-storage-operator 32m Normal Started pod/csi-snapshot-webhook-75476bf784-bhnwx Started container webhook openshift-marketplace 32m Warning Unhealthy pod/marketplace-operator-554c77d6df-pn29n Readiness probe failed: Get "http://10.129.0.52:8080/healthz": dial tcp 10.129.0.52:8080: connect: connection refused openshift-marketplace 32m Warning ProbeError pod/marketplace-operator-554c77d6df-pn29n Readiness probe error: Get "http://10.129.0.52:8080/healthz": dial tcp 10.129.0.52:8080: connect: connection refused... 
openshift-machine-api 32m Normal Pulling pod/cluster-baremetal-operator-cb6794dd9-h8ch4 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a4f74849d28d578b213bb837750fbc967abe1cf433ad7611dde27be1f15baf36" openshift-machine-api 32m Normal AddedInterface pod/cluster-baremetal-operator-cb6794dd9-h8ch4 Add eth0 [10.128.0.73/23] from ovn-kubernetes openshift-monitoring 32m Normal Pulled pod/cluster-monitoring-operator-78777bc588-fps2r Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9a1e35d8ae26fad862261135aaaa0658befbaccf9ffba55291dc4e8a95c20546" in 4.037536016s (4.037544995s including waiting) openshift-operator-lifecycle-manager 32m Warning Unhealthy pod/packageserver-7c998868c6-fzz2h Readiness probe failed: Get "https://10.128.0.70:5443/healthz": dial tcp 10.128.0.70:5443: connect: connection refused openshift-operator-lifecycle-manager 32m Normal Created pod/package-server-manager-fc98f8f64-h9b5w Created container package-server-manager openshift-cluster-storage-operator 32m Normal Created pod/csi-snapshot-controller-operator-c9586b974-k2tdv Created container csi-snapshot-controller-operator openshift-machine-api 32m Normal Started pod/cluster-autoscaler-operator-7fcffdb7c8-hswcn Started container kube-rbac-proxy openshift-apiserver-operator 32m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-5f568869f-wdslz pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-5f568869f-b9bw5 pod)" openshift-kube-scheduler 32m Normal Killing pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Stopping container kube-scheduler openshift-machine-api 32m Normal Created pod/cluster-autoscaler-operator-7fcffdb7c8-hswcn Created container kube-rbac-proxy openshift-monitoring 32m Normal Created pod/cluster-monitoring-operator-78777bc588-fps2r Created container cluster-monitoring-operator openshift-kube-scheduler 32m Normal StaticPodInstallerCompleted pod/installer-9-retry-1-ip-10-0-140-6.ec2.internal Successfully installed revision 9 openshift-cluster-storage-operator 32m Normal Started pod/csi-snapshot-controller-operator-c9586b974-k2tdv Started container csi-snapshot-controller-operator openshift-kube-scheduler 32m Normal Killing pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Stopping container kube-scheduler-cert-syncer openshift-machine-api 32m Normal Pulling pod/cluster-autoscaler-operator-7fcffdb7c8-hswcn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b696ffc14cdc67e31403d1a6308c7448d7970ed7f872ec18fea9c2017029814" openshift-kube-scheduler 32m Normal Killing pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Stopping container kube-scheduler-recovery-controller openshift-monitoring 32m Normal Started pod/cluster-monitoring-operator-78777bc588-fps2r Started container cluster-monitoring-operator openshift-machine-api 32m Normal Pulled pod/cluster-baremetal-operator-cb6794dd9-h8ch4 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a4f74849d28d578b213bb837750fbc967abe1cf433ad7611dde27be1f15baf36" in 2.414003143s (2.414018104s including waiting) openshift-kube-storage-version-migrator-operator 32m Normal Started 
pod/kube-storage-version-migrator-operator-7f8b95cf5f-dvvp5 Started container kube-storage-version-migrator-operator openshift-kube-scheduler 32m Warning Unhealthy pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Readiness probe failed: Get "https://10.0.140.6:10259/healthz": dial tcp 10.0.140.6:10259: connect: connection refused openshift-kube-scheduler 32m Warning ProbeError pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Readiness probe error: Get "https://10.0.140.6:10259/healthz": dial tcp 10.0.140.6:10259: connect: connection refused... openshift-config-operator 32m Normal Started pod/openshift-config-operator-67bdbffb68-9f2m6 Started container openshift-config-operator openshift-kube-storage-version-migrator-operator 32m Normal Created pod/kube-storage-version-migrator-operator-7f8b95cf5f-dvvp5 Created container kube-storage-version-migrator-operator openshift-kube-storage-version-migrator-operator 32m Normal Pulled pod/kube-storage-version-migrator-operator-7f8b95cf5f-dvvp5 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fc8e1a30ec145b1e91f862880b9866d48abe8056fe69edd94d760739137b6d4a" in 2.502444263s (2.502456021s including waiting) openshift-machine-api 32m Normal Started pod/cluster-baremetal-operator-cb6794dd9-h8ch4 Started container baremetal-kube-rbac-proxy openshift-machine-api 32m Normal Created pod/cluster-baremetal-operator-cb6794dd9-h8ch4 Created container baremetal-kube-rbac-proxy openshift-machine-api 32m Normal Pulled pod/cluster-baremetal-operator-cb6794dd9-h8ch4 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-machine-api 32m Normal Started pod/cluster-baremetal-operator-cb6794dd9-h8ch4 Started container cluster-baremetal-operator openshift-machine-api 32m Normal Created pod/cluster-baremetal-operator-cb6794dd9-h8ch4 Created container cluster-baremetal-operator openshift-config-operator 32m Normal Pulled pod/openshift-config-operator-67bdbffb68-9f2m6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6eca04bc4045ccf6694e6e0c94453e9c1d8dcbb669a58419603b3c2aab18488b" in 4.15357574s (4.153595737s including waiting) openshift-config-operator 32m Normal Created pod/openshift-config-operator-67bdbffb68-9f2m6 Created container openshift-config-operator openshift-kube-scheduler 32m Normal Pulled pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 32m Normal AddedInterface pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.44/23] from ovn-kubernetes openshift-config-operator 32m Normal LeaderElection configmap/config-operator-lock openshift-config-operator-67bdbffb68-9f2m6_0f185082-83dd-4745-b75c-1393bd7c0f8f became leader openshift-kube-controller-manager 32m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container kube-controller-manager openshift-kube-controller-manager 32m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container kube-controller-manager-recovery-controller openshift-kube-scheduler 32m Normal SandboxChanged pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Pod sandbox changed, it will be killed and re-created. 
openshift-kube-scheduler 32m Normal Started pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Started container pruner openshift-config-operator 32m Warning FastControllerResync deployment/openshift-config-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 32m Normal StaticPodInstallerCompleted pod/installer-8-ip-10-0-140-6.ec2.internal Successfully installed revision 8 openshift-machine-api 32m Normal Pulled pod/cluster-autoscaler-operator-7fcffdb7c8-hswcn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b696ffc14cdc67e31403d1a6308c7448d7970ed7f872ec18fea9c2017029814" in 2.635566335s (2.635577732s including waiting) openshift-config-operator 32m Warning FastControllerResync deployment/openshift-config-operator Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling openshift-config-operator 32m Normal LeaderElection lease/config-operator-lock openshift-config-operator-67bdbffb68-9f2m6_0f185082-83dd-4745-b75c-1393bd7c0f8f became leader openshift-kube-controller-manager 32m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container cluster-policy-controller openshift-kube-scheduler 32m Normal Created pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Created container pruner openshift-kube-controller-manager 32m Normal Killing pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Stopping container kube-controller-manager-cert-syncer openshift-etcd 32m Normal Pulled pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 32m Normal AddedInterface pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.45/23] from ovn-kubernetes openshift-machine-api 32m Normal Started pod/cluster-autoscaler-operator-7fcffdb7c8-hswcn Started container cluster-autoscaler-operator openshift-machine-api 32m Normal Created pod/cluster-autoscaler-operator-7fcffdb7c8-hswcn Created container cluster-autoscaler-operator openshift-kube-controller-manager 32m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" already present on machine openshift-kube-controller-manager 32m Normal Started pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container kube-controller-manager openshift-kube-controller-manager 32m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager openshift-authentication-operator 32m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in pending apiserver-74455c7c5-m45v9 pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-74455c7c5-rpzl9 pod)" openshift-kube-controller-manager 32m Normal SandboxChanged pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Pod sandbox changed, it will be killed and re-created. 
openshift-kube-controller-manager 32m Warning ProbeError pod/kube-controller-manager-guard-ip-10-0-140-6.ec2.internal Readiness probe error: Get "https://10.0.140.6:10257/healthz": dial tcp 10.0.140.6:10257: connect: connection refused... openshift-kube-controller-manager 32m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-controller-manager 32m Warning Unhealthy pod/kube-controller-manager-guard-ip-10-0-140-6.ec2.internal Readiness probe failed: Get "https://10.0.140.6:10257/healthz": dial tcp 10.0.140.6:10257: connect: connection refused openshift-etcd 32m Normal Created pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Created container pruner openshift-kube-scheduler-operator 32m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have 
settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: " to "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: 
pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: mestamp:time.Date(2023, time.March, 21, 12, 41, 6, 12503035, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 41, 6, 12503035, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events is forbidden: User \"system:serviceaccount:openshift-kube-scheduler:localhost-recovery-client\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"' (will not retry!)\nStaticPodsDegraded: W0321 12:41:13.409969 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: secrets is forbidden: User \"system:serviceaccount:openshift-kube-scheduler:localhost-recovery-client\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-scheduler\"\nStaticPodsDegraded: E0321 12:41:13.409990 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets is forbidden: User \"system:serviceaccount:openshift-kube-scheduler:localhost-recovery-client\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-scheduler\"\nStaticPodsDegraded: I0321 12:41:19.924913 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:41:19.924931 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:41:19.924974 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:41:19.924979 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:41:34.426103 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:41:34.426122 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:41:39.836850 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:41:39.836928 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: " openshift-etcd 32m Normal Started pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-controller-manager 32m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 32m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container cluster-policy-controller openshift-kube-controller-manager 32m Normal Pulled pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 32m Normal Created pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Created container kube-controller-manager-cert-syncer openshift-kube-controller-manager 32m Normal Started 
pod/kube-controller-manager-ip-10-0-140-6.ec2.internal Started container cluster-policy-controller openshift-kube-controller-manager-operator 32m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: eference{Kind:\"Pod\", Namespace:\"openshift-kube-controller-manager\", Name:\"kube-controller-manager-ip-10-0-140-6.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 41, 5, 992891680, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 41, 5, 992891680, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": dial tcp [::1]:6443: connect: connection refused'(may retry after sleeping)\nStaticPodsDegraded: I0321 12:41:15.221440 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:41:15.221460 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:41:15.221542 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:41:15.221796 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:41:31.348007 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:41:31.357422 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:41:40.540188 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:41:40.540457 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-etcd-operator 32m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: etcd-all-certs, secrets: 
etcd-all-certs-7]\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-kube-apiserver 32m Normal StaticPodInstallerCompleted pod/installer-12-ip-10-0-140-6.ec2.internal Successfully installed revision 12 openshift-kube-controller-manager-operator 32m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: eference{Kind:\"Pod\", Namespace:\"openshift-kube-controller-manager\", Name:\"kube-controller-manager-ip-10-0-140-6.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}, Reason:\"FastControllerResync\", Message:\"Controller \\\"CertSyncController\\\" resync interval is set to 0s which might lead to client request throttling\", Source:v1.EventSource{Component:\"cert-syncer-certsynccontroller\", Host:\"\"}, FirstTimestamp:time.Date(2023, time.March, 21, 12, 41, 5, 992891680, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 41, 5, 992891680, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'Post \"https://localhost:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": dial tcp [::1]:6443: connect: connection refused'(may retry after sleeping)\nStaticPodsDegraded: I0321 12:41:15.221440 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:41:15.221460 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:41:15.221542 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:41:15.221796 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:41:31.348007 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:41:31.357422 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:41:40.540188 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:41:40.540457 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-140-6.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-apiserver 32m Warning ProbeError pod/apiserver-5f568869f-b9bw5 Readiness probe error: HTTP 
probe failed with statuscode: 500... openshift-apiserver 32m Warning Unhealthy pod/apiserver-5f568869f-b9bw5 Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-cluster-csi-drivers 32m Normal Created pod/aws-ebs-csi-driver-node-s4chb Created container csi-node-driver-registrar openshift-ovn-kubernetes 32m Normal Created pod/ovnkube-node-zzdfn Created container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 32m Normal Started pod/ovnkube-node-zzdfn Started container kube-rbac-proxy-ovn-metrics openshift-cluster-csi-drivers 32m Normal Started pod/aws-ebs-csi-driver-node-s4chb Started container csi-node-driver-registrar openshift-cluster-csi-drivers 32m Normal Pulling pod/aws-ebs-csi-driver-node-s4chb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" openshift-machine-config-operator 32m Normal Created pod/machine-config-daemon-vlfmm Created container oauth-proxy openshift-machine-config-operator 32m Normal Started pod/machine-config-daemon-vlfmm Started container oauth-proxy openshift-multus 32m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" in 25.613345208s (25.61335239s including waiting) openshift-ovn-kubernetes 32m Normal Pulled pod/ovnkube-node-zzdfn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-cluster-csi-drivers 32m Normal Created pod/aws-ebs-csi-driver-node-s4chb Created container csi-liveness-probe openshift-cluster-csi-drivers 32m Normal Started pod/aws-ebs-csi-driver-node-s4chb Started container csi-liveness-probe openshift-multus 32m Normal Pulling pod/multus-additional-cni-plugins-4qmk6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" openshift-multus 32m Normal Started pod/multus-additional-cni-plugins-4qmk6 Started container cni-plugins openshift-multus 32m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container cni-plugins openshift-cluster-csi-drivers 32m Normal Pulled pod/aws-ebs-csi-driver-node-s4chb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" in 2.131946108s (2.131962458s including waiting) openshift-ovn-kubernetes 32m Normal Created pod/ovnkube-node-zzdfn Created container ovnkube-node openshift-ovn-kubernetes 32m Normal Started pod/ovnkube-node-zzdfn Started container ovnkube-node openshift-ovn-kubernetes 32m Normal Started pod/ovnkube-node-zzdfn Started container ovn-controller openshift-ovn-kubernetes 32m Normal Created pod/ovnkube-node-zzdfn Created container ovn-controller openshift-multus 32m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container bond-cni-plugin openshift-multus 32m Normal Pulling pod/multus-additional-cni-plugins-4qmk6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" openshift-multus 32m Normal Started pod/multus-additional-cni-plugins-4qmk6 Started container bond-cni-plugin openshift-multus 32m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Successfully pulled image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" in 654.602357ms (654.615412ms including waiting) openshift-ovn-kubernetes 32m Normal Pulled pod/ovnkube-node-zzdfn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-network-diagnostics 32m Warning FailedCreatePodSandBox pod/network-check-target-v468t Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-v468t_openshift-network-diagnostics_c4ea31f6-ffde-4f33-82ca-18f1f2160161_0(031b3c3c9681273510a80955f7c9679b102e086339313d8d3acb9a5a43e263a3): error adding pod openshift-network-diagnostics_network-check-target-v468t to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-network-diagnostics/network-check-target-v468t/c4ea31f6-ffde-4f33-82ca-18f1f2160161]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition openshift-monitoring 32m Warning FailedCreatePodSandBox pod/sre-dns-latency-exporter-hm6bk Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sre-dns-latency-exporter-hm6bk_openshift-monitoring_01d76d14-ee4d-4677-a738-93fc86570731_0(026db2c160abe5be4fbd2fd708fade4e364d9370dcb9f4d30fc9d26ca3980dbe): error adding pod openshift-monitoring_sre-dns-latency-exporter-hm6bk to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-monitoring/sre-dns-latency-exporter-hm6bk/01d76d14-ee4d-4677-a738-93fc86570731]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition openshift-multus 32m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" in 939.590051ms (939.605198ms including waiting) openshift-ingress-canary 32m Warning FailedCreatePodSandBox pod/ingress-canary-zwpz2 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-zwpz2_openshift-ingress-canary_3d85c6bf-1b2d-41d8-94ba-5e5b6c23aae0_0(479ef4d7af920112bffb4b12c87577bd5e941e7b64e945d6c6f03a784fe3b66a): error adding pod openshift-ingress-canary_ingress-canary-zwpz2 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-ingress-canary/ingress-canary-zwpz2/3d85c6bf-1b2d-41d8-94ba-5e5b6c23aae0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition openshift-multus 32m Normal Started pod/multus-additional-cni-plugins-4qmk6 Started container routeoverride-cni openshift-multus 32m Warning FailedCreatePodSandBox pod/network-metrics-daemon-lbxjr Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-lbxjr_openshift-multus_76919757-e655-408b-95dd-275673f2c388_0(ba548cf75902793486816bb7fbd9a7f2b89f834a27365efe988a28940f6e201d): error adding pod openshift-multus_network-metrics-daemon-lbxjr to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-multus/network-metrics-daemon-lbxjr/76919757-e655-408b-95dd-275673f2c388]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition openshift-multus 32m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container routeoverride-cni openshift-multus 32m Normal Pulling pod/multus-additional-cni-plugins-4qmk6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" openshift-cloud-credential-operator 32m Normal Killing pod/pod-identity-webhook-b645775d7-bhp9j Stopping container pod-identity-webhook default 32m Normal NodeSchedulable node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal status is now: NodeSchedulable openshift-multus 32m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container whereabouts-cni-bincopy openshift-multus 32m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" in 1.88092929s (1.880937573s including waiting) openshift-multus 32m Normal Started pod/multus-additional-cni-plugins-4qmk6 Started container whereabouts-cni-bincopy openshift-multus 32m Normal Started pod/multus-additional-cni-plugins-4qmk6 Started container whereabouts-cni openshift-multus 32m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine openshift-multus 32m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container whereabouts-cni openshift-multus 32m Normal Pulled pod/multus-additional-cni-plugins-4qmk6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine openshift-multus 32m Normal Created pod/multus-additional-cni-plugins-4qmk6 Created container kube-multus-additional-cni-plugins default 32m Normal NodeDone node/ip-10-0-187-75.ec2.internal Setting node ip-10-0-187-75.ec2.internal, currentConfig rendered-worker-c37c7a9e551f049d382df8406f11fe9b to Done default 32m Normal ConfigDriftMonitorStarted node/ip-10-0-187-75.ec2.internal Config Drift Monitor started, watching against rendered-worker-c37c7a9e551f049d382df8406f11fe9b default 32m Normal SetDesiredConfig machineconfigpool/worker Targeted node ip-10-0-195-121.ec2.internal to config rendered-worker-c37c7a9e551f049d382df8406f11fe9b default 32m Normal Uncordon node/ip-10-0-187-75.ec2.internal Update completed for config rendered-worker-c37c7a9e551f049d382df8406f11fe9b and node has 
been uncordoned openshift-monitoring 31m Normal Pulling pod/prometheus-adapter-8467ff79fd-xg97t Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbc27b4ea8b6ed06d8490b60e95b36bda21f09f15ec3f25f901c8dffc32292d9" openshift-controller-manager 31m Normal LeaderElection configmap/openshift-master-controllers controller-manager-66b447958d-6mqfl became leader openshift-controller-manager 31m Normal LeaderElection lease/openshift-master-controllers controller-manager-66b447958d-6mqfl became leader openshift-monitoring 31m Normal AddedInterface pod/prometheus-adapter-8467ff79fd-xg97t Add eth0 [10.129.2.16/23] from ovn-kubernetes default 31m Normal ConfigDriftMonitorStopped node/ip-10-0-195-121.ec2.internal Config Drift Monitor stopped openshift-monitoring 31m Normal Pulling pod/thanos-querier-6566ccfdd9-vppqt Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" default 31m Normal Cordon node/ip-10-0-195-121.ec2.internal Cordoned node to apply update openshift-monitoring 31m Normal AddedInterface pod/thanos-querier-6566ccfdd9-vppqt Add eth0 [10.129.2.17/23] from ovn-kubernetes default 31m Normal Drain node/ip-10-0-195-121.ec2.internal Draining node to update config. openshift-monitoring 31m Normal Created pod/prometheus-adapter-8467ff79fd-xg97t Created container prometheus-adapter openshift-monitoring 31m Normal Pulled pod/prometheus-adapter-8467ff79fd-xg97t Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbc27b4ea8b6ed06d8490b60e95b36bda21f09f15ec3f25f901c8dffc32292d9" in 1.655920033s (1.655934899s including waiting) openshift-monitoring 31m Normal Pulling pod/prometheus-operator-admission-webhook-5c9b9d98cc-dvsqk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e2218fd1d860bdb72a28d8fc34e1d5e7c3674bf1d0005583d70800dcd79d2" openshift-ingress 31m Normal Pulling pod/router-default-7cf4c94d4-klqtt Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd" openshift-ingress 31m Normal AddedInterface pod/router-default-7cf4c94d4-klqtt Add eth0 [10.129.2.18/23] from ovn-kubernetes openshift-monitoring 31m Normal Started pod/prometheus-adapter-8467ff79fd-xg97t Started container prometheus-adapter openshift-monitoring 31m Normal AddedInterface pod/prometheus-operator-admission-webhook-5c9b9d98cc-dvsqk Add eth0 [10.129.2.19/23] from ovn-kubernetes openshift-monitoring 31m Normal Started pod/thanos-querier-6566ccfdd9-vppqt Started container oauth-proxy openshift-monitoring 31m Normal Killing pod/telemeter-client-5c9599c744-827bg Stopping container telemeter-client openshift-ingress 31m Normal Started pod/router-default-7cf4c94d4-klqtt Started container router openshift-monitoring 31m Normal Started pod/prometheus-operator-admission-webhook-5c9b9d98cc-dvsqk Started container prometheus-operator-admission-webhook openshift-monitoring 31m Normal Created pod/prometheus-operator-admission-webhook-5c9b9d98cc-dvsqk Created container prometheus-operator-admission-webhook openshift-monitoring 31m Normal Pulled pod/prometheus-operator-admission-webhook-5c9b9d98cc-dvsqk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e2218fd1d860bdb72a28d8fc34e1d5e7c3674bf1d0005583d70800dcd79d2" in 2.405206436s (2.405217995s including waiting) openshift-ingress 31m Normal Created pod/router-default-7cf4c94d4-klqtt Created container router openshift-monitoring 31m Normal 
Killing pod/telemeter-client-5c9599c744-827bg Stopping container reload openshift-monitoring 31m Normal Pulled pod/thanos-querier-6566ccfdd9-vppqt Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 31m Normal Pulled pod/thanos-querier-6566ccfdd9-vppqt Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 31m Normal SuccessfulCreate replicaset/openshift-state-metrics-8757cbbb4 Created pod: openshift-state-metrics-8757cbbb4-gqxjm openshift-image-registry 31m Normal Killing pod/image-registry-55b7d998b9-479fl Stopping container registry openshift-monitoring 31m Normal Killing pod/openshift-state-metrics-8757cbbb4-lk7sd Stopping container openshift-state-metrics openshift-ingress 31m Normal Pulled pod/router-default-7cf4c94d4-klqtt Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd" in 2.335193484s (2.335201027s including waiting) openshift-monitoring 31m Normal Killing pod/openshift-state-metrics-8757cbbb4-lk7sd Stopping container kube-rbac-proxy-self openshift-monitoring 31m Normal Killing pod/telemeter-client-5c9599c744-827bg Stopping container kube-rbac-proxy openshift-monitoring 31m Normal Killing pod/openshift-state-metrics-8757cbbb4-lk7sd Stopping container kube-rbac-proxy-main openshift-monitoring 31m Normal Pulled pod/thanos-querier-6566ccfdd9-vppqt Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" in 3.483925518s (3.483935472s including waiting) openshift-monitoring 31m Normal Killing pod/kube-state-metrics-7d7b86bb68-l675w Stopping container kube-state-metrics openshift-monitoring 31m Normal Started pod/thanos-querier-6566ccfdd9-vppqt Started container thanos-query openshift-multus 31m Normal Pulling pod/network-metrics-daemon-lbxjr Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" openshift-multus 31m Normal AddedInterface pod/network-metrics-daemon-lbxjr Add eth0 [10.129.2.5/23] from ovn-kubernetes openshift-monitoring 31m Normal Pulling pod/thanos-querier-6566ccfdd9-vppqt Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" openshift-monitoring 31m Normal Started pod/thanos-querier-6566ccfdd9-vppqt Started container kube-rbac-proxy openshift-monitoring 31m Normal Created pod/thanos-querier-6566ccfdd9-vppqt Created container kube-rbac-proxy openshift-monitoring 31m Normal Killing pod/kube-state-metrics-7d7b86bb68-l675w Stopping container kube-rbac-proxy-main openshift-monitoring 31m Normal Created pod/thanos-querier-6566ccfdd9-vppqt Created container thanos-query openshift-monitoring 31m Normal Killing pod/kube-state-metrics-7d7b86bb68-l675w Stopping container kube-rbac-proxy-self openshift-monitoring 31m Normal Created pod/thanos-querier-6566ccfdd9-vppqt Created container oauth-proxy openshift-monitoring 31m Normal SuccessfulCreate replicaset/kube-state-metrics-7d7b86bb68 Created pod: kube-state-metrics-7d7b86bb68-kpmhh openshift-monitoring 31m Normal Killing pod/prometheus-operator-7f64545d8-j6vlm Stopping container kube-rbac-proxy openshift-monitoring 31m Normal 
SuccessfulCreate replicaset/prometheus-operator-7f64545d8 Created pod: prometheus-operator-7f64545d8-7h6fd openshift-monitoring 31m Normal Pulling pod/sre-dns-latency-exporter-hm6bk Pulling image "quay.io/app-sre/managed-prometheus-exporter-base:latest" openshift-monitoring 31m Normal AddedInterface pod/sre-dns-latency-exporter-hm6bk Add eth0 [10.129.2.4/23] from ovn-kubernetes openshift-network-diagnostics 31m Normal AddedInterface pod/network-check-target-v468t Add eth0 [10.129.2.6/23] from ovn-kubernetes openshift-monitoring 31m Normal SuccessfulCreate replicaset/telemeter-client-5c9599c744 Created pod: telemeter-client-5c9599c744-rlt2c openshift-monitoring 31m Normal Killing pod/prometheus-operator-7f64545d8-j6vlm Stopping container prometheus-operator openshift-image-registry 31m Normal SuccessfulCreate replicaset/image-registry-55b7d998b9 Created pod: image-registry-55b7d998b9-pq262 openshift-monitoring 31m Normal AddedInterface pod/openshift-state-metrics-8757cbbb4-gqxjm Add eth0 [10.129.2.22/23] from ovn-kubernetes openshift-monitoring 31m Normal Pulling pod/kube-state-metrics-7d7b86bb68-kpmhh Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48772f8b25db5f426c168026f3e89252389ea1c6bf3e508f670bffb24ee6e8e7" openshift-monitoring 31m Normal Pulled pod/openshift-state-metrics-8757cbbb4-gqxjm Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 31m Normal AddedInterface pod/kube-state-metrics-7d7b86bb68-kpmhh Add eth0 [10.129.2.23/23] from ovn-kubernetes openshift-monitoring 31m Normal AddedInterface pod/prometheus-operator-7f64545d8-7h6fd Add eth0 [10.129.2.26/23] from ovn-kubernetes openshift-monitoring 31m Normal Pulling pod/prometheus-operator-7f64545d8-7h6fd Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0c9dc9888697e244d61cd89f8fe5a61dcb09dc100889be738db21b2fc5bbf7" openshift-apiserver 31m Warning Unhealthy pod/apiserver-5f568869f-b9bw5 Readiness probe failed: Get "https://10.130.0.42:8443/readyz": dial tcp 10.130.0.42:8443: connect: connection refused openshift-image-registry 31m Normal AddedInterface pod/image-registry-55b7d998b9-pq262 Add eth0 [10.129.2.25/23] from ovn-kubernetes openshift-monitoring 31m Normal Pulling pod/telemeter-client-5c9599c744-rlt2c Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:942a1ba76f95d02ba681afbb7d1aea28d457fb2a9d967cacc2233bb243588990" openshift-monitoring 31m Normal AddedInterface pod/telemeter-client-5c9599c744-rlt2c Add eth0 [10.129.2.24/23] from ovn-kubernetes openshift-monitoring 31m Normal Started pod/openshift-state-metrics-8757cbbb4-gqxjm Started container kube-rbac-proxy-main openshift-monitoring 31m Normal Pulled pod/openshift-state-metrics-8757cbbb4-gqxjm Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-apiserver-operator 31m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:41:12 +0000 UTC is still not ready" openshift-apiserver 31m Warning ProbeError 
pod/apiserver-5f568869f-b9bw5 Readiness probe error: Get "https://10.130.0.42:8443/readyz": dial tcp 10.130.0.42:8443: connect: connection refused... openshift-image-registry 31m Normal Pulled pod/image-registry-55b7d998b9-pq262 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" already present on machine openshift-monitoring 31m Normal Created pod/openshift-state-metrics-8757cbbb4-gqxjm Created container kube-rbac-proxy-main openshift-network-diagnostics 31m Normal Pulling pod/network-check-target-v468t Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" openshift-image-registry 31m Normal Started pod/image-registry-55b7d998b9-pq262 Started container registry openshift-monitoring 31m Normal Started pod/openshift-state-metrics-8757cbbb4-gqxjm Started container kube-rbac-proxy-self openshift-monitoring 31m Normal Created pod/openshift-state-metrics-8757cbbb4-gqxjm Created container kube-rbac-proxy-self openshift-multus 31m Normal Started pod/network-metrics-daemon-lbxjr Started container network-metrics-daemon openshift-multus 31m Normal Created pod/network-metrics-daemon-lbxjr Created container network-metrics-daemon openshift-monitoring 31m Normal Pulled pod/thanos-querier-6566ccfdd9-vppqt Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" in 2.194426145s (2.194433827s including waiting) openshift-multus 31m Normal Pulled pod/network-metrics-daemon-lbxjr Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" in 2.190304237s (2.190316559s including waiting) openshift-monitoring 31m Normal Started pod/thanos-querier-6566ccfdd9-vppqt Started container prom-label-proxy openshift-monitoring 31m Normal Pulled pod/thanos-querier-6566ccfdd9-vppqt Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-image-registry 31m Normal Created pod/image-registry-55b7d998b9-pq262 Created container registry openshift-monitoring 31m Normal Created pod/thanos-querier-6566ccfdd9-vppqt Created container prom-label-proxy openshift-monitoring 31m Normal Pulling pod/openshift-state-metrics-8757cbbb4-gqxjm Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:907363827442bc34c33be580ea3ac30198ca65f46a95eb80b2c5255e24d173f3" openshift-monitoring 31m Normal Started pod/thanos-querier-6566ccfdd9-vppqt Started container kube-rbac-proxy-metrics openshift-monitoring 31m Normal Created pod/thanos-querier-6566ccfdd9-vppqt Created container kube-rbac-proxy-metrics openshift-monitoring 31m Normal Created pod/thanos-querier-6566ccfdd9-vppqt Created container kube-rbac-proxy-rules openshift-ingress-canary 31m Normal Pulling pod/ingress-canary-zwpz2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" openshift-ingress-canary 31m Normal AddedInterface pod/ingress-canary-zwpz2 Add eth0 [10.129.2.9/23] from ovn-kubernetes openshift-monitoring 31m Normal Started pod/thanos-querier-6566ccfdd9-vppqt Started container kube-rbac-proxy-rules openshift-multus 31m Normal Created pod/network-metrics-daemon-lbxjr Created container kube-rbac-proxy openshift-multus 31m Normal 
Started pod/network-metrics-daemon-lbxjr Started container kube-rbac-proxy openshift-monitoring 31m Normal Pulled pod/thanos-querier-6566ccfdd9-vppqt Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-multus 31m Normal Pulled pod/network-metrics-daemon-lbxjr Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ingress 31m Normal Killing pod/router-default-7cf4c94d4-zs7xj Stopping container router openshift-ingress 31m Normal SuccessfulCreate replicaset/router-default-7cf4c94d4 Created pod: router-default-7cf4c94d4-tqmcb openshift-monitoring 31m Normal Killing pod/prometheus-operator-admission-webhook-5c9b9d98cc-4mv5m Stopping container prometheus-operator-admission-webhook openshift-kube-scheduler 31m Warning BackOff pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Back-off restarting failed container wait-for-host-port in pod openshift-kube-scheduler-ip-10-0-140-6.ec2.internal_openshift-kube-scheduler(160da36d3ffc2889ad115aa75251cac6) openshift-monitoring 31m Normal SuccessfulCreate replicaset/prometheus-operator-admission-webhook-5c9b9d98cc Created pod: prometheus-operator-admission-webhook-5c9b9d98cc-9qkgr openshift-oauth-apiserver 31m Warning Unhealthy pod/apiserver-74455c7c5-rpzl9 Readiness probe failed: Get "https://10.130.0.67:8443/readyz": dial tcp 10.130.0.67:8443: connect: connection refused default 31m Normal NodeNotSchedulable node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal status is now: NodeNotSchedulable openshift-monitoring 31m Normal SuccessfulCreate replicaset/thanos-querier-6566ccfdd9 Created pod: thanos-querier-6566ccfdd9-lkbh6 openshift-monitoring 31m Normal Killing pod/thanos-querier-6566ccfdd9-7cwhk Stopping container kube-rbac-proxy-metrics openshift-monitoring 31m Normal Killing pod/thanos-querier-6566ccfdd9-7cwhk Stopping container thanos-query openshift-network-diagnostics 31m Normal Pulled pod/network-check-target-v468t Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" in 8.582990106s (8.582999173s including waiting) openshift-monitoring 31m Normal Killing pod/thanos-querier-6566ccfdd9-7cwhk Stopping container prom-label-proxy openshift-monitoring 31m Normal Pulled pod/kube-state-metrics-7d7b86bb68-kpmhh Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:48772f8b25db5f426c168026f3e89252389ea1c6bf3e508f670bffb24ee6e8e7" in 7.719383588s (7.719392864s including waiting) openshift-monitoring 31m Normal Killing pod/thanos-querier-6566ccfdd9-7cwhk Stopping container kube-rbac-proxy openshift-monitoring 31m Normal Pulled pod/prometheus-operator-7f64545d8-7h6fd Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f0c9dc9888697e244d61cd89f8fe5a61dcb09dc100889be738db21b2fc5bbf7" in 7.952844943s (7.952851751s including waiting) openshift-monitoring 31m Normal Killing pod/thanos-querier-6566ccfdd9-7cwhk Stopping container kube-rbac-proxy-rules openshift-monitoring 31m Normal Pulled pod/openshift-state-metrics-8757cbbb4-gqxjm Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:907363827442bc34c33be580ea3ac30198ca65f46a95eb80b2c5255e24d173f3" in 7.519518378s (7.519528139s including waiting) openshift-monitoring 31m Normal 
Killing pod/thanos-querier-6566ccfdd9-7cwhk Stopping container oauth-proxy openshift-monitoring 31m Normal Started pod/telemeter-client-5c9599c744-rlt2c Started container telemeter-client openshift-monitoring 31m Normal Created pod/openshift-state-metrics-8757cbbb4-gqxjm Created container openshift-state-metrics openshift-monitoring 31m Normal Created pod/prometheus-operator-7f64545d8-7h6fd Created container prometheus-operator openshift-ingress-canary 31m Normal Pulled pod/ingress-canary-zwpz2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" in 7.328697465s (7.328717296s including waiting) openshift-monitoring 31m Normal Created pod/sre-dns-latency-exporter-hm6bk Created container main openshift-monitoring 31m Normal Started pod/openshift-state-metrics-8757cbbb4-gqxjm Started container openshift-state-metrics openshift-ingress-canary 31m Normal Created pod/ingress-canary-zwpz2 Created container serve-healthcheck-canary openshift-monitoring 31m Normal Pulled pod/prometheus-operator-7f64545d8-7h6fd Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-network-diagnostics 31m Normal Started pod/network-check-target-v468t Started container network-check-target-container openshift-monitoring 31m Normal Created pod/kube-state-metrics-7d7b86bb68-kpmhh Created container kube-state-metrics openshift-monitoring 31m Normal Pulled pod/kube-state-metrics-7d7b86bb68-kpmhh Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 31m Normal Pulled pod/telemeter-client-5c9599c744-rlt2c Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:942a1ba76f95d02ba681afbb7d1aea28d457fb2a9d967cacc2233bb243588990" in 8.560271451s (8.560280787s including waiting) openshift-monitoring 31m Normal Started pod/kube-state-metrics-7d7b86bb68-kpmhh Started container kube-state-metrics openshift-ingress-canary 31m Normal Started pod/ingress-canary-zwpz2 Started container serve-healthcheck-canary openshift-monitoring 31m Normal Created pod/telemeter-client-5c9599c744-rlt2c Created container telemeter-client openshift-monitoring 31m Normal Pulling pod/telemeter-client-5c9599c744-rlt2c Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" openshift-monitoring 31m Normal Started pod/prometheus-operator-7f64545d8-7h6fd Started container prometheus-operator openshift-network-diagnostics 31m Normal Created pod/network-check-target-v468t Created container network-check-target-container openshift-monitoring 31m Normal Pulled pod/sre-dns-latency-exporter-hm6bk Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-base:latest" in 9.635192186s (9.635199927s including waiting) openshift-monitoring 31m Normal Started pod/kube-state-metrics-7d7b86bb68-kpmhh Started container kube-rbac-proxy-self openshift-monitoring 31m Normal Created pod/prometheus-operator-7f64545d8-7h6fd Created container kube-rbac-proxy openshift-monitoring 31m Normal Created pod/kube-state-metrics-7d7b86bb68-kpmhh Created container kube-rbac-proxy-main openshift-monitoring 31m Normal Created pod/kube-state-metrics-7d7b86bb68-kpmhh Created container kube-rbac-proxy-self openshift-monitoring 31m Normal 
Started pod/sre-dns-latency-exporter-hm6bk Started container main openshift-oauth-apiserver 31m Warning ProbeError pod/apiserver-74455c7c5-rpzl9 Readiness probe error: Get "https://10.130.0.67:8443/readyz": dial tcp 10.130.0.67:8443: connect: connection refused... openshift-monitoring 31m Normal Started pod/kube-state-metrics-7d7b86bb68-kpmhh Started container kube-rbac-proxy-main openshift-monitoring 31m Normal Started pod/prometheus-operator-7f64545d8-7h6fd Started container kube-rbac-proxy openshift-monitoring 31m Normal Pulled pod/kube-state-metrics-7d7b86bb68-kpmhh Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 31m Normal Started pod/telemeter-client-5c9599c744-rlt2c Started container reload openshift-monitoring 31m Normal Created pod/telemeter-client-5c9599c744-rlt2c Created container kube-rbac-proxy openshift-monitoring 31m Normal Started pod/telemeter-client-5c9599c744-rlt2c Started container kube-rbac-proxy openshift-monitoring 31m Normal Pulled pod/telemeter-client-5c9599c744-rlt2c Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" in 1.105366098s (1.105374825s including waiting) openshift-monitoring 31m Normal Created pod/telemeter-client-5c9599c744-rlt2c Created container reload openshift-monitoring 31m Normal Pulled pod/telemeter-client-5c9599c744-rlt2c Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-kube-scheduler 31m Normal Started pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Started container wait-for-host-port openshift-kube-scheduler 31m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 31m Normal Created pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Created container wait-for-host-port openshift-kube-scheduler 31m Warning ProbeError pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal Readiness probe error: Get "https://10.0.140.6:10259/healthz": dial tcp 10.0.140.6:10259: connect: connection refused... 
openshift-kube-scheduler 31m Warning Unhealthy pod/openshift-kube-scheduler-guard-ip-10-0-140-6.ec2.internal Readiness probe failed: Get "https://10.0.140.6:10259/healthz": dial tcp 10.0.140.6:10259: connect: connection refused openshift-kube-apiserver-operator 31m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:41:12 +0000 UTC is still not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:40:57 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:41:12 +0000 UTC is still not ready" openshift-kube-scheduler 31m Warning FastControllerResync pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver 31m Warning ProbeError pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal Readiness probe error: HTTP probe failed with statuscode: 500... openshift-kube-scheduler-operator 31m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: 
StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: mestamp:time.Date(2023, time.March, 21, 12, 41, 6, 12503035, time.Local), LastTimestamp:time.Date(2023, time.March, 21, 12, 41, 6, 12503035, time.Local), Count:1, Type:\"Warning\", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:\"\", Related:(*v1.ObjectReference)(nil), ReportingController:\"\", ReportingInstance:\"\"}': 'events is forbidden: User \"system:serviceaccount:openshift-kube-scheduler:localhost-recovery-client\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"' (will not retry!)\nStaticPodsDegraded: W0321 12:41:13.409969 1 reflector.go:424] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Secret: secrets is forbidden: User \"system:serviceaccount:openshift-kube-scheduler:localhost-recovery-client\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-scheduler\"\nStaticPodsDegraded: E0321 12:41:13.409990 1 reflector.go:140] k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets is forbidden: User \"system:serviceaccount:openshift-kube-scheduler:localhost-recovery-client\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-kube-scheduler\"\nStaticPodsDegraded: I0321 12:41:19.924913 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:41:19.924931 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:41:19.924974 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:41:19.924979 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:41:34.426103 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:41:34.426122 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:41:39.836850 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:41:39.836928 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: 
pod/openshift-kube-scheduler-ip-10-0-140-6.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: " to "NodeInstallerDegraded: 1 nodes are failing on revision 9:\nNodeInstallerDegraded: installer: ing) (len=1) \"9\",\nNodeInstallerDegraded: NodeName: (string) \"\",\nNodeInstallerDegraded: Namespace: (string) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) ,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) ,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0321 12:37:22.118786 1 cmd.go:410] Getting controller reference for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.207089 1 cmd.go:423] Waiting for installer revisions to settle for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:22.210497 1 cmd.go:515] Waiting additional period after revisions have settled for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: I0321 12:37:33.595654 1 cmd.go:124] Received SIGTERM or SIGINT signal, shutting down the process.\nNodeInstallerDegraded: I0321 12:37:52.210904 1 cmd.go:521] Getting installer pods for node ip-10-0-140-6.ec2.internal\nNodeInstallerDegraded: F0321 12:37:52.211019 1 cmd.go:106] client rate limiter Wait returned an error: context canceled\nNodeInstallerDegraded: " openshift-kube-apiserver 31m Warning Unhealthy pod/kube-apiserver-guard-ip-10-0-140-6.ec2.internal Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-image-registry 31m Warning Unhealthy pod/image-registry-55b7d998b9-479fl Readiness probe failed: Get "https://10.130.2.8:5000/healthz": dial tcp 10.130.2.8:5000: connect: connection refused openshift-kube-controller-manager 31m Warning ClusterInfrastructureStatus 
pod/kube-controller-manager-ip-10-0-140-6.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-image-registry 31m Warning ProbeError pod/image-registry-55b7d998b9-479fl Readiness probe error: Get "https://10.130.2.8:5000/healthz": dial tcp 10.130.2.8:5000: connect: connection refused... openshift-authentication-operator 31m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-74455c7c5-rpzl9 pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()" openshift-monitoring 31m Normal SuccessfulCreate replicaset/prometheus-adapter-8467ff79fd Created pod: prometheus-adapter-8467ff79fd-cth85 openshift-monitoring 31m Warning ProbeError pod/prometheus-adapter-8467ff79fd-szs4l Readiness probe error: Get "https://10.130.2.13:6443/readyz": dial tcp 10.130.2.13:6443: connect: connection refused... openshift-monitoring 31m Warning Unhealthy pod/prometheus-adapter-8467ff79fd-szs4l Readiness probe failed: Get "https://10.130.2.13:6443/readyz": dial tcp 10.130.2.13:6443: connect: connection refused openshift-monitoring 31m Normal Killing pod/prometheus-adapter-8467ff79fd-szs4l Stopping container prometheus-adapter openshift-kube-controller-manager 31m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-140-6.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-apiserver-operator 31m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-5f568869f-b9bw5 pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" openshift-ingress 31m Warning ProbeError pod/router-default-7cf4c94d4-zs7xj Readiness probe error: HTTP probe failed with statuscode: 500... 
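To see which Warning reasons dominate a saved copy of this dump (ProbeError, Unhealthy, FailedScheduling, Failed, ...), a minimal stdlib-only sketch; the file name events.txt is hypothetical, and the tally is approximate because a few Normal events embed the word "Warning" inside their messages:

    # Minimal sketch: count the reason token that follows the literal word
    # "Warning" in a saved plain-text event dump. Purely illustrative.
    import re
    from collections import Counter

    reasons = Counter()
    with open("events.txt", encoding="utf-8") as fh:   # hypothetical file name
        text = fh.read()

    for match in re.finditer(r"\bWarning\s+(\w+)\b", text):
        reasons[match.group(1)] += 1

    for reason, count in reasons.most_common(10):
        print(f"{count:5d}  {reason}")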
openshift-ingress 31m Warning Unhealthy pod/router-default-7cf4c94d4-zs7xj Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-kube-controller-manager-operator 31m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 7; 1 nodes are at revision 8" to "NodeInstallerProgressing: 1 nodes are at revision 7; 2 nodes are at revision 8",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 7; 1 nodes are at revision 8" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 7; 2 nodes are at revision 8" openshift-kube-controller-manager-operator 31m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-140-6.ec2.internal" from revision 7 to 8 because static pod is ready default 30m Normal OSUpdateStaged node/ip-10-0-197-197.ec2.internal Changes to OS staged default 30m Normal PendingConfig node/ip-10-0-197-197.ec2.internal Written pending config rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 default 30m Normal OSUpdateStarted node/ip-10-0-197-197.ec2.internal default 30m Normal Reboot node/ip-10-0-197-197.ec2.internal Node will reboot into config rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 openshift-kube-controller-manager-operator 30m Normal NodeTargetRevisionChanged deployment/kube-controller-manager-operator Updating node "ip-10-0-197-197.ec2.internal" from revision 7 to 8 because node ip-10-0-197-197.ec2.internal with revision 7 is the oldest openshift-kube-controller-manager-operator 30m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/installer-8-ip-10-0-197-197.ec2.internal -n openshift-kube-controller-manager because it was missing kube-system 30m Normal LeaderElection configmap/kube-controller-manager ip-10-0-239-132_6da329b0-e11f-4c1a-a70b-1fc5ea1bd5f3 became leader kube-system 30m Normal LeaderElection lease/kube-controller-manager ip-10-0-239-132_6da329b0-e11f-4c1a-a70b-1fc5ea1bd5f3 became leader openshift-kube-scheduler-operator 30m Normal NodeCurrentRevisionChanged deployment/openshift-kube-scheduler-operator Updated node "ip-10-0-140-6.ec2.internal" from revision 7 to 9 because static pod is ready openshift-kube-scheduler-operator 30m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready"),Progressing message changed from "NodeInstallerProgressing: 2 nodes are at revision 7; 1 nodes are at revision 8; 0 nodes have achieved new revision 9" to "NodeInstallerProgressing: 1 nodes are at revision 7; 1 nodes are at revision 8; 1 nodes are at revision 9",Available message changed from "StaticPodsAvailable: 3 nodes are active; 2 nodes are at revision 7; 1 nodes are at revision 8; 0 nodes have achieved new revision 9" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 7; 1 nodes are at revision 8; 1 nodes are at revision 9" default 30m Warning ResolutionFailed namespace/openshift-ocm-agent-operator constraints not satisfiable: subscription ocm-agent-operator exists, no operators found from catalog ocm-agent-operator-registry in namespace openshift-ocm-agent-operator referenced by subscription ocm-agent-operator openshift-kube-scheduler-operator 30m Normal PodCreated 
deployment/openshift-kube-scheduler-operator Created Pod/revision-pruner-9-ip-10-0-197-197.ec2.internal -n openshift-kube-scheduler because it was missing openshift-kube-scheduler-operator 30m Normal NodeTargetRevisionChanged deployment/openshift-kube-scheduler-operator Updating node "ip-10-0-197-197.ec2.internal" from revision 7 to 9 because node ip-10-0-197-197.ec2.internal with revision 7 is the oldest openshift-network-diagnostics 30m Warning ConnectivityOutageDetected node/ip-10-0-160-152.ec2.internal Connectivity outage detected: kubernetes-default-service-cluster-0: failed to establish a TCP connection to 172.30.0.1:443: dial tcp 172.30.0.1:443: i/o timeout openshift-network-diagnostics 30m Normal ConnectivityRestored node/ip-10-0-160-152.ec2.internal Connectivity restored after 59.999611473s: kubernetes-default-service-cluster-0: tcp connection to 172.30.0.1:443 succeeded openshift-network-diagnostics 30m Warning ConnectivityOutageDetected node/ip-10-0-160-152.ec2.internal Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-140-6: failed to establish a TCP connection to 10.0.140.6:6443: dial tcp 10.0.140.6:6443: i/o timeout openshift-network-diagnostics 30m Normal ConnectivityRestored node/ip-10-0-160-152.ec2.internal Connectivity restored after 59.99989579s: kubernetes-apiserver-endpoint-ip-10-0-140-6: tcp connection to 10.0.140.6:6443 succeeded openshift-network-diagnostics 30m Warning ConnectivityOutageDetected node/ip-10-0-160-152.ec2.internal Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-197-197: failed to establish a TCP connection to 10.0.197.197:6443: dial tcp 10.0.197.197:6443: connect: connection refused openshift-network-diagnostics 30m Normal ConnectivityRestored node/ip-10-0-160-152.ec2.internal Connectivity restored after 1m59.999157214s: openshift-apiserver-endpoint-ip-10-0-140-6: tcp connection to 10.128.0.57:8443 succeeded openshift-network-diagnostics 30m Warning ConnectivityOutageDetected node/ip-10-0-160-152.ec2.internal Connectivity outage detected: openshift-apiserver-endpoint-ip-10-0-140-6: failed to establish a TCP connection to 10.128.0.57:8443: dial tcp 10.128.0.57:8443: connect: connection refused openshift-kube-scheduler-operator 30m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-9-ip-10-0-197-197.ec2.internal -n openshift-kube-scheduler because it was missing openshift-network-diagnostics 30m Warning ConnectivityOutageDetected node/ip-10-0-160-152.ec2.internal Connectivity outage detected: network-check-target-ip-10-0-140-6: failed to establish a TCP connection to 10.128.0.3:8080: dial tcp 10.128.0.3:8080: i/o timeout openshift-network-diagnostics 30m Warning ConnectivityOutageDetected node/ip-10-0-160-152.ec2.internal Connectivity outage detected: network-check-target-ip-10-0-232-8: failed to establish a TCP connection to 10.128.2.5:8080: dial tcp 10.128.2.5:8080: i/o timeout openshift-network-diagnostics 30m Normal ConnectivityRestored node/ip-10-0-160-152.ec2.internal Connectivity restored after 1m0.000730817s: network-check-target-ip-10-0-140-6: tcp connection to 10.128.0.3:8080 succeeded openshift-network-diagnostics 30m Normal ConnectivityRestored node/ip-10-0-160-152.ec2.internal Connectivity restored after 1m59.999746023s: network-check-target-ip-10-0-232-8: tcp connection to 10.128.2.5:8080 succeeded openshift-network-diagnostics 30m Normal ConnectivityRestored node/ip-10-0-160-152.ec2.internal Connectivity restored after 1m59.999284855s: 
network-check-target-ip-10-0-187-75: tcp connection to 10.129.2.6:8080 succeeded openshift-network-diagnostics 30m Warning ConnectivityOutageDetected node/ip-10-0-160-152.ec2.internal Connectivity outage detected: network-check-target-ip-10-0-187-75: failed to establish a TCP connection to 10.129.2.6:8080: dial tcp 10.129.2.6:8080: i/o timeout openshift-kube-controller-manager 30m Normal Pulled pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-controller-manager 30m Normal Created pod/kube-controller-manager-ip-10-0-239-132.ec2.internal Created container kube-controller-manager default 30m Normal NodeHasSufficientMemory node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal status is now: NodeHasSufficientMemory openshift-kube-controller-manager-operator 30m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-197-197.ec2.internal\" not ready since 2023-03-21 12:43:40 +0000 UTC because KubeletNotReady (PLEG is not healthy: pleg has yet to be successful)" default 30m Normal NodeAllocatableEnforced node/ip-10-0-197-197.ec2.internal Updated Node Allocatable limit across pods default 30m Warning Rebooted node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal has been rebooted, boot id: 3e3e8b9b-2f8a-4983-a022-9e66ac7c9c40 default 30m Normal NodeNotReady node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal status is now: NodeNotReady openshift-kube-controller-manager 30m Warning Unhealthy pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal Readiness probe failed: Get "https://10.0.239.132:10257/healthz": dial tcp 10.0.239.132:10257: connect: connection refused default 30m Normal Starting node/ip-10-0-197-197.ec2.internal Starting kubelet. default 30m Normal NodeHasSufficientPID node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal status is now: NodeHasSufficientPID default 30m Normal NodeHasNoDiskPressure node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal status is now: NodeHasNoDiskPressure default 30m Normal NodeNotSchedulable node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal status is now: NodeNotSchedulable openshift-etcd-operator 30m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-197-197.ec2.internal\" not ready since 2023-03-21 12:43:40 +0000 UTC because KubeletNotReady (PLEG is not healthy: pleg has yet to be successful)\nEtcdMembersDegraded: No unhealthy members found" default 30m Normal NodeSchedulable node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal status is now: NodeSchedulable openshift-kube-controller-manager 30m Warning ProbeError pod/kube-controller-manager-guard-ip-10-0-239-132.ec2.internal Readiness probe error: Get "https://10.0.239.132:10257/healthz": dial tcp 10.0.239.132:10257: connect: connection refused... 
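To correlate the NodeNotReady, Rebooted and NodeSchedulable events for ip-10-0-197-197.ec2.internal with current node state, a minimal sketch using the Python kubernetes client (kubeconfig access is assumed; this is illustrative, not part of the captured data):

    # Minimal sketch: print the Ready condition of every node, with the time of
    # its last transition, to line up against the node events above.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        ready = next((c for c in node.status.conditions if c.type == "Ready"), None)
        if ready is None:
            continue
        print(f"{node.metadata.name:<35} Ready={ready.status:<7} "
              f"since {ready.last_transition_time}")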
openshift-kube-apiserver-operator 30m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:40:57 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:41:12 +0000 UTC is still not ready" to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-197-197.ec2.internal\" not ready since 2023-03-21 12:43:40 +0000 UTC because KubeletNotReady (PLEG is not healthy: pleg has yet to be successful)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:40:57 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:41:12 +0000 UTC is still not ready" openshift-kube-scheduler-operator 30m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-197-197.ec2.internal\" not ready since 2023-03-21 12:43:40 +0000 UTC because KubeletNotReady (PLEG is not healthy: pleg has yet to be successful)" openshift-kube-apiserver-operator 30m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-197-197.ec2.internal\" not ready since 2023-03-21 12:43:40 +0000 UTC because KubeletNotReady (PLEG is not healthy: pleg has yet to be successful)\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:40:57 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:41:12 +0000 UTC is still not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:40:57 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:41:12 +0000 UTC is still not ready" openshift-kube-apiserver-operator 30m Normal PodCreated deployment/kube-apiserver-operator Created Pod/revision-pruner-12-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-controller-manager-operator 30m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-197-197.ec2.internal\" not ready since 2023-03-21 12:43:40 +0000 UTC because KubeletNotReady (PLEG is not healthy: pleg has yet to be successful)" to "NodeControllerDegraded: All master nodes are ready" openshift-kube-controller-manager-operator 30m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: 
Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is terminated: Error: troller-manager?timeout=6s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nStaticPodsDegraded: I0321 12:43:22.550872 1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager\nStaticPodsDegraded: I0321 12:43:22.551024 1 event.go:294] \"Event occurred\" object=\"kube-system/kube-controller-manager\" fieldPath=\"\" kind=\"ConfigMap\" apiVersion=\"v1\" type=\"Normal\" reason=\"LeaderElection\" message=\"ip-10-0-239-132_6da329b0-e11f-4c1a-a70b-1fc5ea1bd5f3 became leader\"\nStaticPodsDegraded: I0321 12:43:22.551048 1 event.go:294] \"Event occurred\" object=\"kube-system/kube-controller-manager\" fieldPath=\"\" kind=\"Lease\" apiVersion=\"coordination.k8s.io/v1\" type=\"Normal\" reason=\"LeaderElection\" message=\"ip-10-0-239-132_6da329b0-e11f-4c1a-a70b-1fc5ea1bd5f3 became leader\"\nStaticPodsDegraded: W0321 12:43:22.570189 1 plugins.go:131] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future release. Please use https://github.com/kubernetes/cloud-provider-aws\nStaticPodsDegraded: I0321 12:43:22.570215 1 aws.go:1226] Get AWS region from metadata client\nStaticPodsDegraded: I0321 12:43:22.570358 1 aws.go:1269] Zone not specified in configuration file; querying AWS metadata service\nStaticPodsDegraded: I0321 12:43:22.571941 1 aws.go:1309] Building AWS cloudprovider\nStaticPodsDegraded: I0321 12:43:22.769975 1 tags.go:80] AWS cloud filtering on ClusterID: qeaisrhods-c13-28wr5\nStaticPodsDegraded: I0321 12:43:22.769996 1 aws.go:814] Setting up informers for Cloud\nStaticPodsDegraded: I0321 12:43:22.770689 1 shared_informer.go:273] Waiting for caches to sync for tokens\nStaticPodsDegraded: I0321 12:43:22.774709 1 controllermanager.go:645] Starting \"resourcequota\"\nStaticPodsDegraded: I0321 12:43:22.871097 1 shared_informer.go:280] Caches are synced for tokens\nStaticPodsDegraded: E0321 12:43:38.368518 1 controllermanager.go:648] Error starting \"resourcequota\"\nStaticPodsDegraded: F0321 12:43:38.368532 1 controllermanager.go:259] error starting controllers: failed to discover resources: Get \"https://api-int.qeaisrhods-c13.abmw.s1.devshift.org:6443/api\": dial tcp 10.0.209.0:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" openshift-kube-scheduler-operator 30m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-197-197.ec2.internal\" not ready since 2023-03-21 12:43:40 +0000 UTC 
because KubeletNotReady (PLEG is not healthy: pleg has yet to be successful)" to "NodeControllerDegraded: All master nodes are ready" openshift-etcd-operator 30m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-197-197.ec2.internal\" not ready since 2023-03-21 12:43:40 +0000 UTC because KubeletNotReady (PLEG is not healthy: pleg has yet to be successful)\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" openshift-etcd-operator 30m Normal PodCreated deployment/etcd-operator Created Pod/revision-pruner-7-ip-10-0-197-197.ec2.internal -n openshift-etcd because it was missing openshift-kube-controller-manager-operator 30m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-239-132.ec2.internal container \"kube-controller-manager\" is terminated: Error: troller-manager?timeout=6s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)\nStaticPodsDegraded: I0321 12:43:22.550872 1 leaderelection.go:258] successfully acquired lease kube-system/kube-controller-manager\nStaticPodsDegraded: I0321 12:43:22.551024 1 event.go:294] \"Event occurred\" object=\"kube-system/kube-controller-manager\" fieldPath=\"\" kind=\"ConfigMap\" apiVersion=\"v1\" type=\"Normal\" reason=\"LeaderElection\" message=\"ip-10-0-239-132_6da329b0-e11f-4c1a-a70b-1fc5ea1bd5f3 became leader\"\nStaticPodsDegraded: I0321 12:43:22.551048 1 event.go:294] \"Event occurred\" object=\"kube-system/kube-controller-manager\" fieldPath=\"\" kind=\"Lease\" apiVersion=\"coordination.k8s.io/v1\" type=\"Normal\" reason=\"LeaderElection\" message=\"ip-10-0-239-132_6da329b0-e11f-4c1a-a70b-1fc5ea1bd5f3 became leader\"\nStaticPodsDegraded: W0321 12:43:22.570189 1 plugins.go:131] WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future release. 
Please use https://github.com/kubernetes/cloud-provider-aws\nStaticPodsDegraded: I0321 12:43:22.570215 1 aws.go:1226] Get AWS region from metadata client\nStaticPodsDegraded: I0321 12:43:22.570358 1 aws.go:1269] Zone not specified in configuration file; querying AWS metadata service\nStaticPodsDegraded: I0321 12:43:22.571941 1 aws.go:1309] Building AWS cloudprovider\nStaticPodsDegraded: I0321 12:43:22.769975 1 tags.go:80] AWS cloud filtering on ClusterID: qeaisrhods-c13-28wr5\nStaticPodsDegraded: I0321 12:43:22.769996 1 aws.go:814] Setting up informers for Cloud\nStaticPodsDegraded: I0321 12:43:22.770689 1 shared_informer.go:273] Waiting for caches to sync for tokens\nStaticPodsDegraded: I0321 12:43:22.774709 1 controllermanager.go:645] Starting \"resourcequota\"\nStaticPodsDegraded: I0321 12:43:22.871097 1 shared_informer.go:280] Caches are synced for tokens\nStaticPodsDegraded: E0321 12:43:38.368518 1 controllermanager.go:648] Error starting \"resourcequota\"\nStaticPodsDegraded: F0321 12:43:38.368532 1 controllermanager.go:259] error starting controllers: failed to discover resources: Get \"https://api-int.qeaisrhods-c13.abmw.s1.devshift.org:6443/api\": dial tcp 10.0.209.0:6443: connect: connection refused\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" openshift-network-diagnostics 30m Warning ConnectivityOutageDetected node/ip-10-0-160-152.ec2.internal Connectivity outage detected: openshift-apiserver-endpoint-ip-10-0-197-197: failed to establish a TCP connection to 10.130.0.42:8443: dial tcp 10.130.0.42:8443: connect: connection refused openshift-network-diagnostics 30m Warning ConnectivityOutageDetected node/ip-10-0-160-152.ec2.internal Connectivity outage detected: openshift-apiserver-endpoint-ip-10-0-197-197: failed to establish a TCP connection to 10.130.0.50:8443: dial tcp 10.130.0.50:8443: connect: connection refused openshift-network-diagnostics 30m Normal ConnectivityRestored node/ip-10-0-160-152.ec2.internal Connectivity restored after 59.99993261s: openshift-apiserver-endpoint-ip-10-0-197-197: tcp connection to 10.130.0.42:8443 succeeded openshift-network-diagnostics 30m Warning ConnectivityOutageDetected node/ip-10-0-160-152.ec2.internal Connectivity outage 
detected: network-check-target-ip-10-0-197-197: failed to establish a TCP connection to 10.130.0.3:8080: dial tcp 10.130.0.3:8080: i/o timeout openshift-etcd-operator 30m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" openshift-etcd-operator 30m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" openshift-etcd-operator 30m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:1.016132ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:779.301µs Error:}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" openshift-etcd-operator 30m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:1.016132ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:779.301µs Error:}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has 
quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:1.016132ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:779.301µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" kube-system 30m Normal LeaderElection configmap/kube-controller-manager ip-10-0-140-6_b4fdaa3a-1a7c-4c0a-ab48-12b781bae938 became leader kube-system 30m Normal LeaderElection lease/kube-controller-manager ip-10-0-140-6_b4fdaa3a-1a7c-4c0a-ab48-12b781bae938 became leader default 30m Normal RegisteredNode node/ip-10-0-160-152.ec2.internal Node ip-10-0-160-152.ec2.internal event: Registered Node ip-10-0-160-152.ec2.internal in Controller default 30m Normal RegisteredNode node/ip-10-0-187-75.ec2.internal Node ip-10-0-187-75.ec2.internal event: Registered Node ip-10-0-187-75.ec2.internal in Controller default 30m Normal RegisteredNode node/ip-10-0-197-197.ec2.internal Node ip-10-0-197-197.ec2.internal event: Registered Node ip-10-0-197-197.ec2.internal in Controller default 30m Normal RegisteredNode node/ip-10-0-140-6.ec2.internal Node ip-10-0-140-6.ec2.internal event: Registered Node ip-10-0-140-6.ec2.internal in Controller openshift-ingress 30m Normal EnsuringLoadBalancer service/router-default Ensuring load balancer default 30m Normal RegisteredNode node/ip-10-0-239-132.ec2.internal Node ip-10-0-239-132.ec2.internal event: Registered Node ip-10-0-239-132.ec2.internal in Controller default 30m Normal RegisteredNode node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal event: Registered Node ip-10-0-195-121.ec2.internal in Controller default 30m Normal RegisteredNode node/ip-10-0-232-8.ec2.internal Node ip-10-0-232-8.ec2.internal event: Registered Node ip-10-0-232-8.ec2.internal in Controller default 30m Warning ResolutionFailed namespace/openshift-custom-domains-operator Get "https://172.30.0.1:443/apis/operators.coreos.com/v1alpha1/namespaces/openshift-custom-domains-operator/subscriptions": dial tcp 172.30.0.1:443: connect: connection refused openshift-ingress 30m Normal EnsuredLoadBalancer service/router-default Ensured load balancer openshift-cluster-csi-drivers 30m Normal LeaderElection lease/external-attacher-leader-ebs-csi-aws-com aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk became leader openshift-kube-apiserver-operator 30m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:40:57 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:41:12 +0000 UTC is still not ready" to "NodeControllerDegraded: All master 
nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: ricted-v2\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:image-puller\" not found]\nStaticPodsDegraded: I0321 12:41:23.235625 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:41:23.235643 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:41:23.235694 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: I0321 12:41:23.236424 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " openshift-monitoring 30m Normal SuccessfulAttachVolume pod/alertmanager-main-1 AttachVolume.Attach succeeded for volume "pvc-88f77b77-893b-4d58-9c84-470da77b4262" openshift-monitoring 30m Normal SuccessfulAttachVolume pod/prometheus-k8s-0 AttachVolume.Attach succeeded for volume "pvc-7d81aae0-58a0-4040-982d-7f7c86fa6c88" openshift-monitoring 30m Normal AddedInterface pod/alertmanager-main-1 Add eth0 [10.129.2.21/23] from ovn-kubernetes openshift-monitoring 30m Normal Pulling pod/alertmanager-main-1 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" openshift-monitoring 30m Normal Created pod/prometheus-k8s-0 
Created container init-config-reloader openshift-monitoring 30m Normal Started pod/prometheus-k8s-0 Started container init-config-reloader openshift-monitoring 30m Normal AddedInterface pod/prometheus-k8s-0 Add eth0 [10.129.2.20/23] from ovn-kubernetes openshift-monitoring 30m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 30m Normal Pulling pod/prometheus-k8s-0 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" openshift-monitoring 30m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" already present on machine openshift-monitoring 30m Normal Created pod/alertmanager-main-1 Created container prom-label-proxy openshift-monitoring 30m Normal Created pod/alertmanager-main-1 Created container kube-rbac-proxy-metric openshift-cluster-csi-drivers 30m Normal LeaderElection lease/external-snapshotter-leader-ebs-csi-aws-com aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk became leader openshift-monitoring 30m Normal Created pod/alertmanager-main-1 Created container alertmanager-proxy openshift-monitoring 30m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 30m Normal Started pod/alertmanager-main-1 Started container prom-label-proxy openshift-monitoring 30m Normal Created pod/alertmanager-main-1 Created container alertmanager openshift-monitoring 30m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 30m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 30m Normal Created pod/alertmanager-main-1 Created container kube-rbac-proxy openshift-monitoring 30m Normal Started pod/alertmanager-main-1 Started container kube-rbac-proxy openshift-monitoring 30m Normal Pulled pod/alertmanager-main-1 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" in 1.452062175s (1.452074346s including waiting) openshift-monitoring 30m Normal Started pod/alertmanager-main-1 Started container kube-rbac-proxy-metric openshift-monitoring 30m Normal Created pod/alertmanager-main-1 Created container config-reloader openshift-monitoring 30m Normal Started pod/alertmanager-main-1 Started container config-reloader openshift-monitoring 30m Normal Started pod/alertmanager-main-1 Started container alertmanager openshift-monitoring 30m Normal Pulled pod/alertmanager-main-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 30m Normal Started pod/alertmanager-main-1 Started container alertmanager-proxy openshift-machine-config-operator 29m Normal Pulling pod/machine-config-daemon-ll5kq Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" openshift-kube-controller-manager 29m Warning Failed pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Error: ErrImagePull openshift-kube-controller-manager 29m Warning Failed pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f": pull QPS exceeded openshift-kube-controller-manager 29m Normal Pulling pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" openshift-machine-config-operator 29m Warning Failed pod/machine-config-daemon-ll5kq Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6": pull QPS exceeded openshift-kube-controller-manager 29m Warning Failed pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Error: ErrImagePull openshift-kube-controller-manager 29m Warning Failed pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9": pull QPS exceeded openshift-kube-controller-manager 29m Normal Pulling pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" openshift-cluster-node-tuning-operator 29m Normal Pulling pod/tuned-x9jkg Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" openshift-kube-controller-manager 29m Warning Failed pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Error: ErrImagePull openshift-multus 29m Warning Failed pod/multus-486wq Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a": pull QPS exceeded openshift-operator-lifecycle-manager 29m Normal LeaderElection lease/packageserver-controller-lock package-server-manager-fc98f8f64-h9b5w_0fe78db5-a32e-41cb-8642-dae4c81db536 became leader openshift-cluster-csi-drivers 29m Warning Failed pod/aws-ebs-csi-driver-node-q9lmf Error: ErrImagePull openshift-cluster-csi-drivers 29m Warning Failed pod/aws-ebs-csi-driver-node-q9lmf Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487": pull QPS exceeded openshift-kube-controller-manager 29m Warning Failed pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865": pull QPS exceeded openshift-kube-controller-manager 29m Normal Pulling pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" openshift-ovn-kubernetes 29m Normal Pulling pod/ovnkube-node-x8pqn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-kube-scheduler 29m Normal Pulling 
pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" openshift-machine-config-operator 29m Warning Failed pod/machine-config-daemon-ll5kq Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154": pull QPS exceeded openshift-machine-config-operator 29m Warning Failed pod/machine-config-daemon-ll5kq Error: ErrImagePull openshift-dns 29m Warning Failed pod/node-resolver-t57dw Error: ErrImagePull openshift-dns 29m Warning Failed pod/node-resolver-t57dw Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe": pull QPS exceeded openshift-cluster-csi-drivers 29m Warning Failed pod/aws-ebs-csi-driver-node-q9lmf Error: ErrImagePull openshift-cluster-csi-drivers 29m Warning Failed pod/aws-ebs-csi-driver-node-q9lmf Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821": pull QPS exceeded openshift-machine-config-operator 29m Warning Failed pod/machine-config-daemon-ll5kq Error: ErrImagePull openshift-multus 29m Warning Failed pod/multus-486wq Error: ErrImagePull openshift-etcd 29m Normal Pulling pod/etcd-ip-10-0-197-197.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" openshift-image-registry 29m Normal Pulling pod/node-ca-rz7r5 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-kube-apiserver 29m Normal Pulling pod/kube-apiserver-ip-10-0-197-197.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" openshift-multus 29m Normal Pulling pod/multus-additional-cni-plugins-hg7bc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" openshift-monitoring 29m Normal Pulling pod/node-exporter-ztvgk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" openshift-cluster-csi-drivers 29m Warning Failed pod/aws-ebs-csi-driver-node-q9lmf Failed to pull image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140": pull QPS exceeded openshift-cluster-storage-operator 29m Normal LeaderElection lease/snapshot-controller-leader csi-snapshot-controller-f58c44499-svdlt became leader openshift-machine-config-operator 29m Normal Pulling pod/machine-config-server-4bmnx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" openshift-cluster-csi-drivers 29m Warning Failed pod/aws-ebs-csi-driver-node-q9lmf Error: ErrImagePull openshift-ovn-kubernetes 29m Normal Pulling pod/ovnkube-master-kzdhz Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-monitoring 29m Normal Created pod/prometheus-k8s-0 Created container prometheus openshift-monitoring 29m Normal Killing pod/alertmanager-main-0 Stopping container kube-rbac-proxy openshift-monitoring 29m Normal Pulled 
pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 29m Normal Killing pod/alertmanager-main-0 Stopping container kube-rbac-proxy-metric openshift-monitoring 29m Normal Killing pod/alertmanager-main-0 Stopping container prom-label-proxy openshift-monitoring 29m Normal Killing pod/alertmanager-main-0 Stopping container config-reloader openshift-monitoring 29m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" already present on machine openshift-monitoring 29m Normal Started pod/prometheus-k8s-0 Started container config-reloader openshift-monitoring 29m Normal Created pod/prometheus-k8s-0 Created container config-reloader openshift-monitoring 29m Normal Pulled pod/prometheus-k8s-0 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" in 3.309209782s (3.309225765s including waiting) openshift-monitoring 29m Normal Started pod/prometheus-k8s-0 Started container prometheus openshift-monitoring 29m Normal Started pod/prometheus-k8s-0 Started container prometheus-proxy openshift-monitoring 29m Normal Created pod/prometheus-k8s-0 Created container kube-rbac-proxy openshift-monitoring 29m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-cluster-storage-operator 29m Warning FastControllerResync deployment/csi-snapshot-controller-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-monitoring 29m Normal Created pod/prometheus-k8s-0 Created container thanos-sidecar openshift-monitoring 29m Normal Started pod/prometheus-k8s-0 Started container thanos-sidecar openshift-monitoring 29m Normal Started pod/prometheus-k8s-0 Started container kube-rbac-proxy openshift-cluster-storage-operator 29m Normal LeaderElection configmap/csi-snapshot-controller-operator-lock csi-snapshot-controller-operator-c9586b974-k2tdv_8730e4bd-e404-4090-aecb-132c0b1dbeb5 became leader openshift-monitoring 29m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 29m Normal Pulled pod/prometheus-k8s-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 29m Normal Created pod/prometheus-k8s-0 Created container kube-rbac-proxy-thanos openshift-monitoring 29m Normal Created pod/prometheus-k8s-0 Created container prometheus-proxy openshift-monitoring 29m Normal Started pod/prometheus-k8s-0 Started container kube-rbac-proxy-thanos openshift-cluster-storage-operator 29m Normal LeaderElection lease/csi-snapshot-controller-operator-lock csi-snapshot-controller-operator-c9586b974-k2tdv_8730e4bd-e404-4090-aecb-132c0b1dbeb5 became leader openshift-cluster-storage-operator 29m Normal ValidatingWebhookConfigurationUpdated deployment/csi-snapshot-controller-operator Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/snapshot.storage.k8s.io because it 
changed openshift-cluster-csi-drivers 29m Normal BackOff pod/aws-ebs-csi-driver-node-q9lmf Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" openshift-monitoring 29m Normal SuccessfulCreate statefulset/alertmanager-main create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful openshift-cluster-csi-drivers 29m Warning Failed pod/aws-ebs-csi-driver-node-q9lmf Error: ImagePullBackOff openshift-cluster-csi-drivers 29m Normal BackOff pod/aws-ebs-csi-driver-node-q9lmf Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" openshift-etcd-operator 29m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:1.016132ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:779.301µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:936.763µs Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:10.861014ms Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" openshift-cluster-csi-drivers 29m Warning Failed pod/aws-ebs-csi-driver-node-q9lmf Error: ImagePullBackOff openshift-cluster-storage-operator 29m Normal OperatorStatusChanged deployment/csi-snapshot-controller-operator Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well") openshift-ovn-kubernetes 29m Normal LeaderElection lease/ovn-kubernetes-cluster-manager ip-10-0-140-6.ec2.internal became leader openshift-cluster-csi-drivers 29m Normal BackOff pod/aws-ebs-csi-driver-node-q9lmf 
Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" openshift-cluster-csi-drivers 29m Warning Failed pod/aws-ebs-csi-driver-node-q9lmf Error: ImagePullBackOff openshift-machine-config-operator 29m Warning Failed pod/machine-config-daemon-ll5kq Error: ImagePullBackOff openshift-kube-controller-manager 29m Normal BackOff pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" openshift-kube-controller-manager 29m Warning Failed pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Error: ImagePullBackOff openshift-machine-config-operator 29m Normal BackOff pod/machine-config-daemon-ll5kq Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" openshift-dns 29m Warning Failed pod/node-resolver-t57dw Error: ImagePullBackOff openshift-machine-config-operator 29m Warning Failed pod/machine-config-daemon-ll5kq Error: ImagePullBackOff openshift-multus 29m Normal BackOff pod/multus-486wq Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" openshift-machine-config-operator 29m Normal BackOff pod/machine-config-daemon-ll5kq Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" openshift-multus 29m Warning Failed pod/multus-486wq Error: ImagePullBackOff openshift-dns 29m Normal BackOff pod/node-resolver-t57dw Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" openshift-kube-controller-manager 29m Normal BackOff pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" openshift-kube-controller-manager 29m Warning Failed pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Error: ImagePullBackOff openshift-kube-storage-version-migrator-operator 29m Warning FastControllerResync deployment/kube-storage-version-migrator-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 29m Normal BackOff pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Back-off pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" openshift-kube-controller-manager 29m Warning Failed pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Error: ImagePullBackOff openshift-kube-storage-version-migrator-operator 29m Warning FastControllerResync deployment/kube-storage-version-migrator-operator Controller "StaticConditionsController" resync interval is set to 0s which might lead to client request throttling openshift-kube-storage-version-migrator-operator 29m Normal LeaderElection configmap/openshift-kube-storage-version-migrator-operator-lock kube-storage-version-migrator-operator-7f8b95cf5f-dvvp5_88ca6f19-475f-4201-b22e-449208bd96ac became leader openshift-kube-controller-manager 29m Normal BackOff pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Back-off pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" openshift-kube-controller-manager 29m Warning Failed pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Error: ImagePullBackOff openshift-kube-controller-manager-operator 29m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: \nNodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is waiting: ImagePullBackOff: Back-off pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9\"\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is waiting: ImagePullBackOff: Back-off pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865\"\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ImagePullBackOff: Back-off pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f\"\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ImagePullBackOff: Back-off pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f\"\nNodeControllerDegraded: All master nodes are ready" openshift-cluster-csi-drivers 29m Normal LeaderElection lease/ebs-csi-aws-com 1679402245089-8081-ebs-csi-aws-com became leader openshift-kube-apiserver 29m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 29m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container setup openshift-kube-apiserver-operator 29m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: ricted-v2\" not 
found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"helm-chartrepos-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:image-puller\" not found]\nStaticPodsDegraded: I0321 12:41:23.235625 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:41:23.235643 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:41:23.235694 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: I0321 12:41:23.236424 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-140-6.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " to "NodeControllerDegraded: All master nodes are ready" openshift-kube-apiserver 29m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container setup openshift-kube-apiserver 29m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver openshift-kube-apiserver 29m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver openshift-kube-apiserver 29m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 29m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 29m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver-cert-syncer openshift-kube-apiserver 29m Normal Started 
pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 29m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 29m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 29m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver-insecure-readyz openshift-kube-apiserver 29m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 29m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver-cert-syncer openshift-kube-apiserver 29m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver-check-endpoints openshift-monitoring 29m Normal Pulled pod/node-exporter-ztvgk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" in 11.133059782s (11.13306934s including waiting) openshift-kube-apiserver 29m Normal Started pod/kube-apiserver-ip-10-0-140-6.ec2.internal Started container kube-apiserver-check-endpoints openshift-kube-apiserver 29m Normal Pulled pod/kube-apiserver-ip-10-0-140-6.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 29m Normal Created pod/kube-apiserver-ip-10-0-140-6.ec2.internal Created container kube-apiserver-insecure-readyz openshift-kube-apiserver 29m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver 29m Warning FastControllerResync node/ip-10-0-140-6.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 29m Warning FastControllerResync deployment/cluster-storage-operator Controller "VSphereProblemDetectorStarter" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 29m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Degraded changed from False to True ("AWSEBSCSIDriverOperatorStaticControllerDegraded: \"csidriveroperators/aws-ebs/standalone/07_role_aws_config.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-config-managed/roles/aws-ebs-csi-driver-operator-aws-config-role\": context canceled\nAWSEBSCSIDriverOperatorStaticControllerDegraded: \"csidriveroperators/aws-ebs/standalone/08_rolebinding_aws_config.yaml\" (string): client rate limiter Wait returned an error: context canceled\nAWSEBSCSIDriverOperatorStaticControllerDegraded: ") openshift-cluster-storage-operator 29m Normal LeaderElection lease/cluster-storage-operator-lock cluster-storage-operator-fb5868667-wn4n8_7a67a697-9d80-4286-8152-ed9c28f59003 became leader openshift-cluster-storage-operator 29m Normal 
OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Degraded changed from True to False ("AWSEBSCSIDriverOperatorCRDegraded: All is well") openshift-cluster-storage-operator 29m Warning FastControllerResync deployment/cluster-storage-operator Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 29m Normal LeaderElection configmap/cluster-storage-operator-lock cluster-storage-operator-fb5868667-wn4n8_7a67a697-9d80-4286-8152-ed9c28f59003 became leader openshift-cluster-storage-operator 29m Warning FastControllerResync deployment/cluster-storage-operator Controller "CSIDriverStarter" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 29m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing message changed from "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverControllerServiceControllerProgressing: Waiting for Deployment to deploy pods\nAWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" to "AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods" openshift-cluster-storage-operator 29m Warning FastControllerResync deployment/cluster-storage-operator Controller "SnapshotCRDController" resync interval is set to 0s which might lead to client request throttling openshift-cluster-storage-operator 29m Warning FastControllerResync deployment/cluster-storage-operator Controller "DefaultStorageClassController" resync interval is set to 0s which might lead to client request throttling openshift-monitoring 29m Normal Killing pod/prometheus-k8s-1 Stopping container thanos-sidecar openshift-monitoring 29m Normal Killing pod/prometheus-k8s-1 Stopping container kube-rbac-proxy-thanos openshift-monitoring 29m Normal Killing pod/prometheus-k8s-1 Stopping container prometheus openshift-image-registry 29m Normal Pulled pod/node-ca-rz7r5 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 15.596803683s (15.596816606s including waiting) openshift-multus 29m Normal Pulling pod/multus-486wq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" openshift-multus 29m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" in 16.474546084s (16.474553723s including waiting) openshift-etcd 29m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" in 16.458155104s (16.458163225s including waiting) openshift-machine-config-operator 29m Normal Pulled pod/machine-config-server-4bmnx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" in 16.468918633s (16.468923494s including waiting) openshift-etcd-operator 29m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master 
nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:936.763µs Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:10.861014ms Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:2.202757ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:849.358µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" openshift-cluster-csi-drivers 29m Normal Pulling pod/aws-ebs-csi-driver-node-q9lmf Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" openshift-dns 29m Normal Pulling pod/node-resolver-t57dw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" openshift-kube-scheduler 29m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container wait-for-host-port openshift-ovn-kubernetes 29m Normal Pulled pod/ovnkube-node-x8pqn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 19.792697573s (19.792704877s including waiting) openshift-multus 29m Normal Started pod/multus-additional-cni-plugins-hg7bc Started container egress-router-binary-copy openshift-kube-apiserver 29m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container setup openshift-kube-apiserver 29m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" in 19.55742208s (19.557431648s including waiting) openshift-kube-scheduler 29m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Successfully pulled 
image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" in 19.543198935s (19.543206841s including waiting) openshift-multus 29m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container egress-router-binary-copy openshift-cluster-node-tuning-operator 29m Normal Created pod/tuned-x9jkg Created container tuned openshift-etcd 29m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container setup openshift-machine-config-operator 29m Normal Created pod/machine-config-daemon-ll5kq Created container machine-config-daemon openshift-machine-config-operator 29m Normal Pulled pod/machine-config-daemon-ll5kq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" already present on machine openshift-cluster-node-tuning-operator 29m Normal Pulled pod/tuned-x9jkg Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" in 19.528250432s (19.528260644s including waiting) openshift-machine-config-operator 29m Normal Created pod/machine-config-server-4bmnx Created container machine-config-server openshift-image-registry 29m Normal Created pod/node-ca-rz7r5 Created container node-ca openshift-ovn-kubernetes 29m Normal Pulled pod/ovnkube-master-kzdhz Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 19.79197782s (19.79199289s including waiting) openshift-ovn-kubernetes 29m Normal Created pod/ovnkube-master-kzdhz Created container northd openshift-monitoring 29m Normal Created pod/node-exporter-ztvgk Created container init-textfile openshift-multus 29m Normal Pulling pod/multus-additional-cni-plugins-hg7bc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" openshift-controller-manager-operator 29m Normal OperatorStatusChanged deployment/openshift-controller-manager-operator Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well") openshift-monitoring 29m Normal Started pod/node-exporter-ztvgk Started container init-textfile openshift-controller-manager-operator 29m Normal LeaderElection lease/openshift-controller-manager-operator-lock openshift-controller-manager-operator-6548869cc5-xfpsm_61b3d353-345b-40cd-9408-c196068ba8e7 became leader openshift-kube-scheduler 29m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-machine-config-operator 29m Normal Pulling pod/machine-config-daemon-ll5kq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" openshift-kube-scheduler 29m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container wait-for-host-port openshift-etcd 29m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 29m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container setup openshift-ovn-kubernetes 29m 
Normal Started pod/ovnkube-master-kzdhz Started container northd openshift-kube-apiserver 29m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-cluster-node-tuning-operator 29m Normal Started pod/tuned-x9jkg Started container tuned openshift-kube-apiserver 29m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container setup openshift-monitoring 29m Normal Pulled pod/node-exporter-ztvgk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine openshift-monitoring 29m Normal Created pod/node-exporter-ztvgk Created container node-exporter openshift-ovn-kubernetes 29m Normal Pulled pod/ovnkube-master-kzdhz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-machine-config-operator 29m Normal Started pod/machine-config-server-4bmnx Started container machine-config-server openshift-monitoring 29m Normal Started pod/node-exporter-ztvgk Started container node-exporter openshift-monitoring 29m Normal Pulling pod/node-exporter-ztvgk Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-ovn-kubernetes 29m Normal Pulled pod/ovnkube-node-x8pqn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-image-registry 29m Normal Started pod/node-ca-rz7r5 Started container node-ca openshift-ovn-kubernetes 29m Normal Created pod/ovnkube-node-x8pqn Created container ovn-acl-logging openshift-ovn-kubernetes 29m Normal Started pod/ovnkube-node-x8pqn Started container ovn-acl-logging openshift-controller-manager-operator 29m Normal LeaderElection configmap/openshift-controller-manager-operator-lock openshift-controller-manager-operator-6548869cc5-xfpsm_61b3d353-345b-40cd-9408-c196068ba8e7 became leader openshift-ovn-kubernetes 29m Normal Pulling pod/ovnkube-node-x8pqn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-ovn-kubernetes 29m Normal Created pod/ovnkube-master-kzdhz Created container nbdb openshift-ovn-kubernetes 29m Normal Started pod/ovnkube-master-kzdhz Started container nbdb openshift-machine-config-operator 29m Normal Started pod/machine-config-daemon-ll5kq Started container machine-config-daemon openshift-kube-scheduler 29m Normal Pulling pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" openshift-etcd 29m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-ensure-env-vars openshift-kube-scheduler 29m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container kube-scheduler openshift-kube-apiserver 29m Normal Pulling pod/kube-apiserver-ip-10-0-197-197.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" openshift-etcd 29m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container 
image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 29m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-ensure-env-vars openshift-kube-apiserver 29m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver openshift-kube-apiserver 29m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver openshift-ovn-kubernetes 29m Normal Pulling pod/ovnkube-master-kzdhz Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-kube-scheduler 29m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container kube-scheduler openshift-cluster-node-tuning-operator 29m Normal LeaderElection lease/node-tuning-operator-lock cluster-node-tuning-operator-5886c76fd4-cntr6_c7d90fcd-7823-49b6-9536-5c5fc09075f3 became leader openshift-cluster-node-tuning-operator 29m Normal LeaderElection configmap/node-tuning-operator-lock cluster-node-tuning-operator-5886c76fd4-cntr6_c7d90fcd-7823-49b6-9536-5c5fc09075f3 became leader openshift-ovn-kubernetes 29m Normal LeaderElection lease/ovn-kubernetes-master ip-10-0-140-6.ec2.internal became leader default 29m Warning ErrorReconcilingNode node/ip-10-0-197-197.ec2.internal error creating gateway for node ip-10-0-197-197.ec2.internal: failed to init shared interface gateway: failed to sync stale SNATs on node ip-10-0-197-197.ec2.internal: unable to fetch podIPs for pod openshift-kube-scheduler/revision-pruner-9-ip-10-0-197-197.ec2.internal openshift-etcd-operator 29m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:2.202757ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:849.358µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:928.049µs Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client 
for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:879.218µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" openshift-machine-api 29m Normal LeaderElection lease/control-plane-machine-set-leader control-plane-machine-set-operator-77b4c948f8-7vvdb_13750e9d-c49f-4eff-a78e-a2ffc1fe3a77 became leader openshift-cluster-csi-drivers 29m Normal LeaderElection lease/external-resizer-ebs-csi-aws-com aws-ebs-csi-driver-controller-5ff7cf9694-f4pwk became leader default 29m Warning ResolutionFailed namespace/openshift-managed-node-metadata-operator constraints not satisfiable: subscription managed-node-metadata-operator exists, no operators found from catalog managed-node-metadata-operator-registry in namespace openshift-managed-node-metadata-operator referenced by subscription managed-node-metadata-operator openshift-machine-api 29m Normal LeaderElection lease/cluster-autoscaler-operator-leader cluster-autoscaler-operator-7fcffdb7c8-hswcn_591a380c-5ecc-40f1-9884-59d48ab3ab81 became leader openshift-etcd-operator 29m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:928.049µs Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:879.218µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:1.027026ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:887.443µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are 
available, ip-10-0-197-197.ec2.internal is unhealthy" default 29m Warning ResolutionFailed namespace/openshift-custom-domains-operator constraints not satisfiable: subscription custom-domains-operator exists, no operators found from catalog custom-domains-operator-registry in namespace openshift-custom-domains-operator referenced by subscription custom-domains-operator openshift-etcd-operator 29m Warning UnhealthyEtcdMember deployment/etcd-operator unhealthy members: ip-10-0-197-197.ec2.internal openshift-etcd 29m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-etcd 29m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-resources-copy openshift-etcd 29m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-resources-copy openshift-cluster-csi-drivers 29m Normal Pulled pod/aws-ebs-csi-driver-node-q9lmf Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" in 33.053976792s (33.053986967s including waiting) openshift-machine-config-operator 29m Normal Pulled pod/machine-config-daemon-ll5kq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" in 31.46705761s (31.467069809s including waiting) openshift-dns 29m Normal Pulled pod/node-resolver-t57dw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" in 33.083628096s (33.083636156s including waiting) openshift-machine-config-operator 29m Normal Created pod/machine-config-daemon-ll5kq Created container oauth-proxy openshift-ovn-kubernetes 29m Normal Started pod/ovnkube-master-kzdhz Started container sbdb openshift-ovn-kubernetes 29m Normal Pulled pod/ovnkube-master-kzdhz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 29m Normal Created pod/ovnkube-master-kzdhz Created container sbdb openshift-ovn-kubernetes 29m Normal Pulled pod/ovnkube-master-kzdhz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-etcd 29m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-kube-scheduler 29m Warning FastControllerResync pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler 29m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-ovn-kubernetes 29m Normal Pulled pod/ovnkube-node-x8pqn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 29m Normal Started pod/ovnkube-node-x8pqn 
Started container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 29m Normal Created pod/ovnkube-node-x8pqn Created container kube-rbac-proxy-ovn-metrics openshift-etcd 29m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcdctl openshift-ovn-kubernetes 29m Normal Pulled pod/ovnkube-node-x8pqn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ovn-kubernetes 29m Normal Started pod/ovnkube-node-x8pqn Started container kube-rbac-proxy openshift-ovn-kubernetes 29m Normal Created pod/ovnkube-node-x8pqn Created container kube-rbac-proxy openshift-ovn-kubernetes 29m Normal Pulled pod/ovnkube-node-x8pqn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 31.208522004s (31.208530972s including waiting) openshift-ovn-kubernetes 29m Normal Started pod/ovnkube-master-kzdhz Started container kube-rbac-proxy openshift-ovn-kubernetes 29m Normal Created pod/ovnkube-master-kzdhz Created container kube-rbac-proxy openshift-kube-scheduler 29m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container kube-scheduler-cert-syncer openshift-ovn-kubernetes 29m Normal Pulled pod/ovnkube-master-kzdhz Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 30.990949699s (30.99095774s including waiting) openshift-multus 29m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" in 31.361227263s (31.361234648s including waiting) openshift-etcd 29m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd openshift-cluster-csi-drivers 29m Normal Started pod/aws-ebs-csi-driver-node-q9lmf Started container csi-driver openshift-cluster-csi-drivers 29m Normal Created pod/aws-ebs-csi-driver-node-q9lmf Created container csi-driver openshift-etcd 29m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd openshift-etcd 29m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcdctl openshift-multus 29m Normal Pulled pod/multus-486wq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" in 35.567347609s (35.56736246s including waiting) openshift-etcd 29m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61f249fe0adadafbd49133d1be1ef71228f0a3ecadf2b182c8343e94ecb3dc7b" already present on machine openshift-multus 29m Normal Started pod/multus-486wq Started container kube-multus openshift-cluster-csi-drivers 29m Normal Pulling pod/aws-ebs-csi-driver-node-q9lmf Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" openshift-dns 29m Normal Started pod/node-resolver-t57dw Started container dns-node-resolver openshift-kube-scheduler 29m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container kube-scheduler-cert-syncer openshift-monitoring 29m Normal Started pod/node-exporter-ztvgk Started container kube-rbac-proxy openshift-multus 29m Normal Pulling pod/multus-additional-cni-plugins-hg7bc 
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" openshift-multus 29m Normal Started pod/multus-additional-cni-plugins-hg7bc Started container cni-plugins openshift-multus 29m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container cni-plugins openshift-monitoring 29m Normal Pulled pod/node-exporter-ztvgk Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 31.098524639s (31.098537733s including waiting) openshift-kube-scheduler 29m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" in 30.786170582s (30.786182793s including waiting) openshift-kube-apiserver 29m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" in 30.805568957s (30.805604387s including waiting) openshift-kube-apiserver 29m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-syncer openshift-kube-apiserver 29m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-syncer openshift-monitoring 29m Normal Created pod/node-exporter-ztvgk Created container kube-rbac-proxy openshift-ovn-kubernetes 29m Normal Created pod/ovnkube-node-x8pqn Created container ovnkube-node openshift-ovn-kubernetes 29m Normal Started pod/ovnkube-node-x8pqn Started container ovnkube-node openshift-kube-apiserver 29m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 29m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 29m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 29m Warning FastControllerResync pod/kube-apiserver-ip-10-0-197-197.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-dns 29m Normal Created pod/node-resolver-t57dw Created container dns-node-resolver openshift-kube-apiserver-operator 29m Normal PodCreated deployment/kube-apiserver-operator Created Pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-kube-apiserver 29m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-machine-config-operator 29m Normal Started pod/machine-config-daemon-ll5kq Started container oauth-proxy openshift-multus 29m Normal Created pod/multus-486wq Created container kube-multus openshift-kube-scheduler-operator 29m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal -n openshift-kube-scheduler because it was missing openshift-kube-apiserver 29m Normal 
Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-insecure-readyz openshift-kube-apiserver 29m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-insecure-readyz openshift-kube-apiserver 29m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 29m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-check-endpoints openshift-kube-apiserver 29m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-check-endpoints openshift-operator-lifecycle-manager 29m Normal SuccessfulCreate job/collect-profiles-27990045 Created pod: collect-profiles-27990045-xf7fw openshift-ovn-kubernetes 29m Normal Started pod/ovnkube-master-kzdhz Started container ovn-dbchecker openshift-ovn-kubernetes 29m Normal Created pod/ovnkube-master-kzdhz Created container ovn-dbchecker openshift-ovn-kubernetes 29m Normal Pulled pod/ovnkube-master-kzdhz Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-kube-scheduler 29m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container kube-scheduler-recovery-controller openshift-ovn-kubernetes 29m Normal Started pod/ovnkube-master-kzdhz Started container ovnkube-master openshift-ovn-kubernetes 29m Normal Created pod/ovnkube-master-kzdhz Created container ovnkube-master openshift-etcd 29m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-metrics openshift-etcd 29m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-metrics openshift-kube-scheduler 29m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container kube-scheduler-recovery-controller openshift-etcd 29m Normal Pulling pod/etcd-ip-10-0-197-197.ec2.internal Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" openshift-etcd-operator 29m Normal PodCreated deployment/etcd-operator Created Pod/etcd-guard-ip-10-0-197-197.ec2.internal -n openshift-etcd because it was missing openshift-monitoring 29m Normal SuccessfulCreate job/osd-rebalance-infra-nodes-27990045 Created pod: osd-rebalance-infra-nodes-27990045-mxscc openshift-kube-controller-manager 29m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-197-197.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-operator-lifecycle-manager 29m Normal SuccessfulCreate cronjob/collect-profiles Created job collect-profiles-27990045 openshift-monitoring 29m Normal SuccessfulCreate cronjob/osd-rebalance-infra-nodes Created job osd-rebalance-infra-nodes-27990045 openshift-multus 29m Normal Started pod/multus-additional-cni-plugins-hg7bc Started container bond-cni-plugin openshift-kube-controller-manager 29m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-239-132.ec2.internal created SCC ranges for redhat-ods-operator namespace 
openshift-image-registry 29m Normal SuccessfulCreate replicaset/image-registry-5bd87dfd7 Created pod: image-registry-5bd87dfd7-4rptq openshift-cluster-csi-drivers 29m Normal Created pod/aws-ebs-csi-driver-node-q9lmf Created container csi-node-driver-registrar openshift-image-registry 29m Normal ScalingReplicaSet deployment/image-registry Scaled up replica set image-registry-5bd87dfd7 to 2 from 1 openshift-image-registry 29m Normal ScalingReplicaSet deployment/image-registry Scaled up replica set image-registry-5bd87dfd7 to 1 openshift-image-registry 29m Normal SuccessfulCreate replicaset/image-registry-5bd87dfd7 Created pod: image-registry-5bd87dfd7-vhs2b openshift-multus 29m Normal Pulling pod/multus-additional-cni-plugins-hg7bc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" openshift-cluster-csi-drivers 29m Normal Started pod/aws-ebs-csi-driver-node-q9lmf Started container csi-node-driver-registrar openshift-image-registry 29m Normal Killing pod/image-registry-55b7d998b9-pq262 Stopping container registry openshift-image-registry 29m Normal DeploymentUpdated deployment/cluster-image-registry-operator Updated Deployment.apps/image-registry -n openshift-image-registry because it changed openshift-image-registry 29m Normal ScalingReplicaSet deployment/image-registry Scaled down replica set image-registry-55b7d998b9 to 1 from 2 openshift-cluster-csi-drivers 29m Normal Pulled pod/aws-ebs-csi-driver-node-q9lmf Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" in 1.488728567s (1.488736869s including waiting) openshift-ovn-kubernetes 29m Normal Pulled pod/ovnkube-node-x8pqn Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-kube-controller-manager 29m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-239-132.ec2.internal created SCC ranges for redhat-ods-applications namespace openshift-image-registry 29m Normal SuccessfulDelete replicaset/image-registry-55b7d998b9 Deleted pod: image-registry-55b7d998b9-pq262 openshift-multus 29m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" in 1.15075042s (1.150762661s including waiting) openshift-cluster-csi-drivers 29m Normal Pulling pod/aws-ebs-csi-driver-node-q9lmf Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" openshift-multus 29m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container bond-cni-plugin openshift-kube-controller-manager 29m Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-239-132.ec2.internal created SCC ranges for redhat-ods-monitoring namespace openshift-kube-scheduler 29m Warning FailedCreatePodSandBox pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-9-ip-10-0-197-197.ec2.internal_openshift-kube-scheduler_884ce5f7-3bd5-4d8e-9ae0-2373c1062946_0(0235e4440cfa9aad45c09fbdd7fb2cbc501c933bc49bb4cf3ce4000f98436947): error adding pod openshift-kube-scheduler_revision-pruner-9-ip-10-0-197-197.ec2.internal to CNI network "multus-cni-network": plugin 
type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-scheduler/revision-pruner-9-ip-10-0-197-197.ec2.internal/884ce5f7-3bd5-4d8e-9ae0-2373c1062946]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition openshift-kube-controller-manager 29m Warning FailedCreatePodSandBox pod/installer-8-ip-10-0-197-197.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-8-ip-10-0-197-197.ec2.internal_openshift-kube-controller-manager_bf49a279-59e4-41d4-a83c-6c9c03715a3a_0(ea414c484911f5c7c9ba19316cecd68de550d846a9cbf1f51b477ddb2690b5a9): error adding pod openshift-kube-controller-manager_installer-8-ip-10-0-197-197.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-controller-manager/installer-8-ip-10-0-197-197.ec2.internal/bf49a279-59e4-41d4-a83c-6c9c03715a3a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition openshift-network-diagnostics 29m Warning FailedCreatePodSandBox pod/network-check-target-dvjbf Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-dvjbf_openshift-network-diagnostics_e7302422-b879-4439-b392-52f7371a110f_0(effa25ce2f35e4d07639d41b19bb1191938af29d95eea312eda1e4319764b726): error adding pod openshift-network-diagnostics_network-check-target-dvjbf to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-network-diagnostics/network-check-target-dvjbf/e7302422-b879-4439-b392-52f7371a110f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition openshift-kube-apiserver 29m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-validation-webhook 29m Warning FailedCreatePodSandBox pod/validation-webhook-p4gz5 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_validation-webhook-p4gz5_openshift-validation-webhook_d6da5e0e-45ce-44f4-8e29-f2792c486e1f_0(b9796de7ccd0114d58ba39270da4c23eb993075db5e759b54e93eab2b029de8f): error adding pod openshift-validation-webhook_validation-webhook-p4gz5 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-validation-webhook/validation-webhook-p4gz5/d6da5e0e-45ce-44f4-8e29-f2792c486e1f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition openshift-kube-apiserver 29m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-dns 29m Warning FailedCreatePodSandBox pod/dns-default-vlp6d Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dns-default-vlp6d_openshift-dns_09ae9b91-76a0-48d0-a54e-ead8e323dd5c_0(878a4677d375b1e5ac121dbc2b8252ca802499a978ee72a6b26b7584fdade826): error adding pod openshift-dns_dns-default-vlp6d to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-dns/dns-default-vlp6d/09ae9b91-76a0-48d0-a54e-ead8e323dd5c]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition openshift-security 29m Warning FailedCreatePodSandBox pod/audit-exporter-vscxm Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_audit-exporter-vscxm_openshift-security_f2273b0e-2a81-47a1-8be1-5c43878cf2c8_0(da2b57e796ec6f4b10da9118d46444903c9f548be125b20d38e9cfdeca1f0a8b): error adding pod openshift-security_audit-exporter-vscxm to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-security/audit-exporter-vscxm/f2273b0e-2a81-47a1-8be1-5c43878cf2c8]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition openshift-kube-scheduler 29m Warning FailedCreatePodSandBox pod/installer-9-ip-10-0-197-197.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-9-ip-10-0-197-197.ec2.internal_openshift-kube-scheduler_105e3a06-c630-4438-a42d-606ef5e1c2d2_0(6ea6de752f05f35f4522077afb5139e2f85d769b1ea86a8f887962d967755b2f): error adding pod openshift-kube-scheduler_installer-9-ip-10-0-197-197.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-scheduler/installer-9-ip-10-0-197-197.ec2.internal/105e3a06-c630-4438-a42d-606ef5e1c2d2]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition openshift-kube-apiserver 29m Warning FailedCreatePodSandBox pod/revision-pruner-12-ip-10-0-197-197.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-12-ip-10-0-197-197.ec2.internal_openshift-kube-apiserver_ae7a4d21-bf41-4567-9c64-2417c08b0657_0(d0f001a6f9378a0a788a5530d0e1e33db75b1020fd7983e41d480d64c84bb866): error adding pod openshift-kube-apiserver_revision-pruner-12-ip-10-0-197-197.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-kube-apiserver/revision-pruner-12-ip-10-0-197-197.ec2.internal/ae7a4d21-bf41-4567-9c64-2417c08b0657]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition openshift-etcd 29m Warning FailedCreatePodSandBox pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_revision-pruner-7-ip-10-0-197-197.ec2.internal_openshift-etcd_ccb147aa-18ea-4279-8380-52383aa7fe4f_0(1df598e0f4ac12314cf5d31cbe5ccc1ffdfdabd0070aecaf20d6a96af1fb01d8): error adding pod openshift-etcd_revision-pruner-7-ip-10-0-197-197.ec2.internal to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-etcd/revision-pruner-7-ip-10-0-197-197.ec2.internal/ccb147aa-18ea-4279-8380-52383aa7fe4f]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition openshift-monitoring 29m Warning FailedCreatePodSandBox pod/sre-dns-latency-exporter-62rmk Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sre-dns-latency-exporter-62rmk_openshift-monitoring_005a7b8f-38d8-4184-95a0-a8ba4d2829ca_0(bfe48c13dcccf39c3d62e2c2c633b480f29db1a71a2337c9135e9a0017ac6141): error adding pod openshift-monitoring_sre-dns-latency-exporter-62rmk to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-monitoring/sre-dns-latency-exporter-62rmk/005a7b8f-38d8-4184-95a0-a8ba4d2829ca]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition openshift-kube-controller-manager-operator 29m Normal PodCreated deployment/kube-controller-manager-operator Created Pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal -n openshift-kube-controller-manager because it was missing openshift-multus 29m Warning FailedCreatePodSandBox pod/network-metrics-daemon-9gx7g Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-9gx7g_openshift-multus_b23ed972-30f8-43e5-8acd-cc2550ae73f0_0(e221e328d45681a3a33ea036e501b3dac2ea121099574e57387eb965168c99a8): error adding pod openshift-multus_network-metrics-daemon-9gx7g to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-multus/network-metrics-daemon-9gx7g/b23ed972-30f8-43e5-8acd-cc2550ae73f0]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition openshift-multus 29m Normal Pulling pod/multus-additional-cni-plugins-hg7bc Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" openshift-multus 29m Normal Started pod/multus-additional-cni-plugins-hg7bc Started container routeoverride-cni openshift-cluster-csi-drivers 29m Normal Pulled pod/aws-ebs-csi-driver-node-q9lmf Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" in 2.833418122s (2.833425926s including waiting) openshift-etcd 29m Normal Pulled pod/etcd-ip-10-0-197-197.ec2.internal Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" in 4.038777316s (4.038785587s including waiting) openshift-etcd 29m Normal Created pod/etcd-ip-10-0-197-197.ec2.internal Created container etcd-readyz openshift-kube-scheduler 29m Normal AddedInterface pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.6/23] from ovn-kubernetes openshift-kube-scheduler 29m Normal Pulled pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-etcd 29m Normal Started pod/etcd-ip-10-0-197-197.ec2.internal Started container etcd-readyz openshift-kube-scheduler 29m Normal Pulled pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-multus 29m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container routeoverride-cni default 29m Normal ConfigDriftMonitorStarted node/ip-10-0-197-197.ec2.internal Config Drift Monitor started, watching against rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 openshift-ovn-kubernetes 29m Normal Started pod/ovnkube-node-x8pqn Started container ovn-controller openshift-multus 29m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" in 2.366649396s (2.36665826s including waiting) openshift-ovn-kubernetes 29m Normal Created pod/ovnkube-node-x8pqn Created container ovn-controller openshift-kube-apiserver 29m Normal Pulled pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-etcd 29m Normal Pulled pod/etcd-guard-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-etcd 29m Normal AddedInterface pod/etcd-guard-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.12/23] from ovn-kubernetes openshift-etcd-operator 29m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not 
fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:1.027026ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:887.443µs Error:}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" openshift-kube-controller-manager 29m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling default 29m Normal NodeDone node/ip-10-0-197-197.ec2.internal Setting node ip-10-0-197-197.ec2.internal, currentConfig rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 to Done openshift-etcd-operator 29m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:1.027026ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:887.443µs Error:}]\nClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4823875419117155993 name:\"ip-10-0-140-6.ec2.internal\" peerURLs:\"https://10.0.140.6:2380\" clientURLs:\"https://10.0.140.6:2379\" Healthy:true Took:1.027026ms Error:} {Member:ID:9529258792665464299 name:\"ip-10-0-197-197.ec2.internal\" peerURLs:\"https://10.0.197.197:2380\" clientURLs:\"https://10.0.197.197:2379\" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.197.197:2379]: context deadline exceeded} {Member:ID:14419360892373211128 name:\"ip-10-0-239-132.ec2.internal\" peerURLs:\"https://10.0.239.132:2380\" clientURLs:\"https://10.0.239.132:2379\" Healthy:true Took:887.443µs Error:}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" default 29m Normal Uncordon 
node/ip-10-0-197-197.ec2.internal Update completed for config rendered-master-d273453f5fe4894c22cd393f5c0dbfa3 and node has been uncordoned openshift-kube-scheduler 29m Normal AddedInterface pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.10/23] from ovn-kubernetes openshift-kube-apiserver 29m Normal AddedInterface pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.11/23] from ovn-kubernetes openshift-kube-controller-manager 29m Normal Created pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal Created container guard openshift-etcd 29m Normal AddedInterface pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.8/23] from ovn-kubernetes openshift-kube-controller-manager 29m Normal Started pod/installer-8-ip-10-0-197-197.ec2.internal Started container installer openshift-kube-controller-manager 29m Normal Created pod/installer-8-ip-10-0-197-197.ec2.internal Created container installer openshift-kube-scheduler 29m Normal Started pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-scheduler 29m Normal Created pod/revision-pruner-9-ip-10-0-197-197.ec2.internal Created container pruner openshift-kube-controller-manager 29m Normal Pulled pod/installer-8-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-apiserver 29m Normal Started pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Started container guard openshift-kube-controller-manager 29m Normal AddedInterface pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.13/23] from ovn-kubernetes openshift-kube-controller-manager 29m Normal Pulled pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-apiserver 29m Normal Created pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Created container guard openshift-kube-controller-manager 29m Normal Started pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal Started container guard openshift-kube-scheduler 29m Normal Started pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal Started container guard openshift-etcd 29m Normal Created pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Created container pruner openshift-etcd 29m Normal Started pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-apiserver 29m Normal Started pod/revision-pruner-12-ip-10-0-197-197.ec2.internal Started container pruner openshift-kube-apiserver 29m Normal Pulled pod/revision-pruner-12-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-etcd 29m Normal Created pod/etcd-guard-ip-10-0-197-197.ec2.internal Created container guard openshift-kube-scheduler 29m Normal Started pod/installer-9-ip-10-0-197-197.ec2.internal Started container installer openshift-kube-scheduler 29m Normal AddedInterface pod/installer-9-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.5/23] from ovn-kubernetes openshift-kube-scheduler 29m Normal Pulled pod/installer-9-ip-10-0-197-197.ec2.internal Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-apiserver 29m Normal AddedInterface pod/revision-pruner-12-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.7/23] from ovn-kubernetes openshift-kube-scheduler 29m Normal Created pod/installer-9-ip-10-0-197-197.ec2.internal Created container installer openshift-etcd 29m Normal Pulled pod/revision-pruner-7-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5f169ad45e402da4067e53b4821e0ff888012716b072013145cf3d09d14fa88" already present on machine openshift-kube-scheduler 29m Normal Created pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal Created container guard openshift-kube-apiserver 29m Normal Created pod/revision-pruner-12-ip-10-0-197-197.ec2.internal Created container pruner openshift-etcd 29m Normal Started pod/etcd-guard-ip-10-0-197-197.ec2.internal Started container guard openshift-kube-controller-manager 29m Normal AddedInterface pod/installer-8-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.9/23] from ovn-kubernetes openshift-multus 28m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" in 1.933661677s (1.933672359s including waiting) default 28m Normal RenderedConfigGenerated machineconfigpool/master rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 successfully generated (release version: 4.13.0-rc.0, controller version: 40575b862f7bd42a2c40c8e6b7203cd4c29b0021) default 28m Normal RenderedConfigGenerated machineconfigpool/worker rendered-worker-b75428ccc32943c30e9e5b63da3f059e successfully generated (release version: 4.13.0-rc.0, controller version: 40575b862f7bd42a2c40c8e6b7203cd4c29b0021) openshift-kube-controller-manager-operator 28m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is waiting: ImagePullBackOff: Back-off pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9\"\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is waiting: ImagePullBackOff: Back-off pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865\"\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is waiting: ImagePullBackOff: Back-off pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f\"\nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is waiting: ImagePullBackOff: Back-off pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f\"\nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-multus 28m Normal Started pod/multus-additional-cni-plugins-hg7bc Started container whereabouts-cni-bincopy 
openshift-multus 28m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container whereabouts-cni-bincopy openshift-multus 28m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine openshift-multus 28m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container whereabouts-cni openshift-multus 28m Normal Started pod/multus-additional-cni-plugins-hg7bc Started container whereabouts-cni default 28m Normal SetDesiredConfig machineconfigpool/master Targeted node ip-10-0-239-132.ec2.internal to config rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 openshift-multus 28m Normal Pulled pod/multus-additional-cni-plugins-hg7bc Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine openshift-multus 28m Normal Created pod/multus-additional-cni-plugins-hg7bc Created container kube-multus-additional-cni-plugins default 28m Normal AnnotationChange machineconfigpool/master Node ip-10-0-239-132.ec2.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 openshift-kube-apiserver-operator 28m Normal NodeCurrentRevisionChanged deployment/kube-apiserver-operator Updated node "ip-10-0-140-6.ec2.internal" from revision 9 to 12 because static pod is ready openshift-kube-apiserver-operator 28m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 9; 1 nodes are at revision 11; 1 nodes are at revision 12" to "NodeInstallerProgressing: 1 nodes are at revision 11; 2 nodes are at revision 12",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 9; 1 nodes are at revision 11; 1 nodes are at revision 12" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 11; 2 nodes are at revision 12" default 28m Normal AnnotationChange machineconfigpool/master Node ip-10-0-239-132.ec2.internal now has machineconfiguration.openshift.io/state=Working default 28m Normal ConfigDriftMonitorStopped node/ip-10-0-239-132.ec2.internal Config Drift Monitor stopped openshift-security 28m Normal Pulling pod/audit-exporter-vscxm Pulling image "quay.io/app-sre/splunk-audit-exporter@sha256:bbca8dfd77d15c6dde3495985c1a75354ad79339ecba6820e7ceef2282422964" openshift-security 28m Normal AddedInterface pod/audit-exporter-vscxm Add eth0 [10.130.0.56/23] from ovn-kubernetes openshift-dns 28m Normal Pulling pod/dns-default-vlp6d Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" openshift-dns 28m Normal AddedInterface pod/dns-default-vlp6d Add eth0 [10.130.0.39/23] from ovn-kubernetes openshift-validation-webhook 28m Normal AddedInterface pod/validation-webhook-p4gz5 Add eth0 [10.130.0.54/23] from ovn-kubernetes openshift-security 28m Normal Pulled pod/audit-exporter-vscxm Successfully pulled image "quay.io/app-sre/splunk-audit-exporter@sha256:bbca8dfd77d15c6dde3495985c1a75354ad79339ecba6820e7ceef2282422964" in 1.934165342s (1.934178363s including waiting) default 28m Normal PendingConfig node/ip-10-0-239-132.ec2.internal Written pending config rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 default 28m Normal 
OSUpdateStarted node/ip-10-0-239-132.ec2.internal openshift-network-diagnostics 28m Normal AddedInterface pod/network-check-target-dvjbf Add eth0 [10.130.0.3/23] from ovn-kubernetes openshift-validation-webhook 28m Normal Pulling pod/validation-webhook-p4gz5 Pulling image "quay.io/app-sre/managed-cluster-validating-webhooks@sha256:3b13c3a89da30c5fbfaf7529ec3175dd43053c508d4bd09c79ef369d53ecc023" openshift-dns 28m Normal Pulled pod/dns-default-vlp6d Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a604c96952da5e59f104a305e0c8303d474e33ef345b8c534fd677f189d05299" in 1.91892121s (1.918934063s including waiting) openshift-dns 28m Normal Started pod/dns-default-vlp6d Started container dns openshift-network-diagnostics 28m Normal Pulling pod/network-check-target-dvjbf Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" openshift-dns 28m Normal Pulled pod/dns-default-vlp6d Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine default 28m Normal OSUpdateStaged node/ip-10-0-239-132.ec2.internal Changes to OS staged default 28m Normal SkipReboot node/ip-10-0-239-132.ec2.internal Config changes do not require reboot. Service crio was reloaded. openshift-dns 28m Normal Created pod/dns-default-vlp6d Created container dns openshift-multus 28m Normal Pulling pod/network-metrics-daemon-9gx7g Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" openshift-dns 28m Normal Started pod/dns-default-vlp6d Started container kube-rbac-proxy openshift-security 28m Normal Created pod/audit-exporter-vscxm Created container audit-exporter openshift-security 28m Normal Started pod/audit-exporter-vscxm Started container audit-exporter openshift-multus 28m Normal AddedInterface pod/network-metrics-daemon-9gx7g Add eth0 [10.130.0.4/23] from ovn-kubernetes openshift-monitoring 28m Normal Pulling pod/sre-dns-latency-exporter-62rmk Pulling image "quay.io/app-sre/managed-prometheus-exporter-base:latest" openshift-dns 28m Normal Created pod/dns-default-vlp6d Created container kube-rbac-proxy openshift-monitoring 28m Normal AddedInterface pod/sre-dns-latency-exporter-62rmk Add eth0 [10.130.0.55/23] from ovn-kubernetes openshift-multus 28m Normal Started pod/network-metrics-daemon-9gx7g Started container kube-rbac-proxy openshift-network-diagnostics 28m Normal Pulled pod/network-check-target-dvjbf Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" in 3.575417994s (3.575431777s including waiting) openshift-multus 28m Normal Created pod/network-metrics-daemon-9gx7g Created container kube-rbac-proxy openshift-network-diagnostics 28m Normal Created pod/network-check-target-dvjbf Created container network-check-target-container openshift-multus 28m Normal Started pod/network-metrics-daemon-9gx7g Started container network-metrics-daemon openshift-multus 28m Normal Created pod/network-metrics-daemon-9gx7g Created container network-metrics-daemon openshift-multus 28m Normal Pulled pod/network-metrics-daemon-9gx7g Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" in 2.658904007s (2.658922301s including waiting) openshift-validation-webhook 28m 
Normal Created pod/validation-webhook-p4gz5 Created container webhooks openshift-network-diagnostics 28m Normal Started pod/network-check-target-dvjbf Started container network-check-target-container openshift-validation-webhook 28m Normal Started pod/validation-webhook-p4gz5 Started container webhooks openshift-validation-webhook 28m Normal Pulled pod/validation-webhook-p4gz5 Successfully pulled image "quay.io/app-sre/managed-cluster-validating-webhooks@sha256:3b13c3a89da30c5fbfaf7529ec3175dd43053c508d4bd09c79ef369d53ecc023" in 4.273880236s (4.273892108s including waiting) openshift-multus 28m Normal Pulled pod/network-metrics-daemon-9gx7g Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 28m Normal Pulled pod/sre-dns-latency-exporter-62rmk Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-base:latest" in 6.019916586s (6.019934748s including waiting) openshift-monitoring 28m Normal Started pod/sre-dns-latency-exporter-62rmk Started container main openshift-monitoring 28m Normal Created pod/sre-dns-latency-exporter-62rmk Created container main openshift-monitoring 28m Normal SuccessfulCreate statefulset/prometheus-k8s create Pod prometheus-k8s-1 in StatefulSet prometheus-k8s successful default 28m Normal ConfigDriftMonitorStarted node/ip-10-0-239-132.ec2.internal Config Drift Monitor started, watching against rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 default 28m Normal NodeDone node/ip-10-0-239-132.ec2.internal Setting node ip-10-0-239-132.ec2.internal, currentConfig rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 to Done default 28m Normal Uncordon node/ip-10-0-239-132.ec2.internal Update completed for config rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 and node has been uncordoned default 28m Normal OSUpdateStarted node/ip-10-0-195-121.ec2.internal default 28m Normal Reboot node/ip-10-0-195-121.ec2.internal Node will reboot into config rendered-worker-c37c7a9e551f049d382df8406f11fe9b default 28m Normal OSUpdateStaged node/ip-10-0-195-121.ec2.internal Changes to OS staged default 28m Normal PendingConfig node/ip-10-0-195-121.ec2.internal Written pending config rendered-worker-c37c7a9e551f049d382df8406f11fe9b default 28m Normal AnnotationChange machineconfigpool/master Node ip-10-0-140-6.ec2.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 default 28m Normal SetDesiredConfig machineconfigpool/master Targeted node ip-10-0-140-6.ec2.internal to config rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 default 28m Normal ConfigDriftMonitorStopped node/ip-10-0-140-6.ec2.internal Config Drift Monitor stopped default 28m Normal AnnotationChange machineconfigpool/master Node ip-10-0-140-6.ec2.internal now has machineconfiguration.openshift.io/state=Working openshift-kube-scheduler 28m Normal Killing pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Stopping container kube-scheduler-recovery-controller openshift-kube-scheduler 28m Normal Killing pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Stopping container kube-scheduler-cert-syncer openshift-kube-scheduler 28m Normal Killing pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Stopping container kube-scheduler openshift-kube-scheduler 28m Normal StaticPodInstallerCompleted pod/installer-9-ip-10-0-197-197.ec2.internal Successfully installed revision 9 default 28m Normal OSUpdateStaged 
node/ip-10-0-140-6.ec2.internal Changes to OS staged default 28m Normal PendingConfig node/ip-10-0-140-6.ec2.internal Written pending config rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 openshift-kube-scheduler-operator 28m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: I0321 12:44:59.723798 1 base_controller.go:67] Waiting for caches to sync for CertSyncController\nStaticPodsDegraded: I0321 12:44:59.724194 1 event.go:285] Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-scheduler\", Name:\"openshift-kube-scheduler-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Warning' reason: 'FastControllerResync' Controller \"CertSyncController\" resync interval is set to 0s which might lead to client request throttling\nStaticPodsDegraded: I0321 12:44:59.724575 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: I0321 12:44:59.824502 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:44:59.824583 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:44:59.824669 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:44:59.824694 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" default 28m Normal SkipReboot node/ip-10-0-140-6.ec2.internal Config changes do not require reboot. Service crio was reloaded. 
openshift-etcd-operator 28m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" default 28m Normal OSUpdateStarted node/ip-10-0-140-6.ec2.internal openshift-etcd-operator 28m Normal OperatorStatusChanged deployment/etcd-operator Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ip-10-0-197-197.ec2.internal is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 3 members are available" openshift-kube-controller-manager 28m Normal StaticPodInstallerCompleted pod/installer-8-ip-10-0-197-197.ec2.internal Successfully installed revision 8 openshift-kube-scheduler 28m Normal LeaderElection configmap/kube-scheduler ip-10-0-239-132_1eedd215-242a-4122-83f9-62e51b6e4648 became leader openshift-kube-scheduler 28m Normal LeaderElection lease/kube-scheduler ip-10-0-239-132_1eedd215-242a-4122-83f9-62e51b6e4648 became leader openshift-image-registry 28m Normal Pulled pod/image-registry-5bd87dfd7-4rptq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" already present on machine openshift-authentication-operator 28m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-74455c7c5-6zb4s pod)" openshift-authentication 28m Normal AddedInterface pod/oauth-openshift-85644d984b-5jmpn Add eth0 [10.130.0.17/23] from ovn-kubernetes openshift-route-controller-manager 28m Normal Pulling pod/route-controller-manager-6594987c6f-q7rdv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" openshift-authentication 28m Normal Pulling pod/oauth-openshift-85644d984b-5jmpn Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" openshift-monitoring 28m Normal AddedInterface pod/osd-rebalance-infra-nodes-27990045-mxscc Add eth0 [10.129.2.7/23] from ovn-kubernetes openshift-controller-manager 28m Normal AddedInterface pod/controller-manager-66b447958d-w97xv Add eth0 [10.130.0.18/23] from ovn-kubernetes openshift-apiserver 28m Normal Pulling pod/apiserver-5f568869f-kw7fx Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" openshift-apiserver 28m Normal AddedInterface pod/apiserver-5f568869f-kw7fx Add eth0 [10.130.0.16/23] from ovn-kubernetes openshift-image-registry 28m Normal AddedInterface pod/image-registry-5bd87dfd7-vhs2b Add eth0 [10.129.2.3/23] from ovn-kubernetes openshift-route-controller-manager 28m Normal AddedInterface pod/route-controller-manager-6594987c6f-q7rdv Add eth0 [10.130.0.14/23] from ovn-kubernetes 
openshift-monitoring 28m Normal Pulling pod/osd-rebalance-infra-nodes-27990045-mxscc Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" openshift-image-registry 28m Normal AddedInterface pod/image-registry-5bd87dfd7-4rptq Add eth0 [10.131.0.17/23] from ovn-kubernetes openshift-oauth-apiserver 28m Normal AddedInterface pod/apiserver-74455c7c5-6zb4s Add eth0 [10.130.0.15/23] from ovn-kubernetes openshift-image-registry 28m Normal Created pod/image-registry-5bd87dfd7-vhs2b Created container registry openshift-image-registry 28m Normal Pulled pod/image-registry-5bd87dfd7-vhs2b Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" already present on machine openshift-oauth-apiserver 28m Normal Pulling pod/apiserver-74455c7c5-6zb4s Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" openshift-image-registry 28m Normal Created pod/image-registry-5bd87dfd7-4rptq Created container registry openshift-image-registry 28m Normal Started pod/image-registry-5bd87dfd7-4rptq Started container registry openshift-operator-lifecycle-manager 28m Normal AddedInterface pod/collect-profiles-27990045-xf7fw Add eth0 [10.128.2.3/23] from ovn-kubernetes openshift-controller-manager 28m Normal Pulling pod/controller-manager-66b447958d-w97xv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" openshift-operator-lifecycle-manager 28m Normal Pulling pod/collect-profiles-27990045-xf7fw Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" openshift-kube-controller-manager 28m Normal LeaderElection configmap/cert-recovery-controller-lock ip-10-0-239-132_af359a50-9614-4f5d-84b2-d7b242b238a0 became leader openshift-image-registry 28m Normal Started pod/image-registry-5bd87dfd7-vhs2b Started container registry openshift-kube-controller-manager 28m Normal LeaderElection lease/cert-recovery-controller-lock ip-10-0-239-132_af359a50-9614-4f5d-84b2-d7b242b238a0 became leader openshift-monitoring 28m Normal Pulled pod/osd-rebalance-infra-nodes-27990045-mxscc Successfully pulled image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" in 332.483654ms (332.497911ms including waiting) openshift-monitoring 28m Normal Created pod/osd-rebalance-infra-nodes-27990045-mxscc Created container osd-rebalance-infra-nodes openshift-monitoring 28m Normal Started pod/osd-rebalance-infra-nodes-27990045-mxscc Started container osd-rebalance-infra-nodes openshift-apiserver-operator 28m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-5f568869f-kw7fx pod)" openshift-kube-controller-manager-operator 28m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container 
\"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: I0321 12:45:04.798982 1 base_controller.go:67] Waiting for caches to sync for CertSyncController\nStaticPodsDegraded: I0321 12:45:04.799121 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: I0321 12:45:04.799135 1 event.go:285] Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-controller-manager\", Name:\"kube-controller-manager-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Warning' reason: 'FastControllerResync' Controller \"CertSyncController\" resync interval is set to 0s which might lead to client request throttling\nStaticPodsDegraded: I0321 12:45:04.899213 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:45:04.899245 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:45:04.899323 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:45:04.899691 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:45:25.588669 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:45:25.588922 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" openshift-monitoring 28m Normal SawCompletedJob cronjob/osd-rebalance-infra-nodes Saw completed job: osd-rebalance-infra-nodes-27990045, status: Complete openshift-monitoring 28m Normal Completed job/osd-rebalance-infra-nodes-27990045 Job completed openshift-controller-manager 28m Normal Started pod/controller-manager-66b447958d-w97xv Started container controller-manager openshift-apiserver 28m Normal Pulled pod/apiserver-5f568869f-kw7fx Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" in 4.414457989s (4.414472717s including waiting) openshift-route-controller-manager 28m Normal Started pod/route-controller-manager-6594987c6f-q7rdv Started container route-controller-manager openshift-oauth-apiserver 28m Normal Pulled pod/apiserver-74455c7c5-6zb4s Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" in 4.484303811s (4.484313004s including waiting) openshift-route-controller-manager 28m Normal Created pod/route-controller-manager-6594987c6f-q7rdv Created container route-controller-manager openshift-authentication 28m Normal Pulled pod/oauth-openshift-85644d984b-5jmpn Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" in 4.712207155s (4.712224171s 
including waiting) openshift-authentication 28m Normal Created pod/oauth-openshift-85644d984b-5jmpn Created container oauth-openshift openshift-authentication 28m Normal Started pod/oauth-openshift-85644d984b-5jmpn Started container oauth-openshift openshift-operator-lifecycle-manager 28m Normal Pulled pod/collect-profiles-27990045-xf7fw Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" in 4.897487987s (4.897494694s including waiting) openshift-controller-manager 28m Normal Pulled pod/controller-manager-66b447958d-w97xv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:03d999968e93d76624e3d10f22591abfbfdaaf2dd8f315abf64cdc590ab64195" in 4.711760592s (4.71177264s including waiting) openshift-apiserver 28m Normal Created pod/apiserver-5f568869f-kw7fx Created container fix-audit-permissions openshift-oauth-apiserver 28m Normal Started pod/apiserver-74455c7c5-6zb4s Started container fix-audit-permissions openshift-operator-lifecycle-manager 28m Normal Created pod/collect-profiles-27990045-xf7fw Created container collect-profiles openshift-operator-lifecycle-manager 28m Normal Started pod/collect-profiles-27990045-xf7fw Started container collect-profiles openshift-apiserver 28m Normal Started pod/apiserver-5f568869f-kw7fx Started container fix-audit-permissions openshift-route-controller-manager 28m Normal Pulled pod/route-controller-manager-6594987c6f-q7rdv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:884971410b4691a0666287eb389e07a425040d4fc90665c864aa9f4728d4599c" in 4.732631084s (4.732640119s including waiting) openshift-controller-manager 28m Normal Created pod/controller-manager-66b447958d-w97xv Created container controller-manager openshift-oauth-apiserver 28m Normal Created pod/apiserver-74455c7c5-6zb4s Created container fix-audit-permissions openshift-apiserver 28m Normal Started pod/apiserver-5f568869f-kw7fx Started container openshift-apiserver-check-endpoints openshift-apiserver 28m Normal Pulled pod/apiserver-5f568869f-kw7fx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:558f0eb88863582cd8532a701904559e7d9be443b2eefc4688e2c0fc04cae08d" already present on machine openshift-oauth-apiserver 28m Normal Started pod/apiserver-74455c7c5-6zb4s Started container oauth-apiserver openshift-oauth-apiserver 28m Normal Created pod/apiserver-74455c7c5-6zb4s Created container oauth-apiserver openshift-kube-apiserver-operator 28m Normal NodeTargetRevisionChanged deployment/kube-apiserver-operator Updating node "ip-10-0-197-197.ec2.internal" from revision 11 to 12 because node ip-10-0-197-197.ec2.internal with revision 11 is the oldest openshift-oauth-apiserver 28m Normal Pulled pod/apiserver-74455c7c5-6zb4s Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0755461bb6ec987da5c44b85947d02eba8df0b0816923f397cee3f235303a74d" already present on machine openshift-authentication-operator 28m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-74455c7c5-6zb4s pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in pending apiserver-74455c7c5-6zb4s pod)" 
openshift-apiserver 28m Normal Created pod/apiserver-5f568869f-kw7fx Created container openshift-apiserver openshift-apiserver 28m Normal Pulled pod/apiserver-5f568869f-kw7fx Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-apiserver 28m Normal Created pod/apiserver-5f568869f-kw7fx Created container openshift-apiserver-check-endpoints openshift-apiserver 28m Normal Started pod/apiserver-5f568869f-kw7fx Started container openshift-apiserver openshift-authentication 28m Normal Killing pod/oauth-openshift-5c9d8ccbcc-bkr8m Stopping container oauth-openshift openshift-authentication 28m Normal ScalingReplicaSet deployment/oauth-openshift Scaled up replica set oauth-openshift-85644d984b to 3 from 2 openshift-authentication 28m Normal ScalingReplicaSet deployment/oauth-openshift Scaled down replica set oauth-openshift-5c9d8ccbcc to 0 from 1 openshift-apiserver-operator 28m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-5f568869f-kw7fx pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-5f568869f-kw7fx pod)" openshift-authentication 28m Normal SuccessfulDelete replicaset/oauth-openshift-5c9d8ccbcc Deleted pod: oauth-openshift-5c9d8ccbcc-bkr8m openshift-authentication 28m Normal SuccessfulCreate replicaset/oauth-openshift-85644d984b Created pod: oauth-openshift-85644d984b-2d8rq openshift-apiserver 28m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-apiserver 28m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 28m Warning Unhealthy pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:10257/healthz": dial tcp 10.0.197.197:10257: connect: connection refused openshift-kube-controller-manager 28m Warning ProbeError pod/kube-controller-manager-guard-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:10257/healthz": dial tcp 10.0.197.197:10257: connect: connection refused... 
default 28m Normal ConfigDriftMonitorStarted node/ip-10-0-140-6.ec2.internal Config Drift Monitor started, watching against rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 default 28m Normal NodeDone node/ip-10-0-140-6.ec2.internal Setting node ip-10-0-140-6.ec2.internal, currentConfig rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 to Done default 28m Normal Uncordon node/ip-10-0-140-6.ec2.internal Update completed for config rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 and node has been uncordoned openshift-kube-controller-manager 28m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 28m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-controller-manager 28m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:40fec19020202ba3b7f09460bc0cb52fc001941fbb3a3fa7d1ca34a2e9ebf0e9" already present on machine openshift-kube-controller-manager 28m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container cluster-policy-controller openshift-kube-controller-manager 28m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container cluster-policy-controller openshift-kube-controller-manager 28m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container kube-controller-manager openshift-kube-controller-manager 28m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container kube-controller-manager openshift-kube-controller-manager 28m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container kube-controller-manager-cert-syncer openshift-operator-lifecycle-manager 28m Normal SawCompletedJob cronjob/collect-profiles Saw completed job: collect-profiles-27990045, status: Complete openshift-kube-controller-manager 28m Normal Pulled pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23c30c160f461075478682516ada5b744edc1a68f24b89b7066693afccf3333f" already present on machine openshift-kube-controller-manager 28m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container kube-controller-manager-cert-syncer openshift-kube-controller-manager 28m Warning FastControllerResync pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-controller-manager 28m Normal Created pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Created container kube-controller-manager-recovery-controller openshift-kube-controller-manager 28m Warning ClusterInfrastructureStatus pod/kube-controller-manager-ip-10-0-197-197.ec2.internal unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope openshift-operator-lifecycle-manager 28m 
Normal Completed job/collect-profiles-27990045 Job completed openshift-kube-controller-manager 28m Normal Started pod/kube-controller-manager-ip-10-0-197-197.ec2.internal Started container kube-controller-manager-recovery-controller openshift-authentication-operator 28m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in pending apiserver-74455c7c5-6zb4s pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-74455c7c5-6zb4s pod)" openshift-kube-controller-manager-operator 28m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"cluster-policy-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-cert-syncer\" is terminated: Error: I0321 12:45:04.798982 1 base_controller.go:67] Waiting for caches to sync for CertSyncController\nStaticPodsDegraded: I0321 12:45:04.799121 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: I0321 12:45:04.799135 1 event.go:285] Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-controller-manager\", Name:\"kube-controller-manager-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Warning' reason: 'FastControllerResync' Controller \"CertSyncController\" resync interval is set to 0s which might lead to client request throttling\nStaticPodsDegraded: I0321 12:45:04.899213 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:45:04.899245 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:45:04.899323 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:45:04.899691 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: I0321 12:45:25.588669 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true}]\nStaticPodsDegraded: I0321 12:45:25.588922 1 certsync_controller.go:170] Syncing secrets: [{kube-controller-manager-client-cert-key false} {csr-signer false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-controller-manager-ip-10-0-197-197.ec2.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-authentication-operator 28m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in 
apiserver-74455c7c5-6zb4s pod)" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-74455c7c5-6zb4s pod)\nOAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") openshift-kube-apiserver-operator 28m Normal PodCreated deployment/kube-apiserver-operator Created Pod/installer-12-ip-10-0-197-197.ec2.internal -n openshift-kube-apiserver because it was missing openshift-apiserver-operator 28m Normal OperatorStatusChanged deployment/openshift-apiserver-operator Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-5f568869f-kw7fx pod)" to "All is well" openshift-kube-apiserver 28m Normal Created pod/installer-12-ip-10-0-197-197.ec2.internal Created container installer openshift-kube-apiserver 28m Normal AddedInterface pod/installer-12-ip-10-0-197-197.ec2.internal Add eth0 [10.130.0.19/23] from ovn-kubernetes openshift-kube-apiserver 28m Normal Pulled pod/installer-12-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine default 28m Normal NodeNotSchedulable node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal status is now: NodeNotSchedulable openshift-kube-apiserver 28m Normal Started pod/installer-12-ip-10-0-197-197.ec2.internal Started container installer default 28m Normal NodeAllocatableEnforced node/ip-10-0-195-121.ec2.internal Updated Node Allocatable limit across pods default 28m Normal NodeHasNoDiskPressure node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal status is now: NodeHasNoDiskPressure default 28m Normal NodeNotReady node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal status is now: NodeNotReady default 28m Warning Rebooted node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal has been rebooted, boot id: dfd84685-5ef7-41f7-826f-3bd9326557e6 default 28m Normal NodeHasSufficientPID node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal status is now: NodeHasSufficientPID default 28m Normal Starting node/ip-10-0-195-121.ec2.internal Starting kubelet. 
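The node events just above show ip-10-0-195-121.ec2.internal rebooting and cycling through NodeNotReady, NodeHasNoDiskPressure, NodeHasSufficientPID, and "Starting kubelet" before it returns to Ready. A minimal sketch (assuming the official Python "kubernetes" client and a kubeconfig that can read nodes) of pulling those same node conditions straight from the API instead of from the event stream:

    # Sketch: read the conditions of the rebooted worker node via the Kubernetes API.
    # Assumes the official "kubernetes" Python client and read access to nodes.
    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    node = v1.read_node("ip-10-0-195-121.ec2.internal")
    print("unschedulable:", bool(node.spec.unschedulable))
    for cond in node.status.conditions:
        # Ready / MemoryPressure / DiskPressure / PIDPressure mirror the Node* events above.
        print(f"{cond.type:<16} {cond.status:<6} {cond.reason} ({cond.last_transition_time})")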
default 28m Normal NodeHasSufficientMemory node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal status is now: NodeHasSufficientMemory openshift-kube-scheduler-operator 28m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: I0321 12:44:59.723798 1 base_controller.go:67] Waiting for caches to sync for CertSyncController\nStaticPodsDegraded: I0321 12:44:59.724194 1 event.go:285] Event(v1.ObjectReference{Kind:\"Pod\", Namespace:\"openshift-kube-scheduler\", Name:\"openshift-kube-scheduler-ip-10-0-197-197.ec2.internal\", UID:\"\", APIVersion:\"v1\", ResourceVersion:\"\", FieldPath:\"\"}): type: 'Warning' reason: 'FastControllerResync' Controller \"CertSyncController\" resync interval is set to 0s which might lead to client request throttling\nStaticPodsDegraded: I0321 12:44:59.724575 1 observer_polling.go:159] Starting file observer\nStaticPodsDegraded: I0321 12:44:59.824502 1 base_controller.go:73] Caches are synced for CertSyncController \nStaticPodsDegraded: I0321 12:44:59.824583 1 base_controller.go:110] Starting #1 worker of CertSyncController controller ...\nStaticPodsDegraded: I0321 12:44:59.824669 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:44:59.824694 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" default 28m Normal AnnotationChange machineconfigpool/master Node ip-10-0-197-197.ec2.internal now has machineconfiguration.openshift.io/desiredConfig=rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 default 28m Normal SetDesiredConfig machineconfigpool/master Targeted node ip-10-0-197-197.ec2.internal to config rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 default 28m Normal NodeReady node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal status is now: NodeReady openshift-machine-api 28m Normal DetectedUnhealthy machine/qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2 Machine openshift-machine-api/srep-infra-healthcheck/qeaisrhods-c13-28wr5-infra-us-east-1a-54lb2/ip-10-0-195-121.ec2.internal has unhealthy node ip-10-0-195-121.ec2.internal openshift-cluster-storage-operator 28m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing changed from False to True ("AWSEBSCSIDriverOperatorCRProgressing: AWSEBSDriverNodeServiceControllerProgressing: Waiting for DaemonSet to deploy node pods") default 28m Normal AnnotationChange machineconfigpool/master Node ip-10-0-197-197.ec2.internal now has machineconfiguration.openshift.io/state=Working default 28m Normal ConfigDriftMonitorStopped node/ip-10-0-197-197.ec2.internal Config Drift Monitor stopped openshift-kube-scheduler 28m Normal LeaderElection lease/cert-recovery-controller-lock ip-10-0-239-132_0221334d-e17d-430a-b461-5771c784490b became leader openshift-kube-scheduler 28m Normal LeaderElection 
configmap/cert-recovery-controller-lock ip-10-0-239-132_0221334d-e17d-430a-b461-5771c784490b became leader openshift-authentication-operator 28m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-74455c7c5-6zb4s pod)\nOAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()" to "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()" default 28m Normal OSUpdateStarted node/ip-10-0-197-197.ec2.internal default 28m Normal SkipReboot node/ip-10-0-197-197.ec2.internal Config changes do not require reboot. Service crio was reloaded. default 28m Normal PendingConfig node/ip-10-0-197-197.ec2.internal Written pending config rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 default 28m Normal OSUpdateStaged node/ip-10-0-197-197.ec2.internal Changes to OS staged openshift-image-registry 28m Normal SuccessfulDelete replicaset/image-registry-55b7d998b9 Deleted pod: image-registry-55b7d998b9-pf4xh openshift-image-registry 28m Normal ScalingReplicaSet deployment/image-registry Scaled down replica set image-registry-55b7d998b9 to 0 from 1 openshift-image-registry 28m Normal Killing pod/image-registry-55b7d998b9-pf4xh Stopping container registry openshift-kube-apiserver 28m Normal LeaderElection lease/cert-regeneration-controller-lock ip-10-0-239-132_0744e194-f5d3-4efb-ad2e-feedea97c244 became leader openshift-kube-controller-manager-operator 28m Normal NodeCurrentRevisionChanged deployment/kube-controller-manager-operator Updated node "ip-10-0-197-197.ec2.internal" from revision 7 to 8 because static pod is ready openshift-kube-controller-manager-operator 28m Normal OperatorStatusChanged deployment/kube-controller-manager-operator Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 8"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 7; 2 nodes are at revision 8" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 8" default 27m Normal Uncordon node/ip-10-0-197-197.ec2.internal Update completed for config rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 and node has been uncordoned default 27m Normal NodeDone node/ip-10-0-197-197.ec2.internal Setting node ip-10-0-197-197.ec2.internal, currentConfig rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 to Done default 27m Normal ConfigDriftMonitorStarted node/ip-10-0-197-197.ec2.internal Config Drift Monitor started, watching against rendered-master-838f9e3f5e3c5d8294dd4171e78b3c53 openshift-authentication 27m Normal AddedInterface pod/oauth-openshift-85644d984b-2d8rq Add eth0 [10.129.0.5/23] from ovn-kubernetes openshift-authentication 27m Normal Created pod/oauth-openshift-85644d984b-2d8rq Created container oauth-openshift openshift-authentication 27m Normal Started pod/oauth-openshift-85644d984b-2d8rq Started container oauth-openshift openshift-authentication 27m Normal Pulled pod/oauth-openshift-85644d984b-2d8rq Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2156db516db2fda43363a063c11f91bd201dde2de7eb4fbab9c1210d17e699ba" already present on machine openshift-kube-scheduler 27m Warning Unhealthy 
pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal Readiness probe failed: Get "https://10.0.197.197:10259/healthz": dial tcp 10.0.197.197:10259: connect: connection refused openshift-kube-scheduler 27m Warning ProbeError pod/openshift-kube-scheduler-guard-ip-10-0-197-197.ec2.internal Readiness probe error: Get "https://10.0.197.197:10259/healthz": dial tcp 10.0.197.197:10259: connect: connection refused... openshift-monitoring 27m Normal Pulling pod/node-exporter-sn6ks Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" openshift-cluster-node-tuning-operator 27m Normal Pulling pod/tuned-nhvkp Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" openshift-machine-config-operator 27m Normal Pulling pod/machine-config-daemon-tpglq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" openshift-ovn-kubernetes 27m Normal Pulling pod/ovnkube-node-6jsx2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" openshift-multus 27m Normal Pulling pod/multus-db5qv Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" openshift-cluster-csi-drivers 27m Normal Pulling pod/aws-ebs-csi-driver-node-r2n4w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" openshift-multus 27m Normal Pulling pod/multus-additional-cni-plugins-x8r6f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" openshift-dns 27m Normal Pulling pod/node-resolver-njmd5 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" openshift-image-registry 27m Normal Pulling pod/node-ca-fg6h6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" openshift-kube-scheduler 27m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 27m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container wait-for-host-port openshift-kube-scheduler 27m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container wait-for-host-port openshift-kube-scheduler 27m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container kube-scheduler-cert-syncer openshift-kube-scheduler 27m Warning FastControllerResync pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler 27m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 27m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 27m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container kube-scheduler openshift-kube-scheduler 27m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container kube-scheduler openshift-kube-scheduler 27m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 27m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container kube-scheduler-cert-syncer openshift-kube-scheduler 27m Normal Started pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Started container kube-scheduler-recovery-controller openshift-kube-scheduler 27m Normal Created pod/openshift-kube-scheduler-ip-10-0-197-197.ec2.internal Created container kube-scheduler-recovery-controller openshift-monitoring 27m Normal Pulled pod/node-exporter-sn6ks Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" in 3.794831897s (3.794920998s including waiting) openshift-image-registry 27m Warning Unhealthy pod/image-registry-55b7d998b9-pf4xh Readiness probe failed: Get "https://10.128.2.11:5000/healthz": dial tcp 10.128.2.11:5000: connect: connection refused openshift-image-registry 27m Warning ProbeError pod/image-registry-55b7d998b9-pf4xh Readiness probe error: Get "https://10.128.2.11:5000/healthz": dial tcp 10.128.2.11:5000: connect: connection refused... 
openshift-monitoring 27m Normal Started pod/node-exporter-sn6ks Started container init-textfile openshift-monitoring 27m Normal Created pod/node-exporter-sn6ks Created container init-textfile openshift-authentication-operator 27m Normal OperatorStatusChanged deployment/authentication-operator Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 3 requested instances are unavailable for oauth-openshift.openshift-authentication ()" to "All is well" openshift-monitoring 27m Normal Pulled pod/node-exporter-sn6ks Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f4a21a1dba07ad90fdf2a59a4462e11e25567c6a30bdde0a1eb837e44c5573e" already present on machine openshift-kube-apiserver 27m Normal StaticPodInstallerCompleted pod/installer-12-ip-10-0-197-197.ec2.internal Successfully installed revision 12 openshift-kube-apiserver 27m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-cert-syncer openshift-kube-apiserver 27m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-insecure-readyz openshift-kube-apiserver 27m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver openshift-kube-apiserver 27m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-check-endpoints openshift-kube-apiserver 27m Normal Killing pod/kube-apiserver-ip-10-0-197-197.ec2.internal Stopping container kube-apiserver-cert-regeneration-controller default 27m Warning ResolutionFailed namespace/openshift-must-gather-operator constraints not satisfiable: subscription must-gather-operator exists, no operators found from catalog must-gather-operator-registry in namespace openshift-must-gather-operator referenced by subscription must-gather-operator openshift-image-registry 27m Normal Pulled pod/node-ca-fg6h6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c31fa588cb4f0e3350b4642471bf1615b7e3f85d138bdb880d6724ba4caba0d7" in 18.379093904s (18.379105241s including waiting) openshift-monitoring 27m Normal Started pod/node-exporter-sn6ks Started container node-exporter openshift-cluster-csi-drivers 27m Normal Pulled pod/aws-ebs-csi-driver-node-r2n4w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:57f524dade9dfab2170d530abeba6be1c2d65e5651ac927307376358572b2140" in 19.192487366s (19.192493384s including waiting) openshift-multus 27m Normal Pulled pod/multus-db5qv Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" in 19.185377042s (19.185382314s including waiting) openshift-monitoring 27m Normal Created pod/node-exporter-sn6ks Created container node-exporter openshift-machine-config-operator 27m Normal Pulled pod/machine-config-daemon-tpglq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:438b7282ad388ca861674acf7331b7bf0d53b78112dbb12c964e9f5605e0b3f6" in 19.187656992s (19.187663354s including waiting) openshift-monitoring 27m Normal Pulling pod/node-exporter-sn6ks Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" default 27m Warning ResolutionFailed namespace/openshift-deployment-validation-operator constraints not satisfiable: subscription deployment-validation-operator exists, no operators found from 
catalog deployment-validation-operator-catalog in namespace openshift-deployment-validation-operator referenced by subscription deployment-validation-operator openshift-cluster-node-tuning-operator 27m Normal Started pod/tuned-nhvkp Started container tuned openshift-cluster-node-tuning-operator 27m Normal Created pod/tuned-nhvkp Created container tuned openshift-cluster-node-tuning-operator 27m Normal Pulled pod/tuned-nhvkp Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3f88ca4f79ec5b7e3a2323f32da8564c02bbd269ff1e87ad3ad602df95a8106" in 23.035575054s (23.035605118s including waiting) openshift-multus 27m Normal Pulled pod/multus-additional-cni-plugins-x8r6f Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a14e0c908f369adb40d2902d6f331cab420ad5928887f8de7cd082057a2f633" in 23.009438702s (23.009446067s including waiting) openshift-dns 27m Normal Pulled pod/node-resolver-njmd5 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" in 23.038330627s (23.03836104s including waiting) openshift-machine-config-operator 27m Normal Pulling pod/machine-config-daemon-tpglq Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" openshift-ovn-kubernetes 27m Normal Pulled pod/ovnkube-node-6jsx2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" in 23.173541233s (23.173559208s including waiting) openshift-machine-config-operator 27m Normal Started pod/machine-config-daemon-tpglq Started container machine-config-daemon openshift-machine-config-operator 27m Normal Created pod/machine-config-daemon-tpglq Created container machine-config-daemon openshift-ovn-kubernetes 27m Normal Pulling pod/ovnkube-node-6jsx2 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" openshift-multus 27m Normal Created pod/multus-db5qv Created container kube-multus openshift-dns 27m Normal Started pod/node-resolver-njmd5 Started container dns-node-resolver openshift-cluster-csi-drivers 27m Normal Pulling pod/aws-ebs-csi-driver-node-r2n4w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" openshift-dns 27m Normal Created pod/node-resolver-njmd5 Created container dns-node-resolver openshift-ovn-kubernetes 27m Normal Pulled pod/ovnkube-node-6jsx2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-image-registry 27m Normal Created pod/node-ca-fg6h6 Created container node-ca openshift-multus 27m Normal Started pod/multus-additional-cni-plugins-x8r6f Started container egress-router-binary-copy openshift-image-registry 27m Normal Started pod/node-ca-fg6h6 Started container node-ca openshift-multus 27m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container egress-router-binary-copy openshift-cluster-csi-drivers 27m Normal Started pod/aws-ebs-csi-driver-node-r2n4w Started container csi-driver openshift-cluster-csi-drivers 27m Normal Created pod/aws-ebs-csi-driver-node-r2n4w Created container csi-driver openshift-ovn-kubernetes 27m Normal Created pod/ovnkube-node-6jsx2 Created container ovn-acl-logging 
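The two ResolutionFailed warnings above (must-gather-operator and deployment-validation-operator) both say OLM found no operators in the catalog referenced by the subscription. A hedged sketch of inspecting those Subscription objects through the CustomObjectsApi; the group/version/plural are the standard OLM ones (operators.coreos.com/v1alpha1, subscriptions), and the status fields are read defensively because their exact shape is not shown in these events:

    # Sketch: inspect OLM Subscriptions whose catalog resolution is failing.
    # Assumes the official "kubernetes" Python client and the standard OLM CRDs
    # (operators.coreos.com/v1alpha1); status layout is read defensively.
    from kubernetes import client, config

    config.load_kube_config()
    crd = client.CustomObjectsApi()

    for ns in ("openshift-must-gather-operator", "openshift-deployment-validation-operator"):
        subs = crd.list_namespaced_custom_object(
            group="operators.coreos.com", version="v1alpha1",
            namespace=ns, plural="subscriptions")
        for sub in subs.get("items", []):
            spec = sub.get("spec", {})
            print(ns, sub["metadata"]["name"],
                  "source:", spec.get("source"), "in", spec.get("sourceNamespace"))
            for cond in sub.get("status", {}).get("conditions", []):
                print("   ", cond.get("type"), cond.get("status"), cond.get("message"))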
openshift-ovn-kubernetes 27m Normal Started pod/ovnkube-node-6jsx2 Started container ovn-acl-logging openshift-multus 27m Normal Started pod/multus-db5qv Started container kube-multus openshift-ovn-kubernetes 27m Normal Pulled pod/ovnkube-node-6jsx2 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 518.394962ms (518.425421ms including waiting) openshift-machine-config-operator 27m Normal Started pod/machine-config-daemon-tpglq Started container oauth-proxy openshift-machine-config-operator 27m Normal Created pod/machine-config-daemon-tpglq Created container oauth-proxy openshift-cluster-csi-drivers 27m Normal Pulled pod/aws-ebs-csi-driver-node-r2n4w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f8022fba7aebc67e52a69bf57825548b74484dea32dd6d5758fc1f592e6dc821" in 1.290117133s (1.290131387s including waiting) openshift-ovn-kubernetes 27m Normal Created pod/ovnkube-node-6jsx2 Created container kube-rbac-proxy openshift-ovn-kubernetes 27m Normal Started pod/ovnkube-node-6jsx2 Started container kube-rbac-proxy openshift-machine-config-operator 27m Normal Pulled pod/machine-config-daemon-tpglq Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" in 2.186939554s (2.186948294s including waiting) openshift-ovn-kubernetes 27m Normal Pulled pod/ovnkube-node-6jsx2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-ovn-kubernetes 27m Normal Started pod/ovnkube-node-6jsx2 Started container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 27m Normal Created pod/ovnkube-node-6jsx2 Created container kube-rbac-proxy-ovn-metrics openshift-ovn-kubernetes 27m Normal Pulled pod/ovnkube-node-6jsx2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 27m Normal Pulled pod/node-exporter-sn6ks Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" in 5.597418962s (5.597447989s including waiting) openshift-monitoring 27m Normal Created pod/node-exporter-sn6ks Created container kube-rbac-proxy openshift-multus 27m Normal Pulling pod/multus-additional-cni-plugins-x8r6f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" openshift-monitoring 27m Normal Started pod/node-exporter-sn6ks Started container kube-rbac-proxy openshift-kube-apiserver-operator 27m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:44:28 +0000 UTC is still not ready" openshift-kube-apiserver-operator 27m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container 
\"kube-apiserver\" started at 2023-03-21 12:44:28 +0000 UTC is still not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:44:28 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:45:00 +0000 UTC is still not ready" openshift-cluster-csi-drivers 26m Normal Pulling pod/aws-ebs-csi-driver-node-r2n4w Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" openshift-cluster-csi-drivers 26m Normal Started pod/aws-ebs-csi-driver-node-r2n4w Started container csi-node-driver-registrar openshift-cluster-csi-drivers 26m Normal Created pod/aws-ebs-csi-driver-node-r2n4w Created container csi-node-driver-registrar openshift-multus 26m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container cni-plugins openshift-cluster-csi-drivers 26m Normal Pulled pod/aws-ebs-csi-driver-node-r2n4w Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fbffd94ea750eac9dcef360d12a6ca41746915f50f44fb7ed2be1a71c128487" in 2.080444952s (2.080458234s including waiting) openshift-ovn-kubernetes 26m Normal Created pod/ovnkube-node-6jsx2 Created container ovnkube-node openshift-ovn-kubernetes 26m Normal Started pod/ovnkube-node-6jsx2 Started container ovnkube-node openshift-multus 26m Normal Started pod/multus-additional-cni-plugins-x8r6f Started container cni-plugins openshift-cluster-csi-drivers 26m Normal Created pod/aws-ebs-csi-driver-node-r2n4w Created container csi-liveness-probe openshift-cluster-csi-drivers 26m Normal Started pod/aws-ebs-csi-driver-node-r2n4w Started container csi-liveness-probe openshift-multus 26m Normal Pulled pod/multus-additional-cni-plugins-x8r6f Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be088cb9218a248cd4a616132c8daf531502c7b85404aadb0b73e76f8dba01df" in 24.98435188s (24.984369788s including waiting) openshift-multus 26m Warning FailedCreatePodSandBox pod/network-metrics-daemon-qfgm8 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-metrics-daemon-qfgm8_openshift-multus_accfe2ec-f405-4ace-9e9a-93addd128129_0(442f42c0447a3ef761d7fe7c8a6e66408b995af52d06fc1ad7d56a09d7335bbc): error adding pod openshift-multus_network-metrics-daemon-qfgm8 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-multus/network-metrics-daemon-qfgm8/accfe2ec-f405-4ace-9e9a-93addd128129]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition openshift-ingress-canary 26m Warning FailedCreatePodSandBox pod/ingress-canary-xb5f7 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_ingress-canary-xb5f7_openshift-ingress-canary_a4935be3-3ac0-4ea4-b6ca-d00abe394e0b_0(64d29e4b33c1922a7f5973bb68a293a232756f468c856809014167ab25064bfd): error adding pod openshift-ingress-canary_ingress-canary-xb5f7 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-ingress-canary/ingress-canary-xb5f7/a4935be3-3ac0-4ea4-b6ca-d00abe394e0b]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition openshift-ovn-kubernetes 26m Normal Pulled pod/ovnkube-node-6jsx2 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c1300e2d232a5e06e02a291d2de9a8be62cfed1d0f25dd8f0db5c6c20aa2967" already present on machine openshift-cluster-storage-operator 26m Normal OperatorStatusChanged deployment/cluster-storage-operator Status for clusteroperator/storage changed: Progressing changed from True to False ("AWSEBSCSIDriverOperatorCRProgressing: All is well") openshift-network-diagnostics 26m Warning FailedCreatePodSandBox pod/network-check-target-trrh7 Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_network-check-target-trrh7_openshift-network-diagnostics_2098be88-9554-49ef-9f08-a9e0faaeb11a_0(bc4d2a4ff78e6c32270fdb41ca54dea5ec15c48dfb578ebebe633156790ab68e): error adding pod openshift-network-diagnostics_network-check-target-trrh7 to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-network-diagnostics/network-check-target-trrh7/2098be88-9554-49ef-9f08-a9e0faaeb11a]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition openshift-kube-scheduler-operator 26m Normal NodeCurrentRevisionChanged deployment/openshift-kube-scheduler-operator Updated node "ip-10-0-197-197.ec2.internal" from revision 7 to 9 because static pod is ready openshift-multus 26m Normal Pulled pod/multus-additional-cni-plugins-x8r6f Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" in 821.301467ms (821.318408ms including waiting) openshift-kube-scheduler-operator 26m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 7; 1 nodes are at revision 8; 1 nodes are at revision 9" to "NodeInstallerProgressing: 1 nodes are at revision 8; 2 nodes are at revision 9",Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 7; 1 nodes are at revision 8; 1 nodes are at revision 9" to "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 8; 2 nodes are at revision 9" openshift-multus 26m Normal Pulling pod/multus-additional-cni-plugins-x8r6f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56dec1af3c24148179ab45f9137ba74876f4287121f07aa32dc7a3643da749a9" openshift-ovn-kubernetes 26m Normal Created pod/ovnkube-node-6jsx2 Created container ovn-controller openshift-ovn-kubernetes 26m Normal Started pod/ovnkube-node-6jsx2 Started container ovn-controller openshift-monitoring 26m Warning FailedCreatePodSandBox pod/sre-dns-latency-exporter-v8kzl Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_sre-dns-latency-exporter-v8kzl_openshift-monitoring_4d4600b4-ff80-4dbb-bbfb-a9f596c99fdb_0(c9eb0e04d575d42624fc3162fb0c6ffdac6caadf9fb40ada0e8de008d48df3de): error adding pod openshift-monitoring_sre-dns-latency-exporter-v8kzl to CNI network "multus-cni-network": plugin type="multus" name="multus-cni-network" failed (add): Multus: [openshift-monitoring/sre-dns-latency-exporter-v8kzl/4d4600b4-ff80-4dbb-bbfb-a9f596c99fdb]: have you checked that your default network is ready? still waiting for readinessindicatorfile @ /var/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition openshift-multus 26m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container bond-cni-plugin openshift-multus 26m Normal Started pod/multus-additional-cni-plugins-x8r6f Started container bond-cni-plugin openshift-multus 26m Normal Pulled pod/multus-additional-cni-plugins-x8r6f Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" in 750.935459ms (750.951305ms including waiting) openshift-multus 26m Normal Pulling pod/multus-additional-cni-plugins-x8r6f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4903dfa59fc0fcc3521f6431d51c6ed0ef563b0c63614a2df44ee4d447ab6c5d" openshift-multus 26m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container routeoverride-cni openshift-monitoring 26m Normal Pulling pod/prometheus-adapter-8467ff79fd-cth85 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbc27b4ea8b6ed06d8490b60e95b36bda21f09f15ec3f25f901c8dffc32292d9" openshift-ingress 26m Normal Pulling pod/router-default-7cf4c94d4-tqmcb Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd" openshift-monitoring 26m Normal Pulling pod/prometheus-operator-admission-webhook-5c9b9d98cc-9qkgr Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e2218fd1d860bdb72a28d8fc34e1d5e7c3674bf1d0005583d70800dcd79d2" openshift-monitoring 26m Normal AddedInterface pod/prometheus-operator-admission-webhook-5c9b9d98cc-9qkgr Add eth0 [10.130.2.3/23] from ovn-kubernetes openshift-ingress 26m Normal AddedInterface pod/router-default-7cf4c94d4-tqmcb Add eth0 [10.130.2.8/23] from ovn-kubernetes openshift-multus 26m Normal Started pod/multus-additional-cni-plugins-x8r6f Started container routeoverride-cni openshift-multus 26m Normal Pulling pod/multus-additional-cni-plugins-x8r6f Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" openshift-monitoring 26m Normal AddedInterface pod/prometheus-adapter-8467ff79fd-cth85 Add eth0 [10.130.2.12/23] from ovn-kubernetes openshift-monitoring 26m Normal Pulling pod/thanos-querier-6566ccfdd9-lkbh6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" openshift-monitoring 26m Normal AddedInterface pod/thanos-querier-6566ccfdd9-lkbh6 Add eth0 [10.130.2.9/23] from ovn-kubernetes openshift-monitoring 26m Normal SuccessfulAttachVolume pod/prometheus-k8s-1 AttachVolume.Attach succeeded for volume "pvc-2abeae08-0492-477e-a938-e36e3511d5b3" openshift-monitoring 26m Normal SuccessfulAttachVolume pod/alertmanager-main-0 AttachVolume.Attach succeeded for volume "pvc-1b1f012e-b506-4373-bdfd-02e4e6dd5098" default 26m Normal NodeSchedulable node/ip-10-0-195-121.ec2.internal Node ip-10-0-195-121.ec2.internal status is now: NodeSchedulable openshift-kube-scheduler-operator 26m Normal NodeTargetRevisionChanged deployment/openshift-kube-scheduler-operator Updating node "ip-10-0-239-132.ec2.internal" from revision 8 to 9 because node ip-10-0-239-132.ec2.internal with revision 8 is the oldest openshift-monitoring 26m Normal AddedInterface pod/alertmanager-main-0 Add eth0 [10.130.2.11/23] from ovn-kubernetes openshift-monitoring 26m Normal Started pod/thanos-querier-6566ccfdd9-lkbh6 Started container thanos-query openshift-monitoring 
26m Normal Pulled pod/thanos-querier-6566ccfdd9-lkbh6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 26m Normal Pulled pod/prometheus-adapter-8467ff79fd-cth85 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bbc27b4ea8b6ed06d8490b60e95b36bda21f09f15ec3f25f901c8dffc32292d9" in 4.203762949s (4.203768748s including waiting) openshift-monitoring 26m Normal Created pod/prometheus-adapter-8467ff79fd-cth85 Created container prometheus-adapter openshift-monitoring 26m Normal Started pod/prometheus-adapter-8467ff79fd-cth85 Started container prometheus-adapter openshift-kube-apiserver 26m Warning Unhealthy pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Readiness probe failed: HTTP probe failed with statuscode: 500 openshift-multus 26m Normal Started pod/multus-additional-cni-plugins-x8r6f Started container whereabouts-cni-bincopy openshift-kube-apiserver 26m Warning ProbeError pod/kube-apiserver-guard-ip-10-0-197-197.ec2.internal Readiness probe error: HTTP probe failed with statuscode: 500... default 26m Normal SetDesiredConfig machineconfigpool/worker Targeted node ip-10-0-160-152.ec2.internal to config rendered-worker-b75428ccc32943c30e9e5b63da3f059e openshift-monitoring 26m Normal Created pod/thanos-querier-6566ccfdd9-lkbh6 Created container oauth-proxy openshift-monitoring 26m Normal Started pod/thanos-querier-6566ccfdd9-lkbh6 Started container oauth-proxy openshift-monitoring 26m Normal Pulled pod/prometheus-operator-admission-webhook-5c9b9d98cc-9qkgr Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e2218fd1d860bdb72a28d8fc34e1d5e7c3674bf1d0005583d70800dcd79d2" in 4.357826152s (4.357834237s including waiting) openshift-monitoring 26m Normal Created pod/prometheus-operator-admission-webhook-5c9b9d98cc-9qkgr Created container prometheus-operator-admission-webhook openshift-monitoring 26m Normal Started pod/prometheus-operator-admission-webhook-5c9b9d98cc-9qkgr Started container prometheus-operator-admission-webhook openshift-ingress 26m Normal Started pod/router-default-7cf4c94d4-tqmcb Started container router openshift-ingress 26m Normal Pulled pod/router-default-7cf4c94d4-tqmcb Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0743d54d3acaf6558295618248ff446b4352dde0234d52465d7578c7c261e6fd" in 4.328587556s (4.328592992s including waiting) openshift-multus 26m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container whereabouts-cni-bincopy openshift-monitoring 26m Normal AddedInterface pod/prometheus-k8s-1 Add eth0 [10.130.2.10/23] from ovn-kubernetes openshift-ingress 26m Normal Created pod/router-default-7cf4c94d4-tqmcb Created container router default 26m Normal Uncordon node/ip-10-0-195-121.ec2.internal Update completed for config rendered-worker-c37c7a9e551f049d382df8406f11fe9b and node has been uncordoned openshift-monitoring 26m Normal Started pod/thanos-querier-6566ccfdd9-lkbh6 Started container kube-rbac-proxy default 26m Normal ConfigDriftMonitorStarted node/ip-10-0-195-121.ec2.internal Config Drift Monitor started, watching against rendered-worker-c37c7a9e551f049d382df8406f11fe9b openshift-monitoring 26m Normal Created pod/thanos-querier-6566ccfdd9-lkbh6 Created container thanos-query default 26m Normal NodeDone node/ip-10-0-195-121.ec2.internal Setting node ip-10-0-195-121.ec2.internal, currentConfig 
rendered-worker-c37c7a9e551f049d382df8406f11fe9b to Done openshift-monitoring 26m Normal Pulling pod/thanos-querier-6566ccfdd9-lkbh6 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" openshift-monitoring 26m Normal Pulled pod/thanos-querier-6566ccfdd9-lkbh6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" in 4.22369898s (4.223705069s including waiting) openshift-multus 26m Normal Pulled pod/multus-additional-cni-plugins-x8r6f Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" in 4.887303464s (4.887310022s including waiting) openshift-monitoring 26m Normal Pulling pod/alertmanager-main-0 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" openshift-monitoring 26m Normal Created pod/thanos-querier-6566ccfdd9-lkbh6 Created container kube-rbac-proxy openshift-monitoring 26m Normal Pulled pod/thanos-querier-6566ccfdd9-lkbh6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 26m Normal Pulling pod/prometheus-k8s-1 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" openshift-multus 26m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container whereabouts-cni default 26m Normal ConfigDriftMonitorStopped node/ip-10-0-160-152.ec2.internal Config Drift Monitor stopped openshift-multus 26m Normal Started pod/multus-additional-cni-plugins-x8r6f Started container whereabouts-cni openshift-multus 26m Normal Pulled pod/multus-additional-cni-plugins-x8r6f Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d56a44d5d9429dc668e476de36d9fa1083fbc948fd3a1c799a3733c9af4894b" already present on machine openshift-kube-scheduler-operator 26m Normal PodCreated deployment/openshift-kube-scheduler-operator Created Pod/installer-9-ip-10-0-239-132.ec2.internal -n openshift-kube-scheduler because it was missing openshift-monitoring 26m Normal Pulled pod/prometheus-k8s-1 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" in 1.178528054s (1.178540535s including waiting) openshift-multus 26m Normal Pulled pod/multus-additional-cni-plugins-x8r6f Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e15e4dbebc9b2ec1d44a393eb125a07564afd83dcf1154fd3a7bf2955aa0b13a" already present on machine openshift-monitoring 26m Normal Created pod/thanos-querier-6566ccfdd9-lkbh6 Created container kube-rbac-proxy-metrics openshift-monitoring 26m Normal Pulled pod/thanos-querier-6566ccfdd9-lkbh6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 26m Normal Started pod/thanos-querier-6566ccfdd9-lkbh6 Started container kube-rbac-proxy-rules openshift-monitoring 26m Normal Started pod/thanos-querier-6566ccfdd9-lkbh6 Started container kube-rbac-proxy-metrics openshift-monitoring 26m Normal Pulling pod/prometheus-k8s-1 Pulling image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" openshift-monitoring 26m Normal Pulled pod/alertmanager-main-0 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" in 1.975955884s (1.975961553s including waiting) openshift-monitoring 26m Normal Created pod/thanos-querier-6566ccfdd9-lkbh6 Created container kube-rbac-proxy-rules openshift-monitoring 26m Normal Created pod/prometheus-k8s-1 Created container init-config-reloader openshift-monitoring 26m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-kube-scheduler 26m Normal Started pod/installer-9-ip-10-0-239-132.ec2.internal Started container installer openshift-kube-scheduler 26m Normal Created pod/installer-9-ip-10-0-239-132.ec2.internal Created container installer openshift-kube-scheduler 26m Normal Pulled pod/installer-9-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-monitoring 26m Normal Started pod/thanos-querier-6566ccfdd9-lkbh6 Started container prom-label-proxy openshift-kube-scheduler 26m Normal AddedInterface pod/installer-9-ip-10-0-239-132.ec2.internal Add eth0 [10.129.0.6/23] from ovn-kubernetes openshift-monitoring 26m Normal Created pod/thanos-querier-6566ccfdd9-lkbh6 Created container prom-label-proxy openshift-multus 26m Normal Created pod/multus-additional-cni-plugins-x8r6f Created container kube-multus-additional-cni-plugins openshift-monitoring 26m Normal Started pod/prometheus-k8s-1 Started container init-config-reloader openshift-monitoring 26m Normal Pulled pod/thanos-querier-6566ccfdd9-lkbh6 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" in 1.257956232s (1.257963906s including waiting) openshift-monitoring 26m Normal Pulled pod/thanos-querier-6566ccfdd9-lkbh6 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 26m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 26m Normal Created pod/alertmanager-main-0 Created container kube-rbac-proxy-metric openshift-monitoring 26m Normal Started pod/alertmanager-main-0 Started container kube-rbac-proxy openshift-monitoring 26m Normal Started pod/alertmanager-main-0 Started container kube-rbac-proxy-metric openshift-monitoring 26m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8541443d5c6c0d31959a8e5f8d6a525d43bb4df66be563603d282f48bdeb304" already present on machine openshift-monitoring 26m Normal Created pod/alertmanager-main-0 Created container prom-label-proxy openshift-monitoring 26m Normal Started pod/alertmanager-main-0 Started container prom-label-proxy openshift-monitoring 26m Normal Pulled pod/alertmanager-main-0 Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 26m Normal Started pod/alertmanager-main-0 Started container alertmanager-proxy openshift-monitoring 26m Normal Created pod/alertmanager-main-0 Created container alertmanager-proxy openshift-monitoring 26m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 26m Normal Started pod/alertmanager-main-0 Started container config-reloader default 26m Normal OSUpdateStarted node/ip-10-0-160-152.ec2.internal default 26m Normal OSUpdateStaged node/ip-10-0-160-152.ec2.internal Changes to OS staged default 26m Normal PendingConfig node/ip-10-0-160-152.ec2.internal Written pending config rendered-worker-b75428ccc32943c30e9e5b63da3f059e openshift-monitoring 26m Normal Pulling pod/sre-dns-latency-exporter-v8kzl Pulling image "quay.io/app-sre/managed-prometheus-exporter-base:latest" openshift-monitoring 26m Normal AddedInterface pod/sre-dns-latency-exporter-v8kzl Add eth0 [10.130.2.4/23] from ovn-kubernetes default 26m Normal SkipReboot node/ip-10-0-160-152.ec2.internal Config changes do not require reboot. Service crio was reloaded. openshift-monitoring 26m Normal Created pod/alertmanager-main-0 Created container kube-rbac-proxy openshift-monitoring 26m Normal Created pod/alertmanager-main-0 Created container config-reloader openshift-monitoring 26m Normal Pulled pod/alertmanager-main-0 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1a766de06530f7008fc155e6239f6e3d63b816deeb24c6d520ef1980d748050" already present on machine openshift-monitoring 26m Normal Started pod/alertmanager-main-0 Started container alertmanager openshift-monitoring 26m Normal Created pod/alertmanager-main-0 Created container alertmanager default 26m Warning ResolutionFailed namespace/redhat-ods-operator constraints not satisfiable: subscription addon-managed-odh exists, no operators found from catalog addon-managed-odh-catalog in namespace redhat-ods-operator referenced by subscription addon-managed-odh openshift-monitoring 26m Normal Pulled pod/prometheus-k8s-1 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1cdf8e55288584a04e1a403133d4508bac27808a20427a18aee85a92c367316" in 5.696939986s (5.696953011s including waiting) openshift-network-diagnostics 26m Normal AddedInterface pod/network-check-target-trrh7 Add eth0 [10.130.2.6/23] from ovn-kubernetes openshift-monitoring 26m Normal Started pod/prometheus-k8s-1 Started container prometheus openshift-monitoring 26m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a898acb346b7aae546bb9dc31e43266415a096fa755241b56acb5969190584f9" already present on machine openshift-monitoring 26m Normal Created pod/prometheus-k8s-1 Created container thanos-sidecar openshift-monitoring 26m Normal Started pod/prometheus-k8s-1 Started container thanos-sidecar openshift-monitoring 26m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4e32e949909c4b9d7e400356a9c6e6c236bc2715f7a3cbdf6fe54e0b2612d154" already present on machine openshift-monitoring 26m Normal Created pod/prometheus-k8s-1 Created container prometheus-proxy openshift-monitoring 26m Normal Created pod/prometheus-k8s-1 Created container 
prometheus openshift-monitoring 26m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-ingress-canary 26m Normal Pulling pod/ingress-canary-xb5f7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" openshift-monitoring 26m Normal Started pod/prometheus-k8s-1 Started container config-reloader openshift-multus 26m Normal Pulling pod/network-metrics-daemon-qfgm8 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" openshift-multus 26m Normal AddedInterface pod/network-metrics-daemon-qfgm8 Add eth0 [10.130.2.5/23] from ovn-kubernetes openshift-monitoring 26m Normal Created pod/prometheus-k8s-1 Created container config-reloader openshift-monitoring 26m Normal Pulled pod/sre-dns-latency-exporter-v8kzl Successfully pulled image "quay.io/app-sre/managed-prometheus-exporter-base:latest" in 5.149402643s (5.149415652s including waiting) openshift-monitoring 26m Normal Created pod/sre-dns-latency-exporter-v8kzl Created container main openshift-monitoring 26m Normal Started pod/sre-dns-latency-exporter-v8kzl Started container main openshift-network-diagnostics 26m Normal Pulling pod/network-check-target-trrh7 Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" openshift-monitoring 26m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 26m Normal Started pod/prometheus-k8s-1 Started container prometheus-proxy openshift-ingress-canary 26m Normal AddedInterface pod/ingress-canary-xb5f7 Add eth0 [10.130.2.7/23] from ovn-kubernetes openshift-monitoring 26m Normal Started pod/prometheus-k8s-1 Started container kube-rbac-proxy-thanos openshift-monitoring 26m Normal Started pod/prometheus-k8s-1 Started container kube-rbac-proxy openshift-monitoring 26m Normal Pulled pod/prometheus-k8s-1 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 26m Normal Created pod/prometheus-k8s-1 Created container kube-rbac-proxy openshift-monitoring 26m Normal Created pod/prometheus-k8s-1 Created container kube-rbac-proxy-thanos openshift-monitoring 26m Normal SuccessfulCreate replicaset/telemeter-client-6756b7679c Created pod: telemeter-client-6756b7679c-qgzlk openshift-monitoring 26m Normal ScalingReplicaSet deployment/telemeter-client Scaled up replica set telemeter-client-6756b7679c to 1 openshift-multus 26m Normal Started pod/network-metrics-daemon-qfgm8 Started container network-metrics-daemon openshift-monitoring 26m Normal Started pod/telemeter-client-6756b7679c-qgzlk Started container kube-rbac-proxy openshift-monitoring 26m Normal Started pod/telemeter-client-6756b7679c-qgzlk Started container reload openshift-monitoring 26m Normal Created pod/telemeter-client-6756b7679c-qgzlk Created container reload openshift-multus 26m Normal Pulled pod/network-metrics-daemon-qfgm8 Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on 
machine openshift-monitoring 26m Normal Created pod/telemeter-client-6756b7679c-qgzlk Created container kube-rbac-proxy openshift-multus 26m Normal Created pod/network-metrics-daemon-qfgm8 Created container network-metrics-daemon openshift-monitoring 26m Normal AddedInterface pod/telemeter-client-6756b7679c-qgzlk Add eth0 [10.129.2.8/23] from ovn-kubernetes openshift-monitoring 26m Normal Pulled pod/telemeter-client-6756b7679c-qgzlk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff0062cc5aad6c3c8538f4b9a2572191f60181c9c15e9a94de2661aafba2df83" already present on machine openshift-monitoring 26m Normal Pulled pod/telemeter-client-6756b7679c-qgzlk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:942a1ba76f95d02ba681afbb7d1aea28d457fb2a9d967cacc2233bb243588990" already present on machine openshift-multus 26m Normal Pulled pod/network-metrics-daemon-qfgm8 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a5e01ac462952a60254c8a9a0f43cce8b323da60014d37a49250a2e586142a5" in 2.770104091s (2.770112048s including waiting) openshift-monitoring 26m Normal Pulled pod/telemeter-client-6756b7679c-qgzlk Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5795789f6e56ae0c7644ae2380c6d88d6b2a88d64ab8581064bc746f0a97b52d" already present on machine openshift-monitoring 26m Normal Created pod/telemeter-client-6756b7679c-qgzlk Created container telemeter-client openshift-monitoring 26m Normal Started pod/telemeter-client-6756b7679c-qgzlk Started container telemeter-client openshift-monitoring 26m Warning Unhealthy pod/prometheus-k8s-1 Startup probe failed: % Total % Received % Xferd Average Speed Time Time Time Current... openshift-monitoring 26m Normal Killing pod/telemeter-client-5c9599c744-rlt2c Stopping container telemeter-client openshift-monitoring 26m Normal ScalingReplicaSet deployment/telemeter-client Scaled down replica set telemeter-client-5c9599c744 to 0 from 1 default 26m Normal Uncordon node/ip-10-0-160-152.ec2.internal Update completed for config rendered-worker-b75428ccc32943c30e9e5b63da3f059e and node has been uncordoned default 26m Normal NodeDone node/ip-10-0-160-152.ec2.internal Setting node ip-10-0-160-152.ec2.internal, currentConfig rendered-worker-b75428ccc32943c30e9e5b63da3f059e to Done default 26m Normal ConfigDriftMonitorStarted node/ip-10-0-160-152.ec2.internal Config Drift Monitor started, watching against rendered-worker-b75428ccc32943c30e9e5b63da3f059e openshift-monitoring 26m Normal SuccessfulDelete replicaset/telemeter-client-5c9599c744 Deleted pod: telemeter-client-5c9599c744-rlt2c openshift-monitoring 26m Normal Killing pod/telemeter-client-5c9599c744-rlt2c Stopping container reload openshift-monitoring 26m Normal Killing pod/telemeter-client-5c9599c744-rlt2c Stopping container kube-rbac-proxy openshift-ingress-canary 26m Normal Started pod/ingress-canary-xb5f7 Started container serve-healthcheck-canary openshift-network-diagnostics 26m Normal Started pod/network-check-target-trrh7 Started container network-check-target-container openshift-multus 26m Normal Started pod/network-metrics-daemon-qfgm8 Started container kube-rbac-proxy openshift-network-diagnostics 26m Normal Created pod/network-check-target-trrh7 Created container network-check-target-container openshift-network-diagnostics 26m Normal Pulled pod/network-check-target-trrh7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5d8e73d86d1b77620f415e1552aaccc3b38aa2959467df2ebe586bd9a7e11892" in 
6.10723181s (6.107244795s including waiting) openshift-multus 26m Normal Created pod/network-metrics-daemon-qfgm8 Created container kube-rbac-proxy openshift-ingress-canary 26m Normal Created pod/ingress-canary-xb5f7 Created container serve-healthcheck-canary openshift-ingress-canary 26m Normal Pulled pod/ingress-canary-xb5f7 Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6de5d1e6d774c20a0c3bd4d4c544234519bc4528e5535f24d0cf78a6788b4056" in 5.587471613s (5.587478362s including waiting) default 26m Normal SetDesiredConfig machineconfigpool/worker Targeted node ip-10-0-232-8.ec2.internal to config rendered-worker-b75428ccc32943c30e9e5b63da3f059e default 26m Normal ConfigDriftMonitorStopped node/ip-10-0-232-8.ec2.internal Config Drift Monitor stopped default 26m Normal OSUpdateStarted node/ip-10-0-232-8.ec2.internal default 26m Normal OSUpdateStaged node/ip-10-0-232-8.ec2.internal Changes to OS staged default 26m Normal SkipReboot node/ip-10-0-232-8.ec2.internal Config changes do not require reboot. Service crio was reloaded. default 26m Normal PendingConfig node/ip-10-0-232-8.ec2.internal Written pending config rendered-worker-b75428ccc32943c30e9e5b63da3f059e default 26m Normal NodeDone node/ip-10-0-232-8.ec2.internal Setting node ip-10-0-232-8.ec2.internal, currentConfig rendered-worker-b75428ccc32943c30e9e5b63da3f059e to Done default 26m Normal ConfigDriftMonitorStarted node/ip-10-0-232-8.ec2.internal Config Drift Monitor started, watching against rendered-worker-b75428ccc32943c30e9e5b63da3f059e default 26m Normal Uncordon node/ip-10-0-232-8.ec2.internal Update completed for config rendered-worker-b75428ccc32943c30e9e5b63da3f059e and node has been uncordoned openshift-kube-scheduler 26m Normal Killing pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Stopping container kube-scheduler-recovery-controller openshift-kube-scheduler 26m Normal StaticPodInstallerCompleted pod/installer-9-ip-10-0-239-132.ec2.internal Successfully installed revision 9 openshift-kube-scheduler 26m Normal Killing pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Stopping container kube-scheduler openshift-kube-scheduler 26m Normal Killing pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Stopping container kube-scheduler-cert-syncer default 26m Normal SetDesiredConfig machineconfigpool/worker Targeted node ip-10-0-187-75.ec2.internal to config rendered-worker-b75428ccc32943c30e9e5b63da3f059e openshift-kube-scheduler-operator 26m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:17.220797 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.220892 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:17.256024 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.256152 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: 
I0321 12:47:17.256412 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.256425 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:17.258549 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.258640 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:17.258736 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.258745 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:17.260541 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.260559 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:17.262594 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.262646 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:24.039670 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:24.039761 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:39.751264 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:39.751288 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:50.056769 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:50.056795 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" default 26m Normal ConfigDriftMonitorStopped node/ip-10-0-187-75.ec2.internal Config Drift Monitor stopped default 26m Normal OSUpdateStarted node/ip-10-0-187-75.ec2.internal default 26m Normal PendingConfig node/ip-10-0-187-75.ec2.internal Written pending config rendered-worker-b75428ccc32943c30e9e5b63da3f059e default 26m Normal SkipReboot node/ip-10-0-187-75.ec2.internal Config changes do not require reboot. Service crio was reloaded. default 26m Normal OSUpdateStaged node/ip-10-0-187-75.ec2.internal Changes to OS staged openshift-kube-scheduler 25m Warning Unhealthy pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal Readiness probe failed: Get "https://10.0.239.132:10259/healthz": dial tcp 10.0.239.132:10259: connect: connection refused openshift-kube-scheduler 25m Warning ProbeError pod/openshift-kube-scheduler-guard-ip-10-0-239-132.ec2.internal Readiness probe error: Get "https://10.0.239.132:10259/healthz": dial tcp 10.0.239.132:10259: connect: connection refused... 
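The ProbeError and Unhealthy warnings just above come from the guard pod's readiness probe hitting https://10.0.239.132:10259/healthz while the kube-scheduler static pod on that node is rolled to revision 9 (installer-9 completed, the old kube-scheduler containers killed), so a short burst of connection-refused errors during the rollout is expected. A minimal sketch of the same check done by hand, assuming Python 3 on a host that can reach the node; the URL is taken from the event message, and certificate verification is skipped because the scheduler serves an internally signed cert:

import ssl
import urllib.request
import urllib.error

# healthz endpoint reported in the ProbeError/Unhealthy events above
URL = "https://10.0.239.132:10259/healthz"

# the scheduler's serving certificate is signed by an internal CA,
# so skip verification for this ad-hoc check
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

try:
    with urllib.request.urlopen(URL, context=ctx, timeout=3) as resp:
        # expect HTTP 200 with body "ok" once the new revision is serving
        print(resp.status, resp.read().decode())
except (urllib.error.URLError, OSError) as err:
    # matches the connection-refused warnings seen during the rollout
    print("probe failed:", err)

If the refusals persist well after StaticPodInstallerCompleted and the node never reaches the new revision, that points at the replacement static pod failing to start rather than normal rollout churn.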
openshift-kube-scheduler 25m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 25m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container wait-for-host-port openshift-kube-scheduler 25m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container wait-for-host-port openshift-kube-scheduler 25m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 25m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler openshift-kube-scheduler 25m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler-recovery-controller openshift-kube-scheduler 25m Warning FastControllerResync pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-scheduler 25m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4c52e0d353383738834a2786a775a7b58a504abc15118dfd8db5cb5a233f802f" already present on machine openshift-kube-scheduler 25m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler-cert-syncer openshift-kube-scheduler 25m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler-recovery-controller openshift-kube-scheduler 25m Normal Started pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Started container kube-scheduler-cert-syncer openshift-kube-scheduler 25m Normal Created pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Created container kube-scheduler openshift-kube-scheduler 25m Normal Pulled pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-scheduler 25m Normal LeaderElection configmap/kube-scheduler ip-10-0-239-132_2e2b649e-9028-4bba-bee5-a8a04e8c7f2b became leader openshift-kube-scheduler-operator 25m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "StaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler\" is terminated: Completed: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-cert-syncer\" is terminated: Error: 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:17.220797 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.220892 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:17.256024 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.256152 1 certsync_controller.go:170] Syncing secrets: 
[{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:17.256412 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.256425 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:17.258549 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.258640 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:17.258736 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.258745 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:17.260541 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.260559 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:17.262594 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:17.262646 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:24.039670 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:24.039761 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:39.751264 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:39.751288 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: I0321 12:47:50.056769 1 certsync_controller.go:66] Syncing configmaps: []\nStaticPodsDegraded: I0321 12:47:50.056795 1 certsync_controller.go:170] Syncing secrets: [{kube-scheduler-client-cert-key false}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/openshift-kube-scheduler-ip-10-0-239-132.ec2.internal container \"kube-scheduler-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready" openshift-kube-scheduler 25m Normal LeaderElection lease/kube-scheduler ip-10-0-239-132_2e2b649e-9028-4bba-bee5-a8a04e8c7f2b became leader openshift-kube-scheduler 25m Normal LeaderElection configmap/cert-recovery-controller-lock ip-10-0-239-132_4fc26289-17fb-42c7-a69d-1fe4653efc35 became leader openshift-kube-scheduler 25m Normal LeaderElection lease/cert-recovery-controller-lock ip-10-0-239-132_4fc26289-17fb-42c7-a69d-1fe4653efc35 became leader default 25m Normal Uncordon node/ip-10-0-187-75.ec2.internal Update completed for config rendered-worker-b75428ccc32943c30e9e5b63da3f059e and node has been uncordoned default 25m Normal NodeDone node/ip-10-0-187-75.ec2.internal Setting node ip-10-0-187-75.ec2.internal, currentConfig rendered-worker-b75428ccc32943c30e9e5b63da3f059e to Done default 25m Normal ConfigDriftMonitorStarted node/ip-10-0-187-75.ec2.internal Config Drift Monitor started, watching against rendered-worker-b75428ccc32943c30e9e5b63da3f059e openshift-kube-scheduler-operator 25m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand openshift-kube-scheduler-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal" default 
25m Normal SetDesiredConfig machineconfigpool/worker Targeted node ip-10-0-195-121.ec2.internal to config rendered-worker-b75428ccc32943c30e9e5b63da3f059e default 25m Normal ConfigDriftMonitorStopped node/ip-10-0-195-121.ec2.internal Config Drift Monitor stopped openshift-kube-scheduler-operator 25m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nGuardControllerDegraded: Missing PodIP in operand openshift-kube-scheduler-ip-10-0-239-132.ec2.internal on node ip-10-0-239-132.ec2.internal" to "NodeControllerDegraded: All master nodes are ready" default 25m Normal PendingConfig node/ip-10-0-195-121.ec2.internal Written pending config rendered-worker-b75428ccc32943c30e9e5b63da3f059e default 25m Normal SkipReboot node/ip-10-0-195-121.ec2.internal Config changes do not require reboot. Service crio was reloaded. default 25m Normal OSUpdateStarted node/ip-10-0-195-121.ec2.internal default 25m Normal OSUpdateStaged node/ip-10-0-195-121.ec2.internal Changes to OS staged default 25m Normal Uncordon node/ip-10-0-195-121.ec2.internal Update completed for config rendered-worker-b75428ccc32943c30e9e5b63da3f059e and node has been uncordoned default 25m Normal NodeDone node/ip-10-0-195-121.ec2.internal Setting node ip-10-0-195-121.ec2.internal, currentConfig rendered-worker-b75428ccc32943c30e9e5b63da3f059e to Done default 25m Normal ConfigDriftMonitorStarted node/ip-10-0-195-121.ec2.internal Config Drift Monitor started, watching against rendered-worker-b75428ccc32943c30e9e5b63da3f059e openshift-network-diagnostics 25m Normal ConnectivityRestored node/ip-10-0-160-152.ec2.internal Connectivity restored after 59.998467701s: load-balancer-api-external: tcp connection to api.qeaisrhods-c13.abmw.s1.devshift.org:6443 succeeded openshift-network-diagnostics 25m Warning ConnectivityOutageDetected node/ip-10-0-160-152.ec2.internal Connectivity outage detected: load-balancer-api-external: failed to establish a TCP connection to api.qeaisrhods-c13.abmw.s1.devshift.org:6443: dial tcp 10.0.209.0:6443: connect: connection refused openshift-kube-apiserver-operator 25m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" started at 2023-03-21 12:44:28 +0000 UTC is still not ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" started at 2023-03-21 12:45:00 +0000 UTC is still not ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: rue} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: I0321 12:45:18.331928 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: 
I0321 12:45:18.332244 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: I0321 12:45:18.332658 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: I0321 12:45:18.332855 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " openshift-kube-apiserver 25m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 25m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container setup openshift-kube-apiserver 25m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container setup openshift-kube-apiserver 25m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 25m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver openshift-kube-apiserver 25m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver openshift-kube-apiserver 25m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f07ab081508d10035f9be64166a3f97d4b0706ec8f30dbcba7377686aaaba865" already present on machine openshift-kube-apiserver 25m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image 
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 25m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-check-endpoints openshift-kube-apiserver 25m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-check-endpoints openshift-kube-apiserver 25m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 25m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-insecure-readyz openshift-kube-apiserver 25m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-regeneration-controller openshift-kube-apiserver 25m Warning FastControllerResync pod/kube-apiserver-ip-10-0-197-197.ec2.internal Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver 25m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 25m Normal Created pod/kube-apiserver-ip-10-0-197-197.ec2.internal Created container kube-apiserver-cert-syncer openshift-kube-apiserver 25m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-cert-syncer openshift-kube-apiserver 25m Normal Pulled pod/kube-apiserver-ip-10-0-197-197.ec2.internal Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7816a6f699e70da73dcee8a12e526ad0b234f6ced09ded67d81d55db54b577c9" already present on machine openshift-kube-apiserver 25m Normal Started pod/kube-apiserver-ip-10-0-197-197.ec2.internal Started container kube-apiserver-insecure-readyz openshift-kube-apiserver 25m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver 25m Warning FastControllerResync node/ip-10-0-197-197.ec2.internal Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling openshift-kube-apiserver-operator 25m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-regeneration-controller\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-cert-syncer\" is terminated: Error: rue} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: I0321 12:45:18.331928 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: I0321 12:45:18.332244 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} 
{service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: I0321 12:45:18.332658 1 certsync_controller.go:66] Syncing configmaps: [{aggregator-client-ca false} {client-ca false} {trusted-ca-bundle true} {control-plane-node-kubeconfig false} {check-endpoints-kubeconfig false}]\nStaticPodsDegraded: I0321 12:45:18.332855 1 certsync_controller.go:170] Syncing secrets: [{aggregator-client false} {localhost-serving-cert-certkey false} {service-network-serving-certkey false} {external-loadbalancer-serving-certkey false} {internal-loadbalancer-serving-certkey false} {bound-service-account-signing-key false} {control-plane-node-admin-client-cert-key false} {check-endpoints-client-cert-key false} {kubelet-client false} {node-kubeconfigs false} {user-serving-cert true} {user-serving-cert-000 true} {user-serving-cert-001 true} {user-serving-cert-002 true} {user-serving-cert-003 true} {user-serving-cert-004 true} {user-serving-cert-005 true} {user-serving-cert-006 true} {user-serving-cert-007 true} {user-serving-cert-008 true} {user-serving-cert-009 true}]\nStaticPodsDegraded: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-check-endpoints\" is terminated: Completed: \nStaticPodsDegraded: pod/kube-apiserver-ip-10-0-197-197.ec2.internal container \"kube-apiserver-insecure-readyz\" is terminated: Completed: " to "NodeControllerDegraded: All master nodes are ready" openshift-kube-scheduler-operator 25m Normal NodeCurrentRevisionChanged deployment/openshift-kube-scheduler-operator Updated node "ip-10-0-239-132.ec2.internal" from revision 8 to 9 because static pod is ready openshift-kube-scheduler-operator 25m Normal OperatorStatusChanged deployment/openshift-kube-scheduler-operator Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 9"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 8; 2 nodes are at revision 9" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 9" openshift-kube-apiserver-operator 24m Normal NodeCurrentRevisionChanged deployment/kube-apiserver-operator Updated node "ip-10-0-197-197.ec2.internal" from revision 11 to 12 because static pod is ready openshift-kube-apiserver-operator 24m Normal OperatorStatusChanged deployment/kube-apiserver-operator Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 12"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 11; 2 nodes are at revision 12" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 12" openshift-network-diagnostics 23m Normal ConnectivityRestored node/ip-10-0-160-152.ec2.internal Connectivity restored after 1m0.718185903s: kubernetes-apiserver-endpoint-ip-10-0-197-197: tcp connection to 10.0.197.197:6443 
succeeded openshift-network-diagnostics 23m Normal ConnectivityRestored node/ip-10-0-160-152.ec2.internal Connectivity restored after 4m0.915912027s: openshift-apiserver-endpoint-ip-10-0-197-197: tcp connection to 10.130.0.16:8443 succeeded openshift-network-diagnostics 23m Normal ConnectivityRestored node/ip-10-0-160-152.ec2.internal Connectivity restored after 2m0.718900515s: network-check-target-ip-10-0-197-197: tcp connection to 10.130.0.3:8080 succeeded openshift-network-diagnostics 23m Warning ConnectivityOutageDetected node/ip-10-0-160-152.ec2.internal Connectivity outage detected: network-check-target-ip-10-0-195-121: failed to establish a TCP connection to 10.130.2.6:8080: dial tcp 10.130.2.6:8080: i/o timeout openshift-network-diagnostics 23m Normal ConnectivityRestored node/ip-10-0-160-152.ec2.internal Connectivity restored after 2m0.000063932s: network-check-target-ip-10-0-195-121: tcp connection to 10.130.2.6:8080 succeeded openshift-marketplace 23m Normal Pulling pod/redhat-operators-rf7k9 Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.12" openshift-marketplace 23m Normal AddedInterface pod/redhat-operators-rf7k9 Add eth0 [10.128.0.5/23] from ovn-kubernetes openshift-marketplace 22m Normal Created pod/redhat-operators-rf7k9 Created container registry-server openshift-marketplace 22m Normal Pulled pod/redhat-operators-rf7k9 Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.12" in 7.540137515s (7.540155999s including waiting) openshift-marketplace 22m Normal Started pod/redhat-operators-rf7k9 Started container registry-server openshift-marketplace 22m Normal AddedInterface pod/certified-operators-f6dr2 Add eth0 [10.128.0.6/23] from ovn-kubernetes openshift-marketplace 22m Normal Pulling pod/certified-operators-f6dr2 Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.12" openshift-marketplace 22m Normal Created pod/certified-operators-f6dr2 Created container registry-server openshift-marketplace 22m Normal Pulled pod/certified-operators-f6dr2 Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.12" in 603.604624ms (603.615833ms including waiting) openshift-marketplace 22m Normal Started pod/certified-operators-f6dr2 Started container registry-server openshift-marketplace 22m Normal Killing pod/redhat-operators-pcjm7 Stopping container registry-server openshift-marketplace 22m Normal Killing pod/certified-operators-f6dr2 Stopping container registry-server openshift-monitoring 22m Warning BackOff pod/osd-cluster-ready-pzbtd Back-off restarting failed container osd-cluster-ready in pod osd-cluster-ready-pzbtd_openshift-monitoring(76980122-e709-47b7-9ed4-b4594d55b3ff) openshift-marketplace 22m Normal AddedInterface pod/redhat-marketplace-qxchz Add eth0 [10.128.0.7/23] from ovn-kubernetes openshift-marketplace 22m Normal Pulling pod/redhat-marketplace-qxchz Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" openshift-marketplace 22m Normal Started pod/redhat-marketplace-qxchz Started container registry-server openshift-marketplace 22m Normal Created pod/redhat-marketplace-qxchz Created container registry-server openshift-marketplace 22m Normal Pulled pod/redhat-marketplace-qxchz Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" in 612.269796ms (612.283664ms including waiting) openshift-marketplace 22m Normal AddedInterface pod/community-operators-q5p9v Add eth0 [10.128.0.8/23] from ovn-kubernetes openshift-marketplace 22m Normal Pulling 
pod/community-operators-q5p9v Pulling image "registry.redhat.io/redhat/community-operator-index:v4.12" openshift-marketplace 22m Normal Started pod/community-operators-q5p9v Started container registry-server openshift-marketplace 22m Normal Pulled pod/community-operators-q5p9v Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.12" in 662.464035ms (662.476483ms including waiting) openshift-marketplace 22m Normal Created pod/community-operators-q5p9v Created container registry-server openshift-marketplace 22m Normal Killing pod/redhat-marketplace-qxchz Stopping container registry-server openshift-marketplace 22m Normal Killing pod/community-operators-q5p9v Stopping container registry-server default 21m Warning ResolutionFailed namespace/openshift-addon-operator constraints not satisfiable: subscription addon-operator exists, no operators found from catalog addon-operator-catalog in namespace openshift-addon-operator referenced by subscription addon-operator openshift-etcd-operator 21m Normal DefragControllerDefragmentSuccess deployment/etcd-operator etcd member has been defragmented: ip-10-0-140-6.ec2.internal, memberID: 4823875419117155993 openshift-etcd-operator 21m Normal DefragControllerDefragmentAttempt deployment/etcd-operator Attempting defrag on member: ip-10-0-140-6.ec2.internal, memberID: 42f1d922bd377699, dbSize: 234807296, dbInUse: 97918976, leader ID: 14419360892373211128 openshift-etcd-operator 20m Normal DefragControllerDefragmentAttempt deployment/etcd-operator Attempting defrag on member: ip-10-0-197-197.ec2.internal, memberID: 843eba623d97bdeb, dbSize: 234627072, dbInUse: 98004992, leader ID: 14419360892373211128 openshift-etcd-operator 20m Normal DefragControllerDefragmentSuccess deployment/etcd-operator etcd member has been defragmented: ip-10-0-197-197.ec2.internal, memberID: 9529258792665464299 openshift-etcd-operator 20m Normal DefragControllerDefragmentAttempt deployment/etcd-operator Attempting defrag on member: ip-10-0-239-132.ec2.internal, memberID: c81bdc55a61097f8, dbSize: 234766336, dbInUse: 97988608, leader ID: 14419360892373211128 openshift-etcd-operator 20m Normal DefragControllerDefragmentSuccess deployment/etcd-operator etcd member has been defragmented: ip-10-0-239-132.ec2.internal, memberID: 14419360892373211128 default 16m Warning ResolutionFailed namespace/openshift-rbac-permissions constraints not satisfiable: no operators found from catalog rbac-permissions-operator-registry in namespace openshift-rbac-permissions referenced by subscription rbac-permissions-operator, subscription rbac-permissions-operator exists openshift-sre-pruning 14m Normal SuccessfulCreate job/deployments-pruner-27990060 Created pod: deployments-pruner-27990060-vz8dp openshift-operator-lifecycle-manager 14m Normal Started pod/collect-profiles-27990060-kvm2x Started container collect-profiles openshift-monitoring 14m Normal SuccessfulCreate job/osd-rebalance-infra-nodes-27990060 Created pod: osd-rebalance-infra-nodes-27990060-8r9xq openshift-operator-lifecycle-manager 14m Normal SuccessfulCreate cronjob/collect-profiles Created job collect-profiles-27990060 openshift-operator-lifecycle-manager 14m Normal AddedInterface pod/collect-profiles-27990060-kvm2x Add eth0 [10.128.2.8/23] from ovn-kubernetes openshift-sre-pruning 14m Normal AddedInterface pod/builds-pruner-27990060-2l29r Add eth0 [10.129.2.11/23] from ovn-kubernetes openshift-marketplace 14m Normal SuccessfulCreate cronjob/osd-patch-subscription-source Created job 
osd-patch-subscription-source-27990060 openshift-sre-pruning 14m Normal SuccessfulCreate cronjob/deployments-pruner Created job deployments-pruner-27990060 openshift-image-registry 14m Normal SuccessfulCreate cronjob/image-pruner Created job image-pruner-27990060 openshift-operator-lifecycle-manager 14m Normal SuccessfulCreate job/collect-profiles-27990060 Created pod: collect-profiles-27990060-kvm2x openshift-sre-pruning 14m Normal AddedInterface pod/deployments-pruner-27990060-vz8dp Add eth0 [10.129.2.10/23] from ovn-kubernetes openshift-sre-pruning 14m Normal Pulling pod/deployments-pruner-27990060-vz8dp Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" openshift-monitoring 14m Normal SuccessfulCreate cronjob/osd-rebalance-infra-nodes Created job osd-rebalance-infra-nodes-27990060 openshift-sre-pruning 14m Normal SuccessfulCreate cronjob/builds-pruner Created job builds-pruner-27990060 openshift-image-registry 14m Normal SuccessfulCreate job/image-pruner-27990060 Created pod: image-pruner-27990060-dqmfp openshift-marketplace 14m Normal SuccessfulCreate job/osd-patch-subscription-source-27990060 Created pod: osd-patch-subscription-source-27990060-mm5c7 openshift-operator-lifecycle-manager 14m Normal Pulled pod/collect-profiles-27990060-kvm2x Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:497121bfdb3b293af2c939af393bfbda39aeace9d754f7288a244cd7ec2662d7" already present on machine openshift-image-registry 14m Normal Started pod/image-pruner-27990060-dqmfp Started container image-pruner openshift-image-registry 14m Normal Created pod/image-pruner-27990060-dqmfp Created container image-pruner openshift-image-registry 14m Normal Pulled pod/image-pruner-27990060-dqmfp Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c66da0adc83e5e6db1824ff5cf204610d36f29fc0c7a8352b24795f58151eefe" already present on machine openshift-image-registry 14m Normal AddedInterface pod/image-pruner-27990060-dqmfp Add eth0 [10.130.2.13/23] from ovn-kubernetes openshift-operator-lifecycle-manager 14m Normal Created pod/collect-profiles-27990060-kvm2x Created container collect-profiles openshift-sre-pruning 14m Normal SuccessfulCreate job/builds-pruner-27990060 Created pod: builds-pruner-27990060-2l29r openshift-monitoring 14m Normal Pulled pod/osd-rebalance-infra-nodes-27990060-8r9xq Successfully pulled image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" in 53.848227ms (53.855865ms including waiting) openshift-sre-pruning 14m Normal Started pod/deployments-pruner-27990060-vz8dp Started container deployments-pruner openshift-sre-pruning 14m Normal Started pod/builds-pruner-27990060-2l29r Started container builds-pruner openshift-sre-pruning 14m Normal Created pod/builds-pruner-27990060-2l29r Created container builds-pruner openshift-sre-pruning 14m Normal Pulled pod/builds-pruner-27990060-2l29r Successfully pulled image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" in 79.090413ms (79.104193ms including waiting) openshift-sre-pruning 14m Normal Pulling pod/builds-pruner-27990060-2l29r Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" openshift-sre-pruning 14m Normal Created pod/deployments-pruner-27990060-vz8dp Created container deployments-pruner openshift-sre-pruning 14m Normal Pulled pod/deployments-pruner-27990060-vz8dp Successfully pulled image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" in 276.667294ms (276.674258ms 
including waiting) openshift-marketplace 14m Normal Started pod/osd-patch-subscription-source-27990060-mm5c7 Started container osd-patch-subscription-source openshift-monitoring 14m Normal Started pod/osd-rebalance-infra-nodes-27990060-8r9xq Started container osd-rebalance-infra-nodes openshift-monitoring 14m Normal Created pod/osd-rebalance-infra-nodes-27990060-8r9xq Created container osd-rebalance-infra-nodes openshift-marketplace 14m Normal Pulled pod/osd-patch-subscription-source-27990060-mm5c7 Successfully pulled image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" in 101.703398ms (101.714724ms including waiting) openshift-monitoring 14m Normal AddedInterface pod/osd-rebalance-infra-nodes-27990060-8r9xq Add eth0 [10.129.2.13/23] from ovn-kubernetes openshift-monitoring 14m Normal Pulling pod/osd-rebalance-infra-nodes-27990060-8r9xq Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" openshift-marketplace 14m Normal AddedInterface pod/osd-patch-subscription-source-27990060-mm5c7 Add eth0 [10.129.2.12/23] from ovn-kubernetes openshift-marketplace 14m Normal Created pod/osd-patch-subscription-source-27990060-mm5c7 Created container osd-patch-subscription-source openshift-marketplace 14m Normal Pulling pod/osd-patch-subscription-source-27990060-mm5c7 Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" openshift-sre-pruning 14m Normal Completed job/builds-pruner-27990060 Job completed openshift-sre-pruning 14m Normal SawCompletedJob cronjob/builds-pruner Saw completed job: builds-pruner-27990060, status: Complete openshift-monitoring 14m Normal Completed job/osd-rebalance-infra-nodes-27990060 Job completed openshift-monitoring 14m Normal SawCompletedJob cronjob/osd-rebalance-infra-nodes Saw completed job: osd-rebalance-infra-nodes-27990060, status: Complete openshift-sre-pruning 14m Normal SawCompletedJob cronjob/deployments-pruner Saw completed job: deployments-pruner-27990060, status: Complete openshift-sre-pruning 14m Normal Completed job/deployments-pruner-27990060 Job completed openshift-marketplace 14m Normal Completed job/osd-patch-subscription-source-27990060 Job completed openshift-image-registry 14m Normal SawCompletedJob cronjob/image-pruner Saw completed job: image-pruner-27990060, status: Complete openshift-marketplace 14m Normal SawCompletedJob cronjob/osd-patch-subscription-source Saw completed job: osd-patch-subscription-source-27990060, status: Complete openshift-image-registry 14m Normal Completed job/image-pruner-27990060 Job completed openshift-operator-lifecycle-manager 13m Normal SuccessfulDelete cronjob/collect-profiles Deleted job collect-profiles-27990015 openshift-operator-lifecycle-manager 13m Normal SawCompletedJob cronjob/collect-profiles Saw completed job: collect-profiles-27990060, status: Complete openshift-operator-lifecycle-manager 13m Normal Completed job/collect-profiles-27990060 Job completed default 13m Warning ResolutionFailed namespace/redhat-ods-operator constraints not satisfiable: no operators found from catalog addon-managed-odh-catalog in namespace redhat-ods-operator referenced by subscription addon-managed-odh, subscription addon-managed-odh exists openshift-marketplace 12m Normal Pulled pod/redhat-operators-wpqdp Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.12" in 258.760454ms (258.768814ms including waiting) openshift-marketplace 12m Normal AddedInterface pod/redhat-operators-wpqdp Add eth0 [10.128.0.9/23] from 
ovn-kubernetes openshift-marketplace 12m Normal Pulling pod/redhat-operators-wpqdp Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.12" openshift-marketplace 12m Normal Started pod/redhat-operators-wpqdp Started container registry-server openshift-marketplace 12m Normal Created pod/redhat-operators-wpqdp Created container registry-server default 12m Warning ResolutionFailed namespace/openshift-velero constraints not satisfiable: subscription managed-velero-operator exists, no operators found from catalog managed-velero-operator-registry in namespace openshift-velero referenced by subscription managed-velero-operator openshift-marketplace 12m Normal Killing pod/redhat-operators-wpqdp Stopping container registry-server default 11m Warning ResolutionFailed namespace/openshift-route-monitor-operator constraints not satisfiable: subscription route-monitor-operator exists, no operators found from catalog route-monitor-operator-registry in namespace openshift-route-monitor-operator referenced by subscription route-monitor-operator openshift-monitoring 11m Normal Completed job/osd-cluster-ready Job completed openshift-marketplace 11m Normal Pulling pod/certified-operators-sdz5q Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.12" openshift-marketplace 11m Normal AddedInterface pod/certified-operators-sdz5q Add eth0 [10.128.0.10/23] from ovn-kubernetes openshift-marketplace 11m Normal Pulled pod/certified-operators-sdz5q Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.12" in 746.108166ms (746.115434ms including waiting) openshift-marketplace 11m Normal Started pod/certified-operators-sdz5q Started container registry-server openshift-marketplace 11m Normal Created pod/certified-operators-sdz5q Created container registry-server openshift-marketplace 11m Normal Killing pod/certified-operators-sdz5q Stopping container registry-server openshift-marketplace 11m Normal Pulled pod/redhat-marketplace-jg5qp Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" in 236.288312ms (236.312955ms including waiting) openshift-marketplace 11m Normal AddedInterface pod/redhat-marketplace-jg5qp Add eth0 [10.128.0.11/23] from ovn-kubernetes openshift-marketplace 11m Normal Pulling pod/redhat-marketplace-jg5qp Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" openshift-marketplace 11m Normal Created pod/redhat-marketplace-jg5qp Created container registry-server openshift-marketplace 11m Normal Started pod/redhat-marketplace-jg5qp Started container registry-server openshift-marketplace 10m Normal Killing pod/redhat-marketplace-jg5qp Stopping container registry-server openshift-marketplace 10m Normal AddedInterface pod/community-operators-gqgqn Add eth0 [10.128.0.12/23] from ovn-kubernetes openshift-marketplace 10m Normal Pulling pod/community-operators-gqgqn Pulling image "registry.redhat.io/redhat/community-operator-index:v4.12" openshift-marketplace 10m Normal Pulled pod/community-operators-gqgqn Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.12" in 557.63024ms (557.643035ms including waiting) openshift-marketplace 10m Normal Started pod/community-operators-gqgqn Started container registry-server openshift-marketplace 10m Normal Created pod/community-operators-gqgqn Created container registry-server openshift-marketplace 10m Normal Killing pod/community-operators-gqgqn Stopping container registry-server default 7m15s Warning ResolutionFailed 
namespace/openshift-observability-operator constraints not satisfiable: subscription observability-operator exists, no operators found from catalog observability-operator-catalog in namespace openshift-observability-operator referenced by subscription observability-operator openshift-backplane-srep 7m5s Normal Created pod/osd-delete-ownerrefs-serviceaccounts-27990067-rbrjk Created container osd-delete-ownerrefs-serviceaccounts openshift-backplane-srep 7m5s Normal SuccessfulCreate cronjob/osd-delete-ownerrefs-serviceaccounts Created job osd-delete-ownerrefs-serviceaccounts-27990067 openshift-backplane-srep 7m5s Normal AddedInterface pod/osd-delete-ownerrefs-serviceaccounts-27990067-rbrjk Add eth0 [10.129.2.14/23] from ovn-kubernetes openshift-backplane-srep 7m5s Normal SuccessfulCreate job/osd-delete-ownerrefs-serviceaccounts-27990067 Created pod: osd-delete-ownerrefs-serviceaccounts-27990067-rbrjk openshift-backplane-srep 7m5s Normal Pulled pod/osd-delete-ownerrefs-serviceaccounts-27990067-rbrjk Successfully pulled image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" in 146.883137ms (146.894234ms including waiting) openshift-backplane-srep 7m5s Normal Started pod/osd-delete-ownerrefs-serviceaccounts-27990067-rbrjk Started container osd-delete-ownerrefs-serviceaccounts openshift-backplane-srep 7m5s Normal Pulling pod/osd-delete-ownerrefs-serviceaccounts-27990067-rbrjk Pulling image "image-registry.openshift-image-registry.svc:5000/openshift/cli:latest" default 7m2s Warning ResolutionFailed namespace/openshift-cloud-ingress-operator constraints not satisfiable: subscription cloud-ingress-operator exists, no operators found from catalog cloud-ingress-operator-registry in namespace openshift-cloud-ingress-operator referenced by subscription cloud-ingress-operator openshift-backplane-srep 7m1s Normal SawCompletedJob cronjob/osd-delete-ownerrefs-serviceaccounts Saw completed job: osd-delete-ownerrefs-serviceaccounts-27990067, status: Complete openshift-backplane-srep 7m1s Normal Completed job/osd-delete-ownerrefs-serviceaccounts-27990067 Job completed default 6m58s Warning ResolutionFailed namespace/openshift-managed-upgrade-operator constraints not satisfiable: subscription managed-upgrade-operator exists, no operators found from catalog managed-upgrade-operator-catalog in namespace openshift-managed-upgrade-operator referenced by subscription managed-upgrade-operator default 6m56s Warning ResolutionFailed namespace/openshift-osd-metrics constraints not satisfiable: no operators found from catalog osd-metrics-exporter-registry in namespace openshift-osd-metrics referenced by subscription osd-metrics-exporter, subscription osd-metrics-exporter exists default 6m36s Warning ResolutionFailed namespace/openshift-splunk-forwarder-operator constraints not satisfiable: no operators found from catalog splunk-forwarder-operator-catalog in namespace openshift-splunk-forwarder-operator referenced by subscription openshift-splunk-forwarder-operator, subscription openshift-splunk-forwarder-operator exists openshift-kube-controller-manager 2m48s Normal CreatedSCCRanges pod/kube-controller-manager-ip-10-0-239-132.ec2.internal created SCC ranges for openshift-must-gather-nsf8m namespace default 2m25s Warning ResolutionFailed namespace/openshift-velero constraints not satisfiable: no operators found from catalog managed-velero-operator-registry in namespace openshift-velero referenced by subscription managed-velero-operator, subscription managed-velero-operator exists default 2m19s 
Warning ResolutionFailed namespace/openshift-must-gather-operator constraints not satisfiable: no operators found from catalog must-gather-operator-registry in namespace openshift-must-gather-operator referenced by subscription must-gather-operator, subscription must-gather-operator exists default 2m18s Warning ResolutionFailed namespace/openshift-observability-operator constraints not satisfiable: no operators found from catalog observability-operator-catalog in namespace openshift-observability-operator referenced by subscription observability-operator, subscription observability-operator exists default 2m9s Warning ResolutionFailed namespace/openshift-managed-node-metadata-operator constraints not satisfiable: no operators found from catalog managed-node-metadata-operator-registry in namespace openshift-managed-node-metadata-operator referenced by subscription managed-node-metadata-operator, subscription managed-node-metadata-operator exists default 2m9s Warning ResolutionFailed namespace/openshift-deployment-validation-operator constraints not satisfiable: no operators found from catalog deployment-validation-operator-catalog in namespace openshift-deployment-validation-operator referenced by subscription deployment-validation-operator, subscription deployment-validation-operator exists default 2m6s Warning ResolutionFailed namespace/openshift-custom-domains-operator constraints not satisfiable: no operators found from catalog custom-domains-operator-registry in namespace openshift-custom-domains-operator referenced by subscription custom-domains-operator, subscription custom-domains-operator exists default 2m2s Warning ResolutionFailed namespace/openshift-osd-metrics constraints not satisfiable: subscription osd-metrics-exporter exists, no operators found from catalog osd-metrics-exporter-registry in namespace openshift-osd-metrics referenced by subscription osd-metrics-exporter default 118s Warning ResolutionFailed namespace/openshift-rbac-permissions constraints not satisfiable: subscription rbac-permissions-operator exists, no operators found from catalog rbac-permissions-operator-registry in namespace openshift-rbac-permissions referenced by subscription rbac-permissions-operator default 113s Warning ResolutionFailed namespace/openshift-cloud-ingress-operator constraints not satisfiable: no operators found from catalog cloud-ingress-operator-registry in namespace openshift-cloud-ingress-operator referenced by subscription cloud-ingress-operator, subscription cloud-ingress-operator exists default 110s Warning ResolutionFailed namespace/openshift-managed-upgrade-operator constraints not satisfiable: no operators found from catalog managed-upgrade-operator-catalog in namespace openshift-managed-upgrade-operator referenced by subscription managed-upgrade-operator, subscription managed-upgrade-operator exists default 108s Warning ResolutionFailed namespace/openshift-ocm-agent-operator constraints not satisfiable: no operators found from catalog ocm-agent-operator-registry in namespace openshift-ocm-agent-operator referenced by subscription ocm-agent-operator, subscription ocm-agent-operator exists default 106s Warning ResolutionFailed namespace/openshift-route-monitor-operator constraints not satisfiable: no operators found from catalog route-monitor-operator-registry in namespace openshift-route-monitor-operator referenced by subscription route-monitor-operator, subscription route-monitor-operator exists default 104s Warning ResolutionFailed 
namespace/openshift-splunk-forwarder-operator constraints not satisfiable: subscription openshift-splunk-forwarder-operator exists, no operators found from catalog splunk-forwarder-operator-catalog in namespace openshift-splunk-forwarder-operator referenced by subscription openshift-splunk-forwarder-operator default 101s Warning ResolutionFailed namespace/openshift-addon-operator constraints not satisfiable: no operators found from catalog addon-operator-catalog in namespace openshift-addon-operator referenced by subscription addon-operator, subscription addon-operator exists openshift-marketplace 65s Normal Pulling pod/certified-operators-wzhjt Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.12" openshift-marketplace 65s Normal AddedInterface pod/certified-operators-wzhjt Add eth0 [10.128.0.13/23] from ovn-kubernetes openshift-marketplace 64s Normal Created pod/certified-operators-wzhjt Created container registry-server openshift-marketplace 64s Normal Pulled pod/certified-operators-wzhjt Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.12" in 705.315336ms (705.324454ms including waiting) openshift-marketplace 64s Normal AddedInterface pod/redhat-operators-5qn2j Add eth0 [10.128.0.15/23] from ovn-kubernetes openshift-marketplace 64s Normal Pulling pod/redhat-operators-5qn2j Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.12" openshift-marketplace 64s Normal Started pod/certified-operators-wzhjt Started container registry-server openshift-marketplace 63s Normal Pulled pod/redhat-operators-5qn2j Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.12" in 850.306931ms (850.321078ms including waiting) openshift-marketplace 63s Normal Started pod/redhat-operators-5qn2j Started container registry-server openshift-marketplace 63s Normal Created pod/redhat-operators-5qn2j Created container registry-server openshift-marketplace 56s Normal AddedInterface pod/redhat-marketplace-7d4zn Add eth0 [10.128.0.16/23] from ovn-kubernetes openshift-marketplace 56s Normal Pulling pod/redhat-marketplace-7d4zn Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" openshift-marketplace 55s Normal Created pod/redhat-marketplace-7d4zn Created container registry-server openshift-marketplace 55s Normal Started pod/redhat-marketplace-7d4zn Started container registry-server openshift-marketplace 55s Normal Pulled pod/redhat-marketplace-7d4zn Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" in 906.441474ms (906.455954ms including waiting) openshift-marketplace 49s Normal Killing pod/certified-operators-wzhjt Stopping container registry-server openshift-marketplace 46s Normal Killing pod/redhat-operators-5qn2j Stopping container registry-server openshift-marketplace 38s Normal Killing pod/redhat-marketplace-7d4zn Stopping container registry-server openshift-marketplace 22s Normal AddedInterface pod/community-operators-8hc4x Add eth0 [10.128.0.17/23] from ovn-kubernetes openshift-marketplace 22s Normal Pulled pod/community-operators-8hc4x Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.12" in 299.519911ms (299.533573ms including waiting) openshift-marketplace 22s Normal Created pod/community-operators-8hc4x Created container registry-server openshift-marketplace 22s Normal Started pod/community-operators-8hc4x Started container registry-server openshift-marketplace 22s Normal Pulling pod/community-operators-8hc4x Pulling image 
"registry.redhat.io/redhat/community-operator-index:v4.12" openshift-dns 7s Warning TopologyAwareHintsDisabled service/dns-default Insufficient Node information: allocatable CPU or zone not specified on one or more nodes, addressType: IPv4 openshift-marketplace 6s Normal Killing pod/community-operators-8hc4x Stopping container registry-server