Time | Namespace | Component | RelatedObject | Reason | Message |
---|---|---|---|---|---|
openshift-marketplace |
redhat-operators-pb62l |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-pb62l to master1 | ||
openshift-monitoring |
alertmanager-main-0 |
Scheduled |
Successfully assigned openshift-monitoring/alertmanager-main-0 to master1 | ||
default |
apiserver |
kube-system |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-route-controller-manager |
route-controller-manager-66c9d88ff4-lz6w7 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-66c9d88ff4-lz6w7 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-66c9d88ff4-lz6w7 |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-66c9d88ff4-lz6w7 to master1 | ||
openshift-console-operator |
console-operator-7f587bf69b-gpgdn |
Scheduled |
Successfully assigned openshift-console-operator/console-operator-7f587bf69b-gpgdn to master1 | ||
openshift-authentication |
oauth-openshift-b5887fb6f-fg5md |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-b5887fb6f-fg5md to master1 | ||
openshift-authentication |
oauth-openshift-b5887fb6f-fg5md |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-authentication |
oauth-openshift-79959d769-f6b52 |
FailedScheduling |
skip schedule deleting pod: openshift-authentication/oauth-openshift-79959d769-f6b52 | ||
openshift-authentication |
oauth-openshift-79959d769-f6b52 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-route-controller-manager |
route-controller-manager-7cb74487c-h9grb |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-7cb74487c-h9grb to master1 | ||
openshift-authentication |
oauth-openshift-76f8b8bcb7-vcnkl |
Scheduled |
Successfully assigned openshift-authentication/oauth-openshift-76f8b8bcb7-vcnkl to master1 | ||
openshift-authentication |
oauth-openshift-689f594445-thjj4 |
FailedScheduling |
skip schedule deleting pod: openshift-authentication/oauth-openshift-689f594445-thjj4 | ||
openshift-authentication |
oauth-openshift-689f594445-thjj4 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
default |
apiserver |
kube-system |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 0s finished | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
KubeAPIReadyz |
readyz=true | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-console |
downloads-797d94d7f9-ln4r8 |
Scheduled |
Successfully assigned openshift-console/downloads-797d94d7f9-ln4r8 to master1 | ||
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
AfterShutdownDelayDuration |
The minimal shutdown duration of 0s finished | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
KubeAPIReadyz |
readyz=true | |
openshift-controller-manager |
controller-manager-7c9df9889d-wvt8q |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-7c9df9889d-wvt8q to master1 | ||
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
AfterShutdownDelayDuration |
The minimal shutdown duration of 0s finished | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-console |
console-87d9d6878-bjhl4 |
Scheduled |
Successfully assigned openshift-console/console-87d9d6878-bjhl4 to master1 | ||
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
default |
apiserver |
openshift-kube-apiserver |
KubeAPIReadyz |
readyz=true | |
default |
apiserver |
openshift-kube-apiserver |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
default |
apiserver |
openshift-kube-apiserver |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
default |
apiserver |
openshift-kube-apiserver |
AfterShutdownDelayDuration |
The minimal shutdown duration of 0s finished | |
default |
apiserver |
openshift-kube-apiserver |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
default |
apiserver |
openshift-kube-apiserver |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
default |
apiserver |
openshift-kube-apiserver |
KubeAPIReadyz |
readyz=true | |
default |
apiserver |
openshift-kube-apiserver |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
default |
apiserver |
openshift-kube-apiserver |
TerminationGracefulTerminationFinished |
All pending requests processed | |
default |
apiserver |
openshift-kube-apiserver |
KubeAPIReadyz |
readyz=true | |
default |
apiserver |
openshift-kube-apiserver |
KubeAPIReadyz |
readyz=true | |
default |
apiserver |
openshift-kube-apiserver |
ShutdownInitiated |
Received signal to terminate, becoming unready, but keeping serving | |
default |
apiserver |
openshift-kube-apiserver |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
default |
apiserver |
openshift-kube-apiserver |
AfterShutdownDelayDuration |
The minimal shutdown duration of 0s finished | |
default |
apiserver |
openshift-kube-apiserver |
InFlightRequestsDrained |
All non long-running request(s) in-flight have drained | |
default |
apiserver |
openshift-kube-apiserver |
HTTPServerStoppedListening |
HTTP Server has stopped listening | |
default |
apiserver |
openshift-kube-apiserver |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
KubeAPIReadyz |
readyz=true | |
openshift-marketplace |
certified-operators-tvmdn |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-tvmdn to master1 | ||
openshift-controller-manager |
controller-manager-7c9df9889d-wvt8q |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-controller-manager |
controller-manager-7c9df9889d-wvt8q |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | ||
openshift-console |
console-5c44fb5754-9zfch |
Scheduled |
Successfully assigned openshift-console/console-5c44fb5754-9zfch to master1 | ||
openshift-controller-manager |
controller-manager-779c4cdcc7-sfmpx |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-779c4cdcc7-sfmpx to master1 | ||
openshift-console |
console-7db86c8ffd-zswlt |
Scheduled |
Successfully assigned openshift-console/console-7db86c8ffd-zswlt to master1 | ||
openshift-operator-lifecycle-manager |
collect-profiles-27938340-swxrj |
Scheduled |
Successfully assigned openshift-operator-lifecycle-manager/collect-profiles-27938340-swxrj to master1 | ||
openshift-apiserver |
apiserver |
apiserver-675fc6b586-hhb4g |
TerminationStart |
Received signal to terminate, becoming unready, but keeping serving | |
openshift-apiserver |
apiserver |
apiserver-675fc6b586-hhb4g |
TerminationMinimalShutdownDurationFinished |
The minimal shutdown duration of 15s finished | |
openshift-apiserver |
apiserver |
apiserver-675fc6b586-hhb4g |
TerminationStoppedServing |
Server has stopped listening | |
openshift-apiserver |
apiserver |
apiserver-675fc6b586-hhb4g |
TerminationPreShutdownHooksFinished |
All pre-shutdown hooks have been finished | |
openshift-apiserver |
apiserver |
apiserver-675fc6b586-hhb4g |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-multus |
cni-sysctl-allowlist-ds-zvgzt |
Scheduled |
Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-zvgzt to master1 | ||
openshift-multus |
cni-sysctl-allowlist-ds-9m8mw |
Scheduled |
Successfully assigned openshift-multus/cni-sysctl-allowlist-ds-9m8mw to master1 | ||
openshift-monitoring |
thanos-querier-5b8dcdd9b4-x9dtp |
Scheduled |
Successfully assigned openshift-monitoring/thanos-querier-5b8dcdd9b4-x9dtp to master1 | ||
openshift-kube-apiserver |
apiserver |
kube-apiserver-master1 |
TerminationGracefulTerminationFinished |
All pending requests processed | |
openshift-monitoring |
prometheus-k8s-0 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-k8s-0 to master1 | ||
openshift-monitoring |
prometheus-adapter-786496f679-ffgkb |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-adapter-786496f679-ffgkb to master1 | ||
openshift-monitoring |
prometheus-adapter-5dbc6bf64-j8nnk |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-adapter-5dbc6bf64-j8nnk to master1 | ||
openshift-image-registry |
node-ca-phrb9 |
Scheduled |
Successfully assigned openshift-image-registry/node-ca-phrb9 to master1 | ||
default |
apiserver |
kube-system |
TerminationStoppedServing |
Server has stopped listening | |
openshift-marketplace |
redhat-operators-vfgtk |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-vfgtk to master1 | ||
openshift-ingress-canary |
ingress-canary-lbrhv |
Scheduled |
Successfully assigned openshift-ingress-canary/ingress-canary-lbrhv to master1 | ||
openshift-marketplace |
redhat-marketplace-j79l7 |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-j79l7 to master1 | ||
openshift-marketplace |
community-operators-hx2zr |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-hx2zr to master1 | ||
kube-system |
Required control plane pods have been created | ||||
kube-system |
default-scheduler |
kube-scheduler |
LeaderElection |
master1_6b153d5d-d429-4949-accb-10b79718a12c became leader | |
kube-system |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master1_4ab583fb-d84a-4c74-891b-4f354acd84aa became leader | |
kube-system |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master1_4ab583fb-d84a-4c74-891b-4f354acd84aa became leader | |
kube-system |
cluster-policy-controller |
bootstrap-kube-controller-manager-master1 |
ClusterInfrastructureStatus |
unable to get cluster infrastructure status, using HA cluster values for leader election: the server could not find the requested resource (get infrastructures.config.openshift.io cluster) | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master1_a7ae9ac0-15c1-415e-9e09-48c89eb12762 became leader | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master1_a7ae9ac0-15c1-415e-9e09-48c89eb12762 became leader | |
kube-system |
podsecurity-admission-label-sync-controller-pod-security-admission-label-synchronization-controller-pod-security-admission-label-synchronization-controller |
bootstrap-kube-controller-manager-master1 |
FastControllerResync |
Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
FastControllerResync |
Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-network-diagnostics namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-controller-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-ingress-canary namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for kube-system namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for kube-node-lease namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-version namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-config namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-machine-api namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-route-controller-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for kube-public namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-apiserver namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-service-ca namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-config-managed namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for default namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-oauth-apiserver namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-ingress namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-console-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-apiserver namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-controller-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-console namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-apiserver-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-etcd namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-ingress-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-controller-manager-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-scheduler namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-credential-operator namespace | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master1_6cb8669f-149f-451b-be38-b01cf071d80d became leader | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master1_6cb8669f-149f-451b-be38-b01cf071d80d became leader | |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled up replica set cluster-version-operator-5bb84c5d7f to 1 | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
master1_41d2d59a-93d5-47d2-a78c-6806b5919a25 became leader | |
openshift-cluster-version |
openshift-cluster-version |
version |
LeaderElection |
master1_41d2d59a-93d5-47d2-a78c-6806b5919a25 became leader | |
openshift-cluster-version |
openshift-cluster-version |
version |
RetrievePayload |
Retrieving and verifying payload version="4.12.2" image="quay.io/openshift-release-dev/ocp-release@sha256:31c7741fc7bb73ff752ba43f5acf014b8fadd69196fc522241302de918066cb1" | |
openshift-cluster-version |
openshift-cluster-version |
version |
LoadPayload |
Loading payload version="4.12.2" image="quay.io/openshift-release-dev/ocp-release@sha256:31c7741fc7bb73ff752ba43f5acf014b8fadd69196fc522241302de918066cb1" | |
openshift-cluster-version |
openshift-cluster-version |
version |
PayloadLoaded |
Payload loaded version="4.12.2" image="quay.io/openshift-release-dev/ocp-release@sha256:31c7741fc7bb73ff752ba43f5acf014b8fadd69196fc522241302de918066cb1" architecture="amd64" | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-storage-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-network-config-controller namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-etcd-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-scheduler-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-insights namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-controller-manager-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-apiserver-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-authentication-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-samples-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-csi-drivers namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-controller-manager-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-node-tuning-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-cluster-machine-approver namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-network-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-config-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-marketplace namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-image-registry namespace | |
(x13) | openshift-cluster-version |
replicaset-controller |
cluster-version-operator-5bb84c5d7f |
FailedCreate |
Error creating: pods "cluster-version-operator-5bb84c5d7f-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-cloud-controller-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-service-ca-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-dns-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-machine-config-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-kube-storage-version-migrator-operator namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-openstack-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-operator-lifecycle-manager namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-kni-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-operators namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-ovirt-infra namespace | |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled down replica set cluster-version-operator-5bb84c5d7f to 0 from 1 | |
openshift-cluster-version |
deployment-controller |
cluster-version-operator |
ScalingReplicaSet |
Scaled up replica set cluster-version-operator-6f997f7ccd to 1 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-vsphere-infra namespace | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-nutanix-infra namespace | |
openshift-apiserver-operator |
deployment-controller |
openshift-apiserver-operator |
ScalingReplicaSet |
Scaled up replica set openshift-apiserver-operator-5c6c84d584 to 1 | |
openshift-machine-api |
deployment-controller |
control-plane-machine-set-operator |
ScalingReplicaSet |
Scaled up replica set control-plane-machine-set-operator-749d766b67 to 1 | |
openshift-service-ca-operator |
deployment-controller |
service-ca-operator |
ScalingReplicaSet |
Scaled up replica set service-ca-operator-76d7c5458c to 1 | |
openshift-dns-operator |
deployment-controller |
dns-operator |
ScalingReplicaSet |
Scaled up replica set dns-operator-6cb7cc46cf to 1 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-monitoring namespace | |
openshift-network-operator |
deployment-controller |
network-operator |
ScalingReplicaSet |
Scaled up replica set network-operator-767fc6c7f6 to 1 | |
kube-system |
cluster-policy-controller-namespace-security-allocation-controller |
bootstrap-kube-controller-manager-master1 |
CreatedSCCRanges |
created SCC ranges for openshift-user-workload-monitoring namespace | |
openshift-kube-controller-manager-operator |
deployment-controller |
kube-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set kube-controller-manager-operator-8bb498f8b to 1 | |
openshift-kube-scheduler-operator |
deployment-controller |
openshift-kube-scheduler-operator |
ScalingReplicaSet |
Scaled up replica set openshift-kube-scheduler-operator-6f7cd4b84d to 1 | |
openshift-controller-manager-operator |
deployment-controller |
openshift-controller-manager-operator |
ScalingReplicaSet |
Scaled up replica set openshift-controller-manager-operator-967d9d7c4 to 1 | |
openshift-marketplace |
deployment-controller |
marketplace-operator |
ScalingReplicaSet |
Scaled up replica set marketplace-operator-75746f848d to 1 | |
(x12) | openshift-cluster-version |
replicaset-controller |
cluster-version-operator-6f997f7ccd |
FailedCreate |
Error creating: pods "cluster-version-operator-6f997f7ccd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-cluster-machine-approver |
deployment-controller |
machine-approver |
ScalingReplicaSet |
Scaled up replica set machine-approver-879c4799 to 1 | |
(x11) | openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-76d7c5458c |
FailedCreate |
Error creating: pods "service-ca-operator-76d7c5458c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x11) | openshift-dns-operator |
replicaset-controller |
dns-operator-6cb7cc46cf |
FailedCreate |
Error creating: pods "dns-operator-6cb7cc46cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x11) | openshift-network-operator |
replicaset-controller |
network-operator-767fc6c7f6 |
FailedCreate |
Error creating: pods "network-operator-767fc6c7f6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
openshift-kube-storage-version-migrator-operator |
deployment-controller |
kube-storage-version-migrator-operator |
ScalingReplicaSet |
Scaled up replica set kube-storage-version-migrator-operator-fbcb7858d to 1 | |
(x2) | openshift-operator-lifecycle-manager |
controllermanager |
packageserver-pdb |
NoPods |
No matching pods found |
openshift-authentication-operator |
deployment-controller |
authentication-operator |
ScalingReplicaSet |
Scaled up replica set authentication-operator-68df59f464 to 1 | |
(x12) | openshift-machine-api |
replicaset-controller |
control-plane-machine-set-operator-749d766b67 |
FailedCreate |
Error creating: pods "control-plane-machine-set-operator-749d766b67-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x11) | openshift-kube-controller-manager-operator |
replicaset-controller |
kube-controller-manager-operator-8bb498f8b |
FailedCreate |
Error creating: pods "kube-controller-manager-operator-8bb498f8b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x11) | openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-6f7cd4b84d |
FailedCreate |
Error creating: pods "openshift-kube-scheduler-operator-6f7cd4b84d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x9) | openshift-authentication-operator |
replicaset-controller |
authentication-operator-68df59f464 |
FailedCreate |
Error creating: pods "authentication-operator-68df59f464-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x11) | openshift-cluster-machine-approver |
replicaset-controller |
machine-approver-879c4799 |
FailedCreate |
Error creating: pods "machine-approver-879c4799-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
kube-system |
Required control plane pods have been created | ||||
(x10) | openshift-kube-storage-version-migrator-operator |
replicaset-controller |
kube-storage-version-migrator-operator-fbcb7858d |
FailedCreate |
Error creating: pods "kube-storage-version-migrator-operator-fbcb7858d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x12) | openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-5c6c84d584 |
FailedCreate |
Error creating: pods "openshift-apiserver-operator-5c6c84d584-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x11) | openshift-marketplace |
replicaset-controller |
marketplace-operator-75746f848d |
FailedCreate |
Error creating: pods "marketplace-operator-75746f848d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x11) | openshift-controller-manager-operator |
replicaset-controller |
openshift-controller-manager-operator-967d9d7c4 |
FailedCreate |
Error creating: pods "openshift-controller-manager-operator-967d9d7c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
kube-system |
default-scheduler |
kube-scheduler |
LeaderElection |
master1_37a044f5-25d4-441e-8296-a993efc5fdaa became leader | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master1_e99e1514-4cda-447a-b22d-474296391406 became leader | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master1_e99e1514-4cda-447a-b22d-474296391406 became leader | |
(x13) | openshift-dns-operator |
replicaset-controller |
dns-operator-6cb7cc46cf |
FailedCreate |
Error creating: pods "dns-operator-6cb7cc46cf-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-marketplace |
replicaset-controller |
marketplace-operator-75746f848d |
FailedCreate |
Error creating: pods "marketplace-operator-75746f848d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-cluster-machine-approver |
replicaset-controller |
machine-approver-879c4799 |
FailedCreate |
Error creating: pods "machine-approver-879c4799-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-6f7cd4b84d |
FailedCreate |
Error creating: pods "openshift-kube-scheduler-operator-6f7cd4b84d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-network-operator |
replicaset-controller |
network-operator-767fc6c7f6 |
FailedCreate |
Error creating: pods "network-operator-767fc6c7f6-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-cluster-version |
replicaset-controller |
cluster-version-operator-6f997f7ccd |
FailedCreate |
Error creating: pods "cluster-version-operator-6f997f7ccd-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-authentication-operator |
replicaset-controller |
authentication-operator-68df59f464 |
FailedCreate |
Error creating: pods "authentication-operator-68df59f464-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-76d7c5458c |
FailedCreate |
Error creating: pods "service-ca-operator-76d7c5458c-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-kube-storage-version-migrator-operator |
replicaset-controller |
kube-storage-version-migrator-operator-fbcb7858d |
FailedCreate |
Error creating: pods "kube-storage-version-migrator-operator-fbcb7858d-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-controller-manager-operator |
replicaset-controller |
openshift-controller-manager-operator-967d9d7c4 |
FailedCreate |
Error creating: pods "openshift-controller-manager-operator-967d9d7c4-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-5c6c84d584 |
FailedCreate |
Error creating: pods "openshift-apiserver-operator-5c6c84d584-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-machine-api |
replicaset-controller |
control-plane-machine-set-operator-749d766b67 |
FailedCreate |
Error creating: pods "control-plane-machine-set-operator-749d766b67-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
(x13) | openshift-kube-controller-manager-operator |
replicaset-controller |
kube-controller-manager-operator-8bb498f8b |
FailedCreate |
Error creating: pods "kube-controller-manager-operator-8bb498f8b-" is forbidden: autoscaling.openshift.io/ManagementCPUsOverride the cluster does not have any nodes |
default |
node-controller |
master1 |
RegisteredNode |
Node master1 event: Registered Node master1 in Controller | |
openshift-dns-operator |
default-scheduler |
dns-operator-6cb7cc46cf-njb62 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-cluster-version |
default-scheduler |
cluster-version-operator-6f997f7ccd-p8rgc |
Scheduled |
Successfully assigned openshift-cluster-version/cluster-version-operator-6f997f7ccd-p8rgc to master1 | |
openshift-kube-controller-manager-operator |
replicaset-controller |
kube-controller-manager-operator-8bb498f8b |
SuccessfulCreate |
Created pod: kube-controller-manager-operator-8bb498f8b-dsrn8 | |
openshift-marketplace |
replicaset-controller |
marketplace-operator-75746f848d |
SuccessfulCreate |
Created pod: marketplace-operator-75746f848d-v4htq | |
openshift-kube-controller-manager-operator |
default-scheduler |
kube-controller-manager-operator-8bb498f8b-dsrn8 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-dns-operator |
replicaset-controller |
dns-operator-6cb7cc46cf |
SuccessfulCreate |
Created pod: dns-operator-6cb7cc46cf-njb62 | |
openshift-cluster-machine-approver |
replicaset-controller |
machine-approver-879c4799 |
SuccessfulCreate |
Created pod: machine-approver-879c4799-ds2g4 | |
openshift-marketplace |
default-scheduler |
marketplace-operator-75746f848d-v4htq |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-controller-manager-operator |
replicaset-controller |
openshift-controller-manager-operator-967d9d7c4 |
SuccessfulCreate |
Created pod: openshift-controller-manager-operator-967d9d7c4-5t48s | |
openshift-machine-api |
replicaset-controller |
control-plane-machine-set-operator-749d766b67 |
SuccessfulCreate |
Created pod: control-plane-machine-set-operator-749d766b67-gc5pf | |
openshift-cluster-machine-approver |
default-scheduler |
machine-approver-879c4799-ds2g4 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-kube-scheduler-operator |
default-scheduler |
openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-machine-api |
default-scheduler |
control-plane-machine-set-operator-749d766b67-gc5pf |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-authentication-operator |
default-scheduler |
authentication-operator-68df59f464-ffd6s |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-network-operator |
default-scheduler |
network-operator-767fc6c7f6-wph8h |
Scheduled |
Successfully assigned openshift-network-operator/network-operator-767fc6c7f6-wph8h to master1 | |
openshift-controller-manager-operator |
default-scheduler |
openshift-controller-manager-operator-967d9d7c4-5t48s |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-service-ca-operator |
default-scheduler |
service-ca-operator-76d7c5458c-wgsgp |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-kube-storage-version-migrator-operator |
replicaset-controller |
kube-storage-version-migrator-operator-fbcb7858d |
SuccessfulCreate |
Created pod: kube-storage-version-migrator-operator-fbcb7858d-95djl | |
openshift-kube-storage-version-migrator-operator |
default-scheduler |
kube-storage-version-migrator-operator-fbcb7858d-95djl |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-kube-scheduler-operator |
replicaset-controller |
openshift-kube-scheduler-operator-6f7cd4b84d |
SuccessfulCreate |
Created pod: openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv | |
openshift-cluster-version |
replicaset-controller |
cluster-version-operator-6f997f7ccd |
SuccessfulCreate |
Created pod: cluster-version-operator-6f997f7ccd-p8rgc | |
openshift-apiserver-operator |
default-scheduler |
openshift-apiserver-operator-5c6c84d584-4z2d9 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-service-ca-operator |
replicaset-controller |
service-ca-operator-76d7c5458c |
SuccessfulCreate |
Created pod: service-ca-operator-76d7c5458c-wgsgp | |
openshift-authentication-operator |
replicaset-controller |
authentication-operator-68df59f464 |
SuccessfulCreate |
Created pod: authentication-operator-68df59f464-ffd6s | |
openshift-apiserver-operator |
replicaset-controller |
openshift-apiserver-operator-5c6c84d584 |
SuccessfulCreate |
Created pod: openshift-apiserver-operator-5c6c84d584-4z2d9 | |
openshift-network-operator |
replicaset-controller |
network-operator-767fc6c7f6 |
SuccessfulCreate |
Created pod: network-operator-767fc6c7f6-wph8h | |
openshift-network-operator |
kubelet |
network-operator-767fc6c7f6-wph8h |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5e07e3a1c8bfa3f66ddbdf1bb6b12f48587434f8a37f075d6a02435dfa18dc2" | |
openshift-network-operator |
kubelet |
network-operator-767fc6c7f6-wph8h |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5e07e3a1c8bfa3f66ddbdf1bb6b12f48587434f8a37f075d6a02435dfa18dc2" in 5.81514074s | |
openshift-network-operator |
kubelet |
mtu-prober-w5lxj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5e07e3a1c8bfa3f66ddbdf1bb6b12f48587434f8a37f075d6a02435dfa18dc2" already present on machine | |
openshift-network-operator |
job-controller |
mtu-prober |
SuccessfulCreate |
Created pod: mtu-prober-w5lxj | |
openshift-network-operator |
network-operator-management-state-recorder-managementstatecontroller |
network-operator |
StatusNotFound |
Unable to determine current operator status for cluster-network-operator | |
openshift-network-operator |
network-operator |
network-operator-lock |
LeaderElection |
master1_517db234-fe9f-4f02-8244-eef6a2787b21 became leader | |
openshift-network-operator |
default-scheduler |
mtu-prober-w5lxj |
Scheduled |
Successfully assigned openshift-network-operator/mtu-prober-w5lxj to master1 | |
openshift-network-operator |
kubelet |
mtu-prober-w5lxj |
Created |
Created container prober | |
openshift-network-operator |
kubelet |
mtu-prober-w5lxj |
Started |
Started container prober | |
openshift-network-operator |
network-operator |
network-operator-lock |
LeaderElection |
master1_517db234-fe9f-4f02-8244-eef6a2787b21 became leader | |
openshift-network-operator |
network-operator-loggingsyncer |
network-operator |
FastControllerResync |
Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling | |
openshift-network-operator |
job-controller |
mtu-prober |
Completed |
Job completed | |
openshift-multus |
kubelet |
multus-tj88c |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" | |
openshift-multus |
default-scheduler |
multus-tj88c |
Scheduled |
Successfully assigned openshift-multus/multus-tj88c to master1 | |
openshift-multus |
daemonset-controller |
multus |
SuccessfulCreate |
Created pod: multus-tj88c | |
openshift-multus |
default-scheduler |
multus-additional-cni-plugins-pcprp |
Scheduled |
Successfully assigned openshift-multus/multus-additional-cni-plugins-pcprp to master1 | |
openshift-multus |
daemonset-controller |
multus-additional-cni-plugins |
SuccessfulCreate |
Created pod: multus-additional-cni-plugins-pcprp | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cad5a85f21da1d2e653f41f82db607ab6827da0468283f63694c509e39374f0d" | |
openshift-multus |
default-scheduler |
network-metrics-daemon-d5jcm |
Scheduled |
Successfully assigned openshift-multus/network-metrics-daemon-d5jcm to master1 | |
openshift-multus |
daemonset-controller |
network-metrics-daemon |
SuccessfulCreate |
Created pod: network-metrics-daemon-d5jcm | |
openshift-multus |
replicaset-controller |
multus-admission-controller-7b9c64854b |
SuccessfulCreate |
Created pod: multus-admission-controller-7b9c64854b-bwsvm | |
openshift-multus |
default-scheduler |
multus-admission-controller-7b9c64854b-bwsvm |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-multus |
deployment-controller |
multus-admission-controller |
ScalingReplicaSet |
Scaled up replica set multus-admission-controller-7b9c64854b to 1 | |
openshift-multus |
kubelet |
multus-tj88c |
Started |
Started container kube-multus | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cad5a85f21da1d2e653f41f82db607ab6827da0468283f63694c509e39374f0d" in 5.576152579s | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Started |
Started container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-tj88c |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" in 5.760336665s | |
openshift-multus |
kubelet |
multus-tj88c |
Created |
Created container kube-multus | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Created |
Created container egress-router-binary-copy | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03a73b14daa7fe32294f62fd5ef20edf193204d6a39df05dd4342e721e7746d" | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-node-8wrxz |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-node-8wrxz to master1 | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-node |
SuccessfulCreate |
Created pod: ovnkube-node-8wrxz | |
openshift-ovn-kubernetes |
daemonset-controller |
ovnkube-master |
SuccessfulCreate |
Created pod: ovnkube-master-bpbn7 | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-master-bpbn7 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78078017998532005d730896ed4ca6f212ca9ac5713d65ca724eb9468fd8f7fb" | |
openshift-ovn-kubernetes |
default-scheduler |
ovnkube-master-bpbn7 |
Scheduled |
Successfully assigned openshift-ovn-kubernetes/ovnkube-master-bpbn7 to master1 | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-8wrxz |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78078017998532005d730896ed4ca6f212ca9ac5713d65ca724eb9468fd8f7fb" | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-8wrxz |
FailedMount |
MountVolume.SetUp failed for volume "ovn-node-metrics-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-network-diagnostics |
replicaset-controller |
network-check-source-b94cc7564 |
SuccessfulCreate |
Created pod: network-check-source-b94cc7564-hh9xp | |
openshift-network-diagnostics |
deployment-controller |
network-check-source |
ScalingReplicaSet |
Scaled up replica set network-check-source-b94cc7564 to 1 | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Started |
Started container cni-plugins | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d03a73b14daa7fe32294f62fd5ef20edf193204d6a39df05dd4342e721e7746d" in 6.245919555s | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Created |
Created container cni-plugins | |
openshift-network-diagnostics |
default-scheduler |
network-check-target-7n4vr |
Scheduled |
Successfully assigned openshift-network-diagnostics/network-check-target-7n4vr to master1 | |
openshift-network-diagnostics |
daemonset-controller |
network-check-target |
SuccessfulCreate |
Created pod: network-check-target-7n4vr | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c407846948c8ff2cd441089c6a57822cfe1a07a537dff1f9d7ebf2db2d1cdee" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c407846948c8ff2cd441089c6a57822cfe1a07a537dff1f9d7ebf2db2d1cdee" in 4.336168727s | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:238c03849ea26995bfde9657c7628ae0e31fe35f4be068d7326b65acb1f55d01" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Created |
Created container bond-cni-plugin | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Started |
Started container bond-cni-plugin | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-8wrxz |
Started |
Started container ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-8wrxz |
Created |
Created container ovn-acl-logging | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-8wrxz |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78078017998532005d730896ed4ca6f212ca9ac5713d65ca724eb9468fd8f7fb" in 10.608950984s | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-8wrxz |
Created |
Created container ovn-controller | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-8wrxz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78078017998532005d730896ed4ca6f212ca9ac5713d65ca724eb9468fd8f7fb" already present on machine | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-8wrxz |
Started |
Started container ovn-acl-logging | |
openshift-ovn-kubernetes |
kubelet |
ovnkube-node-8wrxz |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Created |
Created container routeoverride-cni | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:238c03849ea26995bfde9657c7628ae0e31fe35f4be068d7326b65acb1f55d01" in 4.458654635s | |
openshift-multus |
kubelet |
multus-additional-cni-plugins-pcprp |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365" | |
| openshift-multus | kubelet | multus-additional-cni-plugins-pcprp | Started | Started container routeoverride-cni |
| openshift-ovn-kubernetes | kubelet | ovnkube-node-8wrxz | Created | Created container kube-rbac-proxy-ovn-metrics |
| openshift-ovn-kubernetes | kubelet | ovnkube-node-8wrxz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| openshift-ovn-kubernetes | kubelet | ovnkube-node-8wrxz | Started | Started container kube-rbac-proxy-ovn-metrics |
| openshift-ovn-kubernetes | kubelet | ovnkube-node-8wrxz | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78078017998532005d730896ed4ca6f212ca9ac5713d65ca724eb9468fd8f7fb" already present on machine |
| openshift-ovn-kubernetes | kubelet | ovnkube-node-8wrxz | Created | Created container kube-rbac-proxy |
| openshift-ovn-kubernetes | kubelet | ovnkube-node-8wrxz | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" in 5.30084625s |
| openshift-ovn-kubernetes | kubelet | ovnkube-node-8wrxz | Started | Started container kube-rbac-proxy |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Started | Started container northd |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Created | Created container northd |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78078017998532005d730896ed4ca6f212ca9ac5713d65ca724eb9468fd8f7fb" in 18.513657506s |
| openshift-ovn-kubernetes | kubelet | ovnkube-node-8wrxz | Created | Created container ovnkube-node |
| openshift-ovn-kubernetes | kubelet | ovnkube-node-8wrxz | Started | Started container ovnkube-node |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78078017998532005d730896ed4ca6f212ca9ac5713d65ca724eb9468fd8f7fb" already present on machine |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Created | Created container nbdb |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Started | Started container nbdb |
| openshift-multus | kubelet | multus-additional-cni-plugins-pcprp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365" in 6.144933746s |
| openshift-multus | kubelet | multus-additional-cni-plugins-pcprp | Created | Created container whereabouts-cni-bincopy |
| openshift-multus | kubelet | multus-additional-cni-plugins-pcprp | Started | Started container whereabouts-cni-bincopy |
| openshift-multus | kubelet | multus-additional-cni-plugins-pcprp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d74f4833b6bb911b57cc08a170a7242733bb5d09ac9480399395a1970e21365" already present on machine |
| openshift-multus | kubelet | multus-additional-cni-plugins-pcprp | Started | Started container whereabouts-cni |
| openshift-multus | kubelet | multus-additional-cni-plugins-pcprp | Created | Created container whereabouts-cni |
| openshift-multus | kubelet | multus-additional-cni-plugins-pcprp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" already present on machine |
| openshift-multus | kubelet | multus-additional-cni-plugins-pcprp | Created | Created container kube-multus-additional-cni-plugins |
(x7) | openshift-multus | kubelet | network-metrics-daemon-d5jcm | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : object "openshift-multus"/"metrics-daemon-secret" not registered |
(x18) | openshift-multus | kubelet | network-metrics-daemon-d5jcm | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
(x7) | openshift-network-diagnostics | kubelet | network-check-target-7n4vr | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-wnf78" : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] |
(x18) | openshift-network-diagnostics | kubelet | network-check-target-7n4vr | NetworkNotReady | network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started? |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Started | Started container kube-rbac-proxy |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78078017998532005d730896ed4ca6f212ca9ac5713d65ca724eb9468fd8f7fb" already present on machine |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Created | Created container sbdb |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Started | Started container sbdb |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Created | Created container kube-rbac-proxy |
(x7) | openshift-ovn-kubernetes | kubelet | ovnkube-node-8wrxz | Unhealthy | Readiness probe failed: |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78078017998532005d730896ed4ca6f212ca9ac5713d65ca724eb9468fd8f7fb" already present on machine |
| openshift-ovn-kubernetes | controlplane | ovn-kubernetes-master | LeaderElection | master1 became leader |
| openshift-ovn-kubernetes | controlplane | ovn-kubernetes-master | LeaderElection | master1 became leader |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Started | Started container ovn-dbchecker |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Created | Created container ovnkube-master |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Started | Started container ovnkube-master |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:78078017998532005d730896ed4ca6f212ca9ac5713d65ca724eb9468fd8f7fb" already present on machine |
| openshift-ovn-kubernetes | kubelet | ovnkube-master-bpbn7 | Created | Created container ovn-dbchecker |
(x2) | openshift-multus | controlplane | network-metrics-daemon-d5jcm | ErrorAddingLogicalPort | addLogicalPort failed for openshift-multus/network-metrics-daemon-d5jcm: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "master1" |
(x2) | default | controlplane | master1 | ErrorReconcilingNode | [k8s.ovn.org/node-chassis-id annotation not found for node master1, macAddress annotation not found for node "master1" , k8s.ovn.org/l3-gateway-config annotation not found for node "master1"] |
(x2) | openshift-network-diagnostics | controlplane | network-check-target-7n4vr | ErrorAddingLogicalPort | addLogicalPort failed for openshift-network-diagnostics/network-check-target-7n4vr: unable to parse node L3 gw annotation: k8s.ovn.org/l3-gateway-config annotation not found for node "master1" |
| openshift-cluster-version | kubelet | cluster-version-operator-6f997f7ccd-p8rgc | FailedMount | Unable to attach or mount volumes: unmounted volumes=[serving-cert], unattached volumes=[serving-cert service-ca kube-api-access etc-ssl-certs etc-cvo-updatepayloads]: timed out waiting for the condition |
| openshift-marketplace | default-scheduler | marketplace-operator-75746f848d-v4htq | Scheduled | Successfully assigned openshift-marketplace/marketplace-operator-75746f848d-v4htq to master1 |
| openshift-service-ca-operator | default-scheduler | service-ca-operator-76d7c5458c-wgsgp | Scheduled | Successfully assigned openshift-service-ca-operator/service-ca-operator-76d7c5458c-wgsgp to master1 |
| openshift-kube-storage-version-migrator-operator | default-scheduler | kube-storage-version-migrator-operator-fbcb7858d-95djl | Scheduled | Successfully assigned openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-fbcb7858d-95djl to master1 |
| openshift-dns-operator | default-scheduler | dns-operator-6cb7cc46cf-njb62 | Scheduled | Successfully assigned openshift-dns-operator/dns-operator-6cb7cc46cf-njb62 to master1 |
| openshift-multus | default-scheduler | multus-admission-controller-7b9c64854b-bwsvm | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-7b9c64854b-bwsvm to master1 |
| openshift-authentication-operator | default-scheduler | authentication-operator-68df59f464-ffd6s | Scheduled | Successfully assigned openshift-authentication-operator/authentication-operator-68df59f464-ffd6s to master1 |
| openshift-kube-controller-manager-operator | default-scheduler | kube-controller-manager-operator-8bb498f8b-dsrn8 | Scheduled | Successfully assigned openshift-kube-controller-manager-operator/kube-controller-manager-operator-8bb498f8b-dsrn8 to master1 |
| openshift-controller-manager-operator | default-scheduler | openshift-controller-manager-operator-967d9d7c4-5t48s | Scheduled | Successfully assigned openshift-controller-manager-operator/openshift-controller-manager-operator-967d9d7c4-5t48s to master1 |
| openshift-apiserver-operator | default-scheduler | openshift-apiserver-operator-5c6c84d584-4z2d9 | Scheduled | Successfully assigned openshift-apiserver-operator/openshift-apiserver-operator-5c6c84d584-4z2d9 to master1 |
| openshift-kube-scheduler-operator | default-scheduler | openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv | Scheduled | Successfully assigned openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv to master1 |
| openshift-machine-api | default-scheduler | control-plane-machine-set-operator-749d766b67-gc5pf | Scheduled | Successfully assigned openshift-machine-api/control-plane-machine-set-operator-749d766b67-gc5pf to master1 |
| openshift-cluster-machine-approver | default-scheduler | machine-approver-879c4799-ds2g4 | Scheduled | Successfully assigned openshift-cluster-machine-approver/machine-approver-879c4799-ds2g4 to master1 |
| openshift-kube-storage-version-migrator-operator | multus | kube-storage-version-migrator-operator-fbcb7858d-95djl | AddedInterface | Add eth0 [10.128.0.14/23] from ovn-kubernetes |
| openshift-service-ca-operator | multus | service-ca-operator-76d7c5458c-wgsgp | AddedInterface | Add eth0 [10.128.0.15/23] from ovn-kubernetes |
| openshift-machine-api | kubelet | control-plane-machine-set-operator-749d766b67-gc5pf | FailedMount | MountVolume.SetUp failed for volume "kube-api-access-7vrpt" : failed to sync configmap cache: timed out waiting for the condition |
| openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-8bb498f8b-dsrn8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" |
| openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fbcb7858d-95djl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:773fe01f949872eaae7daee9bac53f06ca4d375e3f8d6207a9a3eccaa4ab9f98" |
| openshift-service-ca-operator | kubelet | service-ca-operator-76d7c5458c-wgsgp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:14794ac4b4b5e1bb2728d253b939578a03730cf26ba5cf795c8e2d26b9737dd6" |
| openshift-apiserver-operator | kubelet | openshift-apiserver-operator-5c6c84d584-4z2d9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:87666cc451e16c135276f6405cd7d0c2ce76fd5f19f02a9654c23bb9651c54b3" |
| openshift-authentication-operator | multus | authentication-operator-68df59f464-ffd6s | AddedInterface | Add eth0 [10.128.0.10/23] from ovn-kubernetes |
| openshift-apiserver-operator | multus | openshift-apiserver-operator-5c6c84d584-4z2d9 | AddedInterface | Add eth0 [10.128.0.12/23] from ovn-kubernetes |
| openshift-kube-scheduler-operator | kubelet | openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" |
| openshift-authentication-operator | kubelet | authentication-operator-68df59f464-ffd6s | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a1252ab4a94ef96c90c19a926c6c10b1c73186377f408414c8a3aa1949a0a75" |
| openshift-controller-manager-operator | multus | openshift-controller-manager-operator-967d9d7c4-5t48s | AddedInterface | Add eth0 [10.128.0.9/23] from ovn-kubernetes |
| openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-967d9d7c4-5t48s | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53c526dc7766f65b2de93215a5f609fdc2f790717c07d15ffcbf5d4ac79d002e" |
| openshift-kube-controller-manager-operator | multus | kube-controller-manager-operator-8bb498f8b-dsrn8 | AddedInterface | Add eth0 [10.128.0.13/23] from ovn-kubernetes |
| openshift-kube-scheduler-operator | multus | openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv | AddedInterface | Add eth0 [10.128.0.7/23] from ovn-kubernetes |
| openshift-machine-api | kubelet | control-plane-machine-set-operator-749d766b67-gc5pf | FailedMount | MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
(x9) | openshift-cluster-version | kubelet | cluster-version-operator-6f997f7ccd-p8rgc | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "cluster-version-operator-serving-cert" not found |
(x2) | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorVersionChanged | clusteroperator/kube-storage-version-migrator version "operator" changed from "" to "4.12.2" |
| openshift-kube-storage-version-migrator-operator | kubelet | kube-storage-version-migrator-operator-fbcb7858d-95djl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:773fe01f949872eaae7daee9bac53f06ca4d375e3f8d6207a9a3eccaa4ab9f98" in 4.889133503s |
| openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-fbcb7858d-95djl_0cedc599-4ae7-4092-981e-f3c239c99f0a became leader |
| openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-loggingsyncer | kube-storage-version-migrator-operator | FastControllerResync | Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from Unknown to True ("KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("KubeStorageVersionMigratorAvailable: Waiting for Deployment") |
(x2) | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.versions changed from [] to [{"operator" "4.12.2"}] |
| openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-static-conditions-controller-staticconditionscontroller | kube-storage-version-migrator-operator | FastControllerResync | Controller "StaticConditionsController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-staticresourcecontroller | kube-storage-version-migrator-operator | NamespaceCreated | Created Namespace/openshift-kube-storage-version-migrator because it was missing |
| openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-staticresourcecontroller | kube-storage-version-migrator-operator | ServiceAccountCreated | Created ServiceAccount/kube-storage-version-migrator-sa -n openshift-kube-storage-version-migrator because it was missing |
| openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Upgradeable changed from Unknown to True ("All is well") |
| openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigratorstaticresources-staticresourcecontroller | kube-storage-version-migrator-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/storage-version-migration-migrator because it was missing |
| openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-kubestorageversionmigrator-deployment-controller--kubestorageversionmigrator | kube-storage-version-migrator-operator | DeploymentCreated | Created Deployment.apps/migrator -n openshift-kube-storage-version-migrator because it was missing |
| openshift-kube-storage-version-migrator | deployment-controller | migrator | ScalingReplicaSet | Scaled up replica set migrator-5c54d8d69d to 1 |
| openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Degraded changed from Unknown to False ("All is well") |
| openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing message changed from "KubeStorageVersionMigratorProgressing: Waiting for Deployment to act on changes" to "KubeStorageVersionMigratorProgressing: Waiting for Deployment to deploy pods" |
| openshift-controller-manager-operator | kubelet | openshift-controller-manager-operator-967d9d7c4-5t48s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53c526dc7766f65b2de93215a5f609fdc2f790717c07d15ffcbf5d4ac79d002e" in 6.587436365s |
| openshift-authentication-operator | kubelet | authentication-operator-68df59f464-ffd6s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a1252ab4a94ef96c90c19a926c6c10b1c73186377f408414c8a3aa1949a0a75" in 6.51968117s |
| openshift-kube-controller-manager-operator | kubelet | kube-controller-manager-operator-8bb498f8b-dsrn8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" in 6.474184072s |
| openshift-service-ca-operator | kubelet | service-ca-operator-76d7c5458c-wgsgp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:14794ac4b4b5e1bb2728d253b939578a03730cf26ba5cf795c8e2d26b9737dd6" in 6.513927268s |
| openshift-apiserver-operator | kubelet | openshift-apiserver-operator-5c6c84d584-4z2d9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:87666cc451e16c135276f6405cd7d0c2ce76fd5f19f02a9654c23bb9651c54b3" in 6.526283958s |
| openshift-authentication-operator | oauth-apiserver-audit-policy-controller-auditpolicycontroller | authentication-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]any{ + "build": map[string]any{ + "buildDefaults": map[string]any{"resources": map[string]any{}}, + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:caf948d824"...), + }, + }, + "deployer": map[string]any{ + "imageTemplateFormat": map[string]any{ + "format": string("quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a230169514"...), + }, + }, + "featureGates": []any{}, + "ingress": map[string]any{"ingressIPNetworkCIDR": string("")}, } |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| openshift-route-controller-manager | default-scheduler | route-controller-manager-6cbc757bbf-85bgq | Scheduled | Successfully assigned openshift-route-controller-manager/route-controller-manager-6cbc757bbf-85bgq to master1 |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ServiceCreated | Created Service/route-controller-manager -n openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing |
(x5) | openshift-route-controller-manager | replicaset-controller | route-controller-manager-6cbc757bbf | FailedCreate | Error creating: pods "route-controller-manager-6cbc757bbf-" is forbidden: error looking up service account openshift-route-controller-manager/route-controller-manager-sa: serviceaccount "route-controller-manager-sa" not found |
| openshift-route-controller-manager | replicaset-controller | route-controller-manager-6cbc757bbf | SuccessfulCreate | Created pod: route-controller-manager-6cbc757bbf-85bgq |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-76d7c5458c-wgsgp_43618c26-3809-4cb1-9cb4-ac39dd5e2731 became leader |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator-lock | LeaderElection | service-ca-operator-76d7c5458c-wgsgp_43618c26-3809-4cb1-9cb4-ac39dd5e2731 became leader |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-route-controller-manager because it was missing |
| openshift-service-ca-operator | service-ca-operator-loggingsyncer | service-ca-operator | FastControllerResync | Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing |
| openshift-authentication-operator | cluster-authentication-operator-oauthserverworkloadcontroller | authentication-operator | FastControllerResync | Controller "OAuthServerWorkloadController" resync interval is set to 0s which might lead to client request throttling |
| openshift-authentication-operator | oauth-apiserver-oauthapiservercontrollerworkloadcontroller | authentication-operator | FastControllerResync | Controller "OAuthAPIServerControllerWorkloadController" resync interval is set to 0s which might lead to client request throttling |
| openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | FastControllerResync | Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling |
| openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | authentication-operator | FastControllerResync | Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling |
| openshift-authentication-operator | oauth-apiserver-secret-revision-prune-controller-secretrevisionprunecontroller | authentication-operator | FastControllerResync | Controller "SecretRevisionPruneController" resync interval is set to 0s which might lead to client request throttling |
| openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-6cbc757bbf to 1 |
| openshift-authentication-operator | cluster-authentication-operator-unsupportedconfigoverridescontroller | authentication-operator | FastControllerResync | Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling |
| openshift-authentication-operator | cluster-authentication-operator-loggingsyncer | authentication-operator | FastControllerResync | Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "operator" changed from "" to "4.12.2" |
| openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-68df59f464-ffd6s_b75a9ed7-d863-490e-a17a-b563cca4171a became leader |
| openshift-authentication-operator | cluster-authentication-operator | cluster-authentication-operator-lock | LeaderElection | authentication-operator-68df59f464-ffd6s_b75a9ed7-d863-490e-a17a-b563cca4171a became leader |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded set to False ("APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: "),Progressing set to False ("All is well"),Available set to False (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftapiservers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-apiserver-operator"} {"" "namespaces" "" "openshift-apiserver"} {"" "namespaces" "" "openshift-etcd-operator"} {"" "endpoints" "openshift-etcd" "host-etcd-2"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-apiserver" ""} {"apiregistration.k8s.io" "apiservices" "" "v1.apps.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.authorization.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.build.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.image.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.project.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.quota.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.route.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.security.openshift.io"} {"apiregistration.k8s.io" "apiservices" "" "v1.template.openshift.io"}],status.versions changed from [] to [{"operator" "4.12.2"}] |
| openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"audit\" not found" |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "operator" changed from "" to "4.12.2" |
| openshift-apiserver-operator | openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | openshift-apiserver-operator | FastControllerResync | Controller "ConnectivityCheckController" resync interval is set to 0s which might lead to client request throttling |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n kube-system because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-creating-openshift-controller-manager -n openshift-infra because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/openshift-controller-manager-sa -n openshift-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ServiceCreated | Created Service/controller-manager -n openshift-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/prometheus-k8s -n openshift-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-loggingsyncer | openshift-apiserver-operator | FastControllerResync | Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling |
| openshift-apiserver-operator | openshift-apiserver-operator-unsupportedconfigoverridescontroller | openshift-apiserver-operator | FastControllerResync | Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling |
| openshift-apiserver-operator | openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller | openshift-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| openshift-apiserver-operator | openshift-apiserver-operator-secret-revision-prune-controller-secretrevisionprunecontroller | openshift-apiserver-operator | FastControllerResync | Controller "SecretRevisionPruneController" resync interval is set to 0s which might lead to client request throttling |
| openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-5dfb447f4c to 1 |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ServiceAccountCreated | Created ServiceAccount/route-controller-manager-sa -n openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | NamespaceUpdated | Updated Namespace/openshift-route-controller-manager because it changed |
| openshift-service-ca-operator | service-ca-operator-service-ca-operator-servicecaoperator | service-ca-operator | FastControllerResync | Controller "ServiceCAOperator" resync interval is set to 0s which might lead to client request throttling |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-route-controller-manager -n openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
(x5) | openshift-multus | kubelet | multus-admission-controller-7b9c64854b-bwsvm | FailedMount | MountVolume.SetUp failed for volume "webhook-certs" : secret "multus-admission-controller-secret" not found |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-controller-manager-pod\" not found" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"" "nodes" "" ""} {"certificates.k8s.io" "certificatesigningrequests" "" ""}] to [{"operator.openshift.io" "kubecontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-controller-manager"} {"" "namespaces" "" "openshift-kube-controller-manager-operator"} {"" "namespaces" "" "kube-system"} {"certificates.k8s.io" "certificatesigningrequests" "" ""} {"" "nodes" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.12.2"}] |
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "raw-internal" changed from "" to "4.12.2" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator | CABundleUpdateRequired | "csr-controller-signer-ca" in "openshift-kube-controller-manager-operator" requires a new cert |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | NamespaceUpdated | Updated Namespace/openshift-controller-manager because it changed |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:leader-locking-openshift-controller-manager -n openshift-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:tokenreview-openshift-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller -n openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller -n openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:ingress-to-route-controller because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreated | Created Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded changed from Unknown to False ("All is well") |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentCreated | Created Deployment.apps/controller-manager -n openshift-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "openshiftcontrollermanagers" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-controller-manager-operator"} {"" "namespaces" "" "openshift-controller-manager"} {"" "namespaces" "" "openshift-route-controller-manager"}] |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager because it was missing |
| openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "servicecas" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-service-ca-operator"} {"" "namespaces" "" "openshift-service-ca"}] |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller-installercontroller | kube-controller-manager-operator | FastControllerResync | Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-guardcontroller | kube-controller-manager-operator | FastControllerResync | Controller "GuardController" resync interval is set to 0s which might lead to client request throttling |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:sa-creating-route-controller-manager -n openshift-infra because it was missing |
(x5) | openshift-dns-operator | kubelet | dns-operator-6cb7cc46cf-njb62 | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "metrics-tls" not found |
| openshift-apiserver-operator | openshift-apiserver-operator-revisioncontroller | openshift-apiserver-operator | FastControllerResync | Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling |
| openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserverworkloadcontroller | openshift-apiserver-operator | FastControllerResync | Controller "OpenShiftAPIServerWorkloadController" resync interval is set to 0s which might lead to client request throttling |
| openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | FastControllerResync | Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling |
| openshift-controller-manager-operator | openshift-controller-manager-operator-config-observer-configobserver | openshift-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated featureGates to |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-route-controller-manager because it was missing |
| openshift-controller-manager | default-scheduler | controller-manager-5dfb447f4c-qhfxd | Scheduled | Successfully assigned openshift-controller-manager/controller-manager-5dfb447f4c-qhfxd to master1 |
| openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-5c6c84d584-4z2d9_e0d85c6c-0bea-4f32-893f-3f7ff584157e became leader |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/openshift-global-ca -n openshift-controller-manager because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator | openshift-apiserver-operator-lock | LeaderElection | openshift-apiserver-operator-5c6c84d584-4z2d9_e0d85c6c-0bea-4f32-893f-3f7ff584157e became leader |
(x5) | openshift-cluster-machine-approver | kubelet | machine-approver-879c4799-ds2g4 | FailedMount | MountVolume.SetUp failed for volume "machine-approver-tls" : secret "machine-approver-tls" not found |
(x6) | openshift-controller-manager | replicaset-controller | controller-manager-5dfb447f4c | FailedCreate | Error creating: pods "controller-manager-5dfb447f4c-" is forbidden: error looking up service account openshift-controller-manager/openshift-controller-manager-sa: serviceaccount "openshift-controller-manager-sa" not found |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/openshift-service-ca -n openshift-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-controller-manager because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-967d9d7c4-5t48s_3914474a-0fd8-40e5-b50a-c49d04758625 became leader |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-loggingsyncer | kube-controller-manager-operator | FastControllerResync | Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-unsupportedconfigoverridescontroller | kube-controller-manager-operator | FastControllerResync | Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-nodecontroller | kube-controller-manager-operator | FastControllerResync | Controller "NodeController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-prunecontroller | kube-controller-manager-operator | FastControllerResync | Controller "PruneController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | FastControllerResync | Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-967d9d7c4-5t48s_3914474a-0fd8-40e5-b50a-c49d04758625 became leader |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-8bb498f8b-dsrn8_db539722-b0e4-4352-a685-320d7aa83c3a became leader |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator | kube-controller-manager-operator-lock | LeaderElection | kube-controller-manager-operator-8bb498f8b-dsrn8_db539722-b0e4-4352-a685-320d7aa83c3a became leader |
| openshift-controller-manager | replicaset-controller | controller-manager-5dfb447f4c | SuccessfulCreate | Created pod: controller-manager-5dfb447f4c-qhfxd |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from Unknown to False ("All is well") |
| openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-5dfb447f4c to 0 from 1 |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]interface{}{ + "extendedArguments": map[string]interface{}{ + "cluster-cidr": []interface{}{string("10.128.0.0/14")}, + "cluster-name": []interface{}{string("test-cluster-2m245")}, + "feature-gates": []interface{}{ + string("APIPriorityAndFairness=true"), + string("RotateKubeletServerCertificate=true"), + string("DownwardAPIHugePages=true"), string("CSIMigrationAzureFile=false"), + string("CSIMigrationvSphere=false"), + }, + "service-cluster-ip-range": []interface{}{string("172.30.0.0/16")}, + }, + "servingInfo": map[string]interface{}{ + "cipherSuites": []interface{}{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from Unknown to False ("All is well") |
| openshift-route-controller-manager | replicaset-controller | route-controller-manager-6cbc757bbf | SuccessfulDelete | Deleted pod: route-controller-manager-6cbc757bbf-85bgq |
| openshift-route-controller-manager | default-scheduler | route-controller-manager-779794f74-q82d8 | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-779794f74 to 1 from 0 |
| openshift-route-controller-manager | replicaset-controller | route-controller-manager-779794f74 | SuccessfulCreate | Created pod: route-controller-manager-779794f74-q82d8 |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator | ServiceAccountCreated | Created ServiceAccount/service-ca -n openshift-service-ca because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.config.yaml |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:controller:service-ca -n openshift-service-ca because it was missing |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| openshift-controller-manager | default-scheduler | controller-manager-78ddfb869c-hfkrq | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| openshift-controller-manager | replicaset-controller | controller-manager-78ddfb869c | SuccessfulCreate | Created pod: controller-manager-78ddfb869c-hfkrq |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:service-ca because it was missing |
| openshift-service-ca-operator | service-ca-operator | service-ca-operator | NamespaceUpdated | Updated Namespace/openshift-service-ca because it changed |
| openshift-service-ca-operator | service-ca-operator-status-controller-statussyncer_service-ca | service-ca-operator | OperatorStatusChanged | Status for clusteroperator/service-ca changed: Degraded changed from Unknown to False ("All is well") |
| openshift-controller-manager | replicaset-controller | controller-manager-5dfb447f4c | SuccessfulDelete | Deleted pod: controller-manager-5dfb447f4c-qhfxd |
| openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-6cbc757bbf to 0 from 1 |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Upgradeable changed from Unknown to True ("All is well") |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded set to False ("All is well"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "authentications" "" "cluster"} {"config.openshift.io" "authentications" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"route.openshift.io" "routes" "openshift-authentication" "oauth-openshift"} {"" "services" "openshift-authentication" "oauth-openshift"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-authentication"} {"" "namespaces" "" "openshift-authentication-operator"} {"" "namespaces" "" "openshift-ingress"} {"" "namespaces" "" "openshift-oauth-apiserver"}],status.versions changed from [] to [{"operator" "4.12.2"}] |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.config.yaml,data.openshift-controller-manager.openshift-global-ca.configmap |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:image-trigger-controller because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-openshiftcontrollermanagerstaticresources-staticresourcecontroller | openshift-controller-manager-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-controller-manager:update-buildconfig-status because it was missing |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Upgradeable changed from Unknown to True ("All is well") |
| openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-78ddfb869c to 1 from 0 |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Upgradeable changed from Unknown to True ("All is well") |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available changed from Unknown to False ("APIServicesAvailable: endpoints \"api\" not found") |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: endpoints \"api\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "" to "APIServicesAvailable: endpoints \"api\" not found" |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/revision-status-1 -n openshift-kube-controller-manager because it was missing | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
SecretCreated |
Created Secret/signing-key -n openshift-service-ca because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/csr-controller-signer-ca -n openshift-kube-controller-manager-operator because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing changed from Unknown to True ("Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2."),Available changed from Unknown to False ("Available: no pods available on any node."),Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
TargetUpdateRequired |
"csr-signer" in "openshift-kube-controller-manager-operator" requires a new target cert/key pair: missing notAfter | |
openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" in 9.415399709s | |
(x3) | openshift-controller-manager |
kubelet |
controller-manager-5dfb447f4c-qhfxd |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
DeploymentCreated |
Created Deployment.apps/service-ca -n openshift-service-ca because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]interface{}{ + "routingConfig": map[string]interface{}{"subdomain": string("apps.test-cluster.redhat.com")}, + "servingInfo": map[string]interface{}{ + "cipherSuites": []interface{}{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + }, + "storageConfig": map[string]interface{}{"urls": []interface{}{string("https://192.168.126.10:2379")}},   } | |
openshift-service-ca-operator |
service-ca-operator-resource-sync-controller-resourcesynccontroller |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/service-ca -n openshift-config-managed because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded changed from Unknown to False ("All is well") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"" "namespaces" "" "openshift-kube-scheduler"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-scheduler" ""}] to [{"operator.openshift.io" "kubeschedulers" "" "cluster"} {"config.openshift.io" "schedulers" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-scheduler"} {"" "namespaces" "" "openshift-kube-scheduler-operator"} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""}],status.versions changed from [] to [{"raw-internal" "4.12.2"}] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObservedConfigChanged |
Writing updated observed config:   map[string]interface{}{ + "servingInfo": map[string]interface{}{ + "cipherSuites": []interface{}{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + },   } | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-config-observer-configobserver |
openshift-kube-scheduler-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-nodecontroller |
openshift-kube-scheduler-operator |
MasterNodeObserved |
Observed new master node master1 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorVersionChanged |
clusteroperator/kube-scheduler version "raw-internal" changed from "" to "4.12.2" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller-installercontroller |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-guardcontroller |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "GuardController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-loggingsyncer |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-unsupportedconfigoverridescontroller |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "All is well" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"kube-scheduler-pod\" not found" | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Progressing changed from Unknown to True ("Progressing: \nProgressing: service-ca does not have available replicas"),Available changed from Unknown to True ("All is well"),Upgradeable changed from Unknown to True ("All is well") | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator |
ConfigMapCreated |
Created ConfigMap/signing-cabundle -n openshift-service-ca because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
TargetConfigDeleted |
Deleted target configmap openshift-config-managed/csr-controller-ca because source config does not exist | |
openshift-service-ca |
default-scheduler |
service-ca-5d96446959-k8dqm |
Scheduled |
Successfully assigned openshift-service-ca/service-ca-5d96446959-k8dqm to master1 | |
openshift-service-ca |
multus |
service-ca-5d96446959-k8dqm |
AddedInterface |
Add eth0 [10.128.0.18/23] from ovn-kubernetes | |
openshift-service-ca |
kubelet |
service-ca-5d96446959-k8dqm |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:14794ac4b4b5e1bb2728d253b939578a03730cf26ba5cf795c8e2d26b9737dd6" already present on machine | |
openshift-service-ca |
kubelet |
service-ca-5d96446959-k8dqm |
Created |
Created container service-ca-controller | |
openshift-service-ca |
kubelet |
service-ca-5d96446959-k8dqm |
Started |
Started container service-ca-controller | |
openshift-service-ca |
replicaset-controller |
service-ca-5d96446959 |
SuccessfulCreate |
Created pod: service-ca-5d96446959-k8dqm | |
openshift-service-ca |
deployment-controller |
service-ca |
ScalingReplicaSet |
Scaled up replica set service-ca-5d96446959 to 1 | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-lock |
LeaderElection |
openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv_172c88cd-2f19-4461-b104-7c90e83df196 became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-lock |
LeaderElection |
openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv_172c88cd-2f19-4461-b104-7c90e83df196 became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 nodes are at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0") | |
(x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-nodecontroller |
kube-controller-manager-operator |
MasterNodeObserved |
Observed new master node master1 |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
NamespaceUpdated |
Updated Namespace/openshift-kube-controller-manager because it changed | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-prunecontroller |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "PruneController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-nodecontroller |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "NodeController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-controller-manager-installer because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-backingresourcecontroller-backingresourcecontroller |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator |
SecretCreated |
Created Secret/csr-signer -n openshift-kube-controller-manager-operator because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.126.10:2379 | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-apiserver-operator |
openshift-apiserver-operator-config-observer-configobserver |
openshift-apiserver-operator |
RoutingConfigSubdomainChanged |
Domain changed from "" to "apps.test-cluster.redhat.com" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
RevisionTriggered |
new revision 1 triggered by "configmap \"audit\" not found" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: openshiftcontrollermanagers.operator.openshift.io/cluster: observed generation is 0, desired generation is 2." to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/revision-status-1 -n openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: " to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nRevisionControllerDegraded: configmaps \"audit\" not found" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]interface{}(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"corsAllowedOrigins\": []interface{}{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\"oauthConfig\": map[string]interface{}{\n+\u00a0\t\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\t\"loginURL\": string(\"https://api.test-cluster.redhat.com:6443\"),\n+\u00a0\t\t\t\"templates\": map[string]interface{}{\n+\u00a0\t\t\t\t\"error\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"login\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t\t\"providerSelection\": string(\"/var/config/system/secrets/v4-0-\"...),\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tokenConfig\": map[string]interface{}{\n+\u00a0\t\t\t\t\"accessTokenMaxAgeSeconds\": float64(86400),\n+\u00a0\t\t\t\t\"authorizeTokenMaxAgeSeconds\": float64(300),\n+\u00a0\t\t\t},\n+\u00a0\t\t},\n+\u00a0\t\t\"serverArguments\": map[string]interface{}{\n+\u00a0\t\t\t\"audit-log-format\": []interface{}{string(\"json\")},\n+\u00a0\t\t\t\"audit-log-maxbackup\": []interface{}{string(\"10\")},\n+\u00a0\t\t\t\"audit-log-maxsize\": []interface{}{string(\"100\")},\n+\u00a0\t\t\t\"audit-log-path\": []interface{}{string(\"/var/log/oauth-server/audit.log\")},\n+\u00a0\t\t\t\"audit-policy-file\": []interface{}{string(\"/var/run/configmaps/audit/audit.\"...)},\n+\u00a0\t\t},\n+\u00a0\t\t\"servingInfo\": map[string]interface{}{\n+\u00a0\t\t\t\"cipherSuites\": []interface{}{\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_S\"...),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM\"...),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_S\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t\t\"volumesToMount\": map[string]interface{}{\"identityProviders\": string(\"{}\")},\n+\u00a0\t},\n\u00a0\u00a0)\n" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTokenConfig |
accessTokenMaxAgeSeconds changed from 0 to 86400 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAPIServerURL |
loginURL changed from "" to https://api.test-cluster.redhat.com:6443 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTemplates |
templates changed to map["error":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/errors.html" "login":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/login.html" "providerSelection":"/var/config/system/secrets/v4-0-config-system-ocp-branding-template/providers.html"] | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAuditProfile |
AuditProfile changed from '<nil>' to 'map[audit-log-format:[json] audit-log-maxbackup:[10] audit-log-maxsize:[100] audit-log-path:[/var/log/oauth-server/audit.log] audit-policy-file:[/var/run/configmaps/audit/audit.yaml]]' | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: Progressing changed from True to False ("Progressing: All service-ca-operator deployments updated") | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorStatusChanged |
Status for clusteroperator/service-ca changed: status.versions changed from [] to [{"operator" "4.12.2"}] | |
openshift-service-ca-operator |
service-ca-operator-status-controller-statussyncer_service-ca |
service-ca-operator |
OperatorVersionChanged |
clusteroperator/service-ca version "operator" changed from "" to "4.12.2" | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
RevisionTriggered |
new revision 2 triggered by "configmap \"audit\" not found" | |
openshift-apiserver-operator |
openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca -n openshift-apiserver because it was missing | |
(x13) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
RequiredInstallerResourcesMissing |
secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0 |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/revision-status-1 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" | |
openshift-service-ca |
service-ca-controller-apiservicecabundleinjector |
service-ca |
FastControllerResync |
Controller "APIServiceCABundleInjector" resync interval is set to 0s which might lead to client request throttling | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveStorageUpdated |
Updated storage urls to https://192.168.126.10:2379 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveAPIAudiences |
service account issuer changed from "" to https://kubernetes.default.svc | |
openshift-service-ca |
service-ca-controller-service-serving-cert-controller-serviceservingcertcontroller |
service-ca |
FastControllerResync |
Controller "ServiceServingCertController" resync interval is set to 0s which might lead to client request throttling | |
(x4) | openshift-route-controller-manager |
kubelet |
route-controller-manager-6cbc757bbf-85bgq |
FailedMount |
MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
openshift-service-ca |
service-ca-controller-crdcabundleinjector |
service-ca |
FastControllerResync |
Controller "CRDCABundleInjector" resync interval is set to 0s which might lead to client request throttling | |
openshift-service-ca |
service-ca-controller-mutatingwebhookcabundleinjector |
service-ca |
FastControllerResync |
Controller "MutatingWebhookCABundleInjector" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
minTLSVersion changed to VersionTLS12 | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveTLSSecurityProfile |
cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] | |
openshift-service-ca |
service-ca-controller-service-serving-cert-update-controller-serviceservingcertupdatecontroller |
service-ca |
FastControllerResync |
Controller "ServiceServingCertUpdateController" resync interval is set to 0s which might lead to client request throttling | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthAPIServer") of observed config: "\u00a0\u00a0map[string]interface{}(\n-\u00a0\tnil,\n+\u00a0\t{\n+\u00a0\t\t\"apiServerArguments\": map[string]interface{}{\n+\u00a0\t\t\t\"api-audiences\": []interface{}{string(\"https://kubernetes.default.svc\")},\n+\u00a0\t\t\t\"cors-allowed-origins\": []interface{}{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n+\u00a0\t\t\t\"etcd-servers\": []interface{}{string(\"https://192.168.126.10:2379\")},\n+\u00a0\t\t\t\"tls-cipher-suites\": []interface{}{\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM\"...),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_S\"...),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM\"...),\n+\u00a0\t\t\t\tstring(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_S\"...), ...,\n+\u00a0\t\t\t},\n+\u00a0\t\t\t\"tls-min-version\": string(\"VersionTLS12\"),\n+\u00a0\t\t},\n+\u00a0\t},\n\u00a0\u00a0)\n" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-controller-manager -n kube-system because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-election-lock-cluster-policy-controller -n openshift-kube-controller-manager because it was missing | |
openshift-service-ca |
service-ca-controller-validatingwebhookcabundleinjector |
service-ca |
FastControllerResync |
Controller "ValidatingWebhookCABundleInjector" resync interval is set to 0s which might lead to client request throttling | |
openshift-service-ca |
service-ca-controller-legacyvulnerableconfigmapcabundleinjector |
service-ca |
FastControllerResync |
Controller "LegacyVulnerableConfigMapCABundleInjector" resync interval is set to 0s which might lead to client request throttling | |
openshift-service-ca |
service-ca-controller |
service-ca-controller-lock |
LeaderElection |
service-ca-5d96446959-k8dqm_5ccfa7c8-00b4-482b-aa3f-bf58756c6a45 became leader | |
openshift-service-ca |
service-ca-controller |
service-ca-controller-lock |
LeaderElection |
service-ca-5d96446959-k8dqm_5ccfa7c8-00b4-482b-aa3f-bf58756c6a45 became leader | |
openshift-service-ca |
service-ca-controller-configmapcabundleinjector |
service-ca |
FastControllerResync |
Controller "ConfigMapCABundleInjector" resync interval is set to 0s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserverworkloadcontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-0,kube-scheduler-cert-syncer-kubeconfig-0,kube-scheduler-pod-0,scheduler-kubeconfig-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found",Progressing changed from False to True ("NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 1" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from Unknown to False ("All is well") | |
openshift-apiserver-operator |
openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources |
openshift-apiserver-operator |
NamespaceUpdated |
Updated Namespace/openshift-apiserver because it changed | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources |
openshift-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/revision-status-2 -n openshift-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ServiceCreated |
Created Service/kube-controller-manager -n openshift-kube-controller-manager because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-resource-sync-controller-resourcesynccontroller |
openshift-apiserver-operator |
SecretCreated |
Created Secret/etcd-client -n openshift-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/pv-recycler-controller -n openshift-infra because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller |
openshift-kube-scheduler-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-staticresourcecontroller |
authentication-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/audit -n openshift-authentication: namespaces "openshift-authentication" not found | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/kube-controller-manager-sa -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-scheduler because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/audit-2 -n openshift-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
NoValidCertificateFound |
No valid client certificate for OpenShiftAuthenticatorCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates | |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
CSRCreated |
A csr "system:openshift:openshift-authenticator-k67rx" is created for OpenShiftAuthenticatorCertRequester | |
openshift-authentication-operator |
oauth-apiserver-webhook-authenticator-cert-approver-OpenShiftAuthenticator-webhookauthenticatorcertapprover_openshiftauthenticator |
authentication-operator |
CSRApproval |
The CSR "system:openshift:openshift-authenticator-k67rx" has been approved | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/revision-status-1 -n openshift-oauth-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-backingresourcecontroller-backingresourcecontroller |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-scheduler-installer because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
NamespaceUpdated |
Updated Namespace/openshift-kube-scheduler because it changed | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/revision-status-2 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-controller-manager-recovery because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:leader-locking-kube-scheduler -n kube-system because it was missing | |
openshift-authentication-operator |
oauth-apiserver-audit-policy-controller-auditpolicycontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/audit -n openshift-oauth-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler:public-2 because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ServiceAccountCreated |
Created ServiceAccount/localhost-recovery-client -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig -n openshift-kube-scheduler because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
RevisionCreate |
Revision 1 created because configmap "audit" not found | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
RoleBindingCreated |
Created RoleBinding.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:cluster-csr-approver-controller because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
RoleCreated |
Created Role.rbac.authorization.k8s.io/system:openshift:sa-listing-configmaps -n openshift-kube-scheduler because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-staticresourcecontroller |
authentication-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-staticresourcecontroller |
authentication-operator |
NamespaceUpdated |
Updated Namespace/openshift-oauth-apiserver because it changed | |
openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca -n openshift-oauth-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token -n openshift-kube-controller-manager because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources |
openshift-apiserver-operator |
ServiceCreated |
Created Service/api -n openshift-apiserver because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: \nRevisionControllerDegraded: configmaps \"audit\" not found" to "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: " | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ClusterRoleCreated |
Created ClusterRole.rbac.authorization.k8s.io/system:openshift:kube-controller-manager:gce-cloud-provider because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
kube-controller-manager-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:kube-controller-manager:gce-cloud-provider because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found" to "APIServicesAvailable: PreconditionNotReady" | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-879c4799-ds2g4 |
Started |
Started container kube-rbac-proxy | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserverworkloadcontroller |
openshift-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/image-import-ca -n openshift-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
SecretCreated |
Created Secret/etcd-client -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-staticresourcecontroller |
authentication-operator |
NamespaceCreated |
Created Namespace/openshift-authentication because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-staticresourcecontroller |
authentication-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:openshift-authentication because it was missing | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources |
openshift-apiserver-operator |
ServiceAccountCreated |
Created ServiceAccount/openshift-apiserver-sa -n openshift-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-targetconfigcontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca -n openshift-kube-scheduler because it was missing | |
(x6) | openshift-marketplace |
kubelet |
marketplace-operator-75746f848d-v4htq |
FailedMount |
MountVolume.SetUp failed for volume "marketplace-operator-metrics" : secret "marketplace-operator-metrics" not found |
openshift-dns-operator |
multus |
dns-operator-6cb7cc46cf-njb62 |
AddedInterface |
Add eth0 [10.128.0.6/23] from ovn-kubernetes | |
openshift-dns-operator |
kubelet |
dns-operator-6cb7cc46cf-njb62 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da1dec5c084b77969ed1b7995a292c7ac431cdd711a708bfbe1f40628515466c" | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-879c4799-ds2g4 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec53f44c080dc784adb01a4e3b8257adffaf79a6e38f683d26bf1b384d6b7156" | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-879c4799-ds2g4 |
Created |
Created container kube-rbac-proxy | |
openshift-cluster-machine-approver |
kubelet |
machine-approver-879c4799-ds2g4 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine | |
openshift-multus |
multus |
multus-admission-controller-7b9c64854b-bwsvm |
AddedInterface |
Add eth0 [10.128.0.11/23] from ovn-kubernetes | |
openshift-multus |
kubelet |
multus-admission-controller-7b9c64854b-bwsvm |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f87f071c3aa8b3932f33cd2dec201abbf7a116e70eeb0df53f93cccc0c3f4041" | |
openshift-authentication-operator |
oauth-apiserver-openshiftauthenticatorcertrequester |
authentication-operator |
ClientCertificateCreated |
A new client certificate for OpenShiftAuthenticatorCertRequester is available | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources |
openshift-kube-scheduler-operator |
ServiceCreated |
Created Service/scheduler -n openshift-kube-scheduler because it was missing | |
(x5) | openshift-machine-api |
kubelet |
control-plane-machine-set-operator-749d766b67-gc5pf |
FailedMount |
MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" : secret "control-plane-machine-set-operator-tls" not found |
openshift-authentication-operator |
cluster-authentication-operator-openshiftauthenticationstaticresources-staticresourcecontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/v4-0-config-system-trusted-ca-bundle -n openshift-authentication because it was missing | |
openshift-authentication-operator |
oauth-apiserver-apiserverstaticresources-staticresourcecontroller |
authentication-operator |
ServiceCreated |
Created Service/api -n openshift-oauth-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
| openshift-apiserver-operator | openshift-apiserver-operator-apiserverstaticresources-apiserverstaticresources | openshift-apiserver-operator | ConfigMapCreated | Created ConfigMap/trusted-ca-bundle -n openshift-apiserver because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreateFailed | Failed to create revision 1: configmaps "kube-controller-manager-pod" not found |
| openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-56868c8696 to 1 |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig -n openshift-kube-scheduler because it was missing |
| openshift-apiserver | default-scheduler | apiserver-56868c8696-s9c4s | Scheduled | Successfully assigned openshift-apiserver/apiserver-56868c8696-s9c4s to master1 |
| openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserverworkloadcontroller | openshift-apiserver-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-apiserver because it was missing |
| openshift-authentication-operator | oauth-apiserver-oauthapiservercontrollerworkloadcontroller | authentication-operator | DeploymentCreated | Created Deployment.apps/apiserver -n openshift-oauth-apiserver because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.") |
| openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-staticresourcecontroller | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-ocp-branding-template -n openshift-authentication because it was missing |
(x7) | openshift-oauth-apiserver | replicaset-controller | apiserver-779d7f6576 | FailedCreate | Error creating: pods "apiserver-779d7f6576-" is forbidden: error looking up service account openshift-oauth-apiserver/oauth-apiserver-sa: serviceaccount "oauth-apiserver-sa" not found |
(x3) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-0,config-0,controller-manager-kubeconfig-0,kube-controller-cert-syncer-kubeconfig-0,kube-controller-manager-pod-0,recycler-config-0,service-ca-0,serviceaccount-ca-0, secrets: localhost-recovery-client-token-0,service-account-private-key-0 |
| openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-staticresourcecontroller | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-apiserver-sa -n openshift-oauth-apiserver because it was missing |
| openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-779d7f6576 to 1 |
(x10) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 2 triggered by "configmap \"kube-scheduler-pod\" not found" |
| openshift-apiserver | replicaset-controller | apiserver-56868c8696 | SuccessfulCreate | Created pod: apiserver-56868c8696-s9c4s |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/openshift-kube-scheduler-sa -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources | openshift-kube-scheduler-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-scheduler-recovery because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: waiting for observed configuration to be available\nAPIServerDeploymentDegraded: " to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2.",Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" |
| openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-staticresourcecontroller | authentication-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-staticresourcecontroller | authentication-operator | ServiceAccountCreated | Created ServiceAccount/oauth-openshift -n openshift-authentication because it was missing |
| openshift-authentication-operator | oauth-apiserver-apiserverstaticresources-staticresourcecontroller | authentication-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:useroauthaccesstoken-manager because it was missing |
| openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | RevisionCreateFailed | Failed to create revision 1: configmaps "audit" not found |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| openshift-oauth-apiserver | default-scheduler | apiserver-779d7f6576-4c5ff | Scheduled | Successfully assigned openshift-oauth-apiserver/apiserver-779d7f6576-4c5ff to master1 |
| openshift-oauth-apiserver | replicaset-controller | apiserver-779d7f6576 | SuccessfulCreate | Created pod: apiserver-779d7f6576-4c5ff |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 2 triggered by "configmap \"kube-scheduler-pod-1\" not found" |
| openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-staticresourcecontroller | authentication-operator | RoleCreated | Created Role.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
| openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-staticresourcecontroller | authentication-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:oauth-servercert-trust -n openshift-config-managed because it was missing |
(x3) | openshift-apiserver | kubelet | apiserver-56868c8696-s9c4s | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Progressing message changed from "All is well" to "AuthenticatorCertKeyProgressing: All is well" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources | openshift-kube-scheduler-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-scheduler because it was missing |
(x3) | openshift-oauth-apiserver | kubelet | apiserver-779d7f6576-4c5ff | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : secret "serving-cert" not found |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler because it was missing |
| openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-staticresourcecontroller | authentication-operator | ServiceCreated | Created Service/oauth-openshift -n openshift-authentication because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" |
| openshift-cluster-machine-approver | kubelet | machine-approver-879c4799-ds2g4 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec53f44c080dc784adb01a4e3b8257adffaf79a6e38f683d26bf1b384d6b7156" in 5.138496181s |
| openshift-cluster-machine-approver | kubelet | machine-approver-879c4799-ds2g4 | Created | Created container machine-approver-controller |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/revision-status-2 -n openshift-kube-scheduler: cause by changes in data.reason |
| openshift-dns-operator | kubelet | dns-operator-6cb7cc46cf-njb62 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:da1dec5c084b77969ed1b7995a292c7ac431cdd711a708bfbe1f40628515466c" in 5.170855419s |
(x6) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1 |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-kubecontrollermanagerstaticresources-kubecontrollermanagerstaticresources | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-scheduler because it was missing |
| openshift-multus | kubelet | multus-admission-controller-7b9c64854b-bwsvm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f87f071c3aa8b3932f33cd2dec201abbf7a116e70eeb0df53f93cccc0c3f4041" in 5.029736135s |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (no pods found with labels \"apiserver=true,app=openshift-apiserver-a,openshift-apiserver-anti-affinity=true,revision=2\")",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 1, desired generation is 2." |
| openshift-multus | kubelet | multus-admission-controller-7b9c64854b-bwsvm | Created | Created container multus-admission-controller |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/revision-status-2 -n openshift-kube-controller-manager because it was missing |
| openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-56868c8696 to 0 from 1 |
| openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-675fc6b586 to 1 from 0 |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-2 -n openshift-kube-scheduler because it was missing |
| openshift-apiserver | replicaset-controller | apiserver-56868c8696 | SuccessfulDelete | Deleted pod: apiserver-56868c8696-s9c4s |
| openshift-apiserver | replicaset-controller | apiserver-675fc6b586 | SuccessfulCreate | Created pod: apiserver-675fc6b586-hhb4g |
| openshift-dns-operator | kubelet | dns-operator-6cb7cc46cf-njb62 | Started | Started container kube-rbac-proxy |
| openshift-dns-operator | kubelet | dns-operator-6cb7cc46cf-njb62 | Created | Created container kube-rbac-proxy |
| openshift-apiserver-operator | openshift-apiserver-operator-openshiftapiserverworkloadcontroller | openshift-apiserver-operator | DeploymentUpdated | Updated Deployment.apps/apiserver -n openshift-apiserver because it changed |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 0, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 0, desired generation is 1." |
| openshift-dns-operator | kubelet | dns-operator-6cb7cc46cf-njb62 | Created | Created container dns-operator |
| openshift-dns-operator | kubelet | dns-operator-6cb7cc46cf-njb62 | Started | Started container dns-operator |
| openshift-cluster-machine-approver | master1_d2a18184-a819-4b53-88e3-a276b66ed3b5 | cluster-machine-approver-leader | LeaderElection | master1_d2a18184-a819-4b53-88e3-a276b66ed3b5 became leader |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to retrieve service openshift-oauth-apiserver/api: service \"api\" not found\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| openshift-dns-operator | kubelet | dns-operator-6cb7cc46cf-njb62 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| openshift-cluster-machine-approver | kubelet | machine-approver-879c4799-ds2g4 | Started | Started container machine-approver-controller |
| openshift-multus | kubelet | multus-admission-controller-7b9c64854b-bwsvm | Started | Started container multus-admission-controller |
| openshift-multus | kubelet | multus-admission-controller-7b9c64854b-bwsvm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| openshift-multus | kubelet | multus-admission-controller-7b9c64854b-bwsvm | Created | Created container kube-rbac-proxy |
| openshift-multus | kubelet | multus-admission-controller-7b9c64854b-bwsvm | Started | Started container kube-rbac-proxy |
| openshift-dns | default-scheduler | dns-default-7fzqj | Scheduled | Successfully assigned openshift-dns/dns-default-7fzqj to master1 |
| openshift-dns | daemonset-controller | dns-default | SuccessfulCreate | Created pod: dns-default-7fzqj |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| openshift-dns | kubelet | node-resolver-h7psn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f0cdc00b1b1a3c17411e50653253b9f6bb5329ea4fb82ad983790a6dbf2d9ad" |
| openshift-dns | default-scheduler | node-resolver-h7psn | Scheduled | Successfully assigned openshift-dns/node-resolver-h7psn to master1 |
| openshift-dns | kubelet | dns-default-7fzqj | FailedMount | MountVolume.SetUp failed for volume "metrics-tls" : secret "dns-default-metrics-tls" not found |
| openshift-dns | daemonset-controller | node-resolver | SuccessfulCreate | Created pod: node-resolver-h7psn |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-scheduler because it was missing |
| openshift-dns | multus | dns-default-7fzqj | AddedInterface | Add eth0 [10.128.0.21/23] from ovn-kubernetes |
| openshift-dns | kubelet | dns-default-7fzqj | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfdf833d03dac36b747951107a25ab6424eb387bb140f344d4be8d8c7f4e895f" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-2 -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-2 -n openshift-kube-scheduler because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (no pods found with labels \"apiserver=true,app=openshift-apiserver-a,openshift-apiserver-anti-affinity=true,revision=2\")" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well") |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: service \"oauth-openshift\" not found\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-2 -n openshift-kube-scheduler because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.serving-cert.secret |
| openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.serving-cert.secret |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]",Progressing changed from False to True ("NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 1") |
(x3) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-2 -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionCreate | Revision 1 created because configmap "kube-scheduler-pod-1" not found |
| openshift-route-controller-manager | replicaset-controller | route-controller-manager-6c77d44985 | SuccessfulCreate | Created pod: route-controller-manager-6c77d44985-7k8lf |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found",Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 2" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/kube-scheduler-pod -n openshift-kube-scheduler: cause by changes in data.pod.yaml |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 3 triggered by "configmap/kube-scheduler-pod has changed" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key" |
| openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-6c77d44985 to 1 from 0 |
| openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-779794f74 to 0 from 1 |
| openshift-route-controller-manager | default-scheduler | route-controller-manager-779794f74-q82d8 | FailedScheduling | skip schedule deleting pod: openshift-route-controller-manager/route-controller-manager-779794f74-q82d8 |
| openshift-route-controller-manager | replicaset-controller | route-controller-manager-779794f74 | SuccessfulDelete | Deleted pod: route-controller-manager-779794f74-q82d8 |
| openshift-controller-manager | default-scheduler | controller-manager-78ddfb869c-hfkrq | FailedScheduling | skip schedule deleting pod: openshift-controller-manager/controller-manager-78ddfb869c-hfkrq |
| openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| openshift-controller-manager | replicaset-controller | controller-manager-5d9b9687f | SuccessfulCreate | Created pod: controller-manager-5d9b9687f-w8g4x |
(x3) | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator | DeploymentUpdated | Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
| openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-5d9b9687f to 1 from 0 |
| openshift-controller-manager | replicaset-controller | controller-manager-78ddfb869c | SuccessfulDelete | Deleted pod: controller-manager-78ddfb869c-hfkrq |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nRevisionControllerDegraded: configmaps \"kube-scheduler-pod\" not found\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-1,kube-scheduler-cert-syncer-kubeconfig-1,kube-scheduler-pod-1,scheduler-kubeconfig-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1]\nTargetConfigControllerDegraded: \"serviceaccount/localhost-recovery-client\": serviceaccounts \"localhost-recovery-client\" not found" |
| openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-78ddfb869c to 0 from 1 |
| openshift-dns | kubelet | dns-default-7fzqj | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfdf833d03dac36b747951107a25ab6424eb387bb140f344d4be8d8c7f4e895f" in 4.878894468s |
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/revision-status-3 -n openshift-kube-scheduler because it was missing |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| openshift-dns | kubelet | node-resolver-h7psn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9f0cdc00b1b1a3c17411e50653253b9f6bb5329ea4fb82ad983790a6dbf2d9ad" in 5.57446114s |
| openshift-network-diagnostics | multus | network-check-target-7n4vr | AddedInterface | Add eth0 [10.128.0.4/23] from ovn-kubernetes |
| openshift-dns | kubelet | dns-default-7fzqj | Created | Created container kube-rbac-proxy |
| openshift-dns | kubelet | dns-default-7fzqj | Started | Started container kube-rbac-proxy |
| openshift-dns | kubelet | node-resolver-h7psn | Created | Created container dns-node-resolver |
| openshift-authentication-operator | cluster-authentication-operator-openshiftauthenticationstaticresources-staticresourcecontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit -n openshift-authentication because it was missing |
| openshift-dns | kubelet | dns-default-7fzqj | Created | Created container dns |
| openshift-dns | kubelet | dns-default-7fzqj | Started | Started container dns |
| openshift-dns | kubelet | node-resolver-h7psn | Started | Started container dns-node-resolver |
| openshift-dns | kubelet | dns-default-7fzqj | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-3 -n openshift-kube-scheduler because it was missing |
(x3) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveServiceCAConfigMap | observed change in config |
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObservedConfigChanged | Writing updated observed config: map[string]interface{}{ + "extendedArguments": map[string]interface{}{ + "cluster-cidr": []interface{}{string("10.128.0.0/14")}, + "cluster-name": []interface{}{string("test-cluster-2m245")}, + "feature-gates": []interface{}{ + string("APIPriorityAndFairness=true"), + string("RotateKubeletServerCertificate=true"), + string("DownwardAPIHugePages=true"), string("CSIMigrationAzureFile=false"), + string("CSIMigrationvSphere=false"), + }, + "service-cluster-ip-range": []interface{}{string("172.30.0.0/16")}, + }, + "serviceServingCert": map[string]interface{}{ + "certFile": string("/etc/kubernetes/static-pod-resources/configmaps/service-ca/ca-bundle.crt"), + }, + "servingInfo": map[string]interface{}{ + "cipherSuites": []interface{}{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
(x3) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-scheduler because it was missing |
(x3) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-config-observer-configobserver | kube-controller-manager-operator | ObserveFeatureFlagsUpdated | Updated extendedArguments.feature-gates to APIPriorityAndFairness=true,RotateKubeletServerCertificate=true,DownwardAPIHugePages=true,CSIMigrationAzureFile=false,CSIMigrationvSphere=false |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-scheduler because it was missing |
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreateFailed | Failed to create revision 2: configmaps "kube-controller-manager-pod" not found |
| openshift-marketplace | kubelet | marketplace-operator-75746f848d-v4htq | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fb7a1e5f6616311d94b625dd3b452348bf75577b824f58a92883139f8f233681" |
| openshift-marketplace | multus | marketplace-operator-75746f848d-v4htq | AddedInterface | Add eth0 [10.128.0.8/23] from ovn-kubernetes |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
(x15) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMissing | no observedConfig |
| openshift-machine-api | multus | control-plane-machine-set-operator-749d766b67-gc5pf | AddedInterface | Add eth0 [10.128.0.5/23] from ovn-kubernetes |
| openshift-machine-api | kubelet | control-plane-machine-set-operator-749d766b67-gc5pf | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a09b3bee316f15d4adac8d392f514c1491bdf37760b36f3a8714e563833ca7c" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:56389->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-3 -n openshift-kube-scheduler because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:56389->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:55569->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:55569->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:59648->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/serving-cert-3 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
ConfigMapCreated |
Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-3 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-targetconfigcontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/config -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
RevisionCreate |
Revision 2 created because configmap/kube-scheduler-pod has changed | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-3 -n openshift-kube-scheduler because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:35827->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:35827->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 3" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:59648->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:35827->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:35827->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:50149->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:50149->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:50628->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:50628->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:33929->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:33929->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:33929->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" |
| openshift-marketplace | kubelet | marketplace-operator-75746f848d-v4htq | Started | Started container marketplace-operator |
| openshift-marketplace | kubelet | marketplace-operator-75746f848d-v4htq | Created | Created container marketplace-operator |
| openshift-marketplace | kubelet | marketplace-operator-75746f848d-v4htq | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fb7a1e5f6616311d94b625dd3b452348bf75577b824f58a92883139f8f233681" in 4.539215042s |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:33929->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:37653->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: endpoints \"api\" not found\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:37653->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:36198->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config -n openshift-kube-controller-manager because it was missing |
(x2) | openshift-marketplace | kubelet | marketplace-operator-75746f848d-v4htq | Unhealthy | Readiness probe failed: Get "http://10.128.0.8:8080/healthz": dial tcp 10.128.0.8:8080: connect: connection refused |
(x2) | openshift-marketplace | kubelet | marketplace-operator-75746f848d-v4htq | ProbeError | Readiness probe error: Get "http://10.128.0.8:8080/healthz": dial tcp 10.128.0.8:8080: connect: connection refused body: |
| openshift-machine-api | kubelet | control-plane-machine-set-operator-749d766b67-gc5pf | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a09b3bee316f15d4adac8d392f514c1491bdf37760b36f3a8714e563833ca7c" in 4.441021485s |
| openshift-machine-api | control-plane-machine-set-operator-749d766b67-gc5pf_a0dca78c-6b6b-4e50-82ee-1f1c6c294807 | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-749d766b67-gc5pf_a0dca78c-6b6b-4e50-82ee-1f1c6c294807 became leader |
| openshift-network-operator | kubelet | network-operator-767fc6c7f6-wph8h | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5e07e3a1c8bfa3f66ddbdf1bb6b12f48587434f8a37f075d6a02435dfa18dc2" already present on machine |
| openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master1_f506081f-63a2-461c-9d62-1a3dbfbf45d6 became leader |
(x2) | openshift-network-operator | kubelet | network-operator-767fc6c7f6-wph8h | Started | Started container network-operator |
(x2) | openshift-network-operator | kubelet | network-operator-767fc6c7f6-wph8h | Created | Created container network-operator |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:36198->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:44360->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" |
| openshift-network-operator | network-operator-loggingsyncer | network-operator | FastControllerResync | Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling |
| openshift-network-operator | network-operator | network-operator-lock | LeaderElection | master1_f506081f-63a2-461c-9d62-1a3dbfbf45d6 became leader |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-779d7f6576-4c5ff pod)",Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-signer-ca -n openshift-kube-controller-manager-operator because it was missing |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No endpoints found for oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-779d7f6576-4c5ff pod)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-779d7f6576-4c5ff pod)" |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nOAuthServiceDegraded: Unable to get oauth server service: service \"oauth-openshift\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-779d7f6576-4c5ff pod)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-779d7f6576-4c5ff pod)" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-kube-controller-manager-operator because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/csr-controller-ca -n openshift-config-managed because it was missing |
(x25) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1 |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: read udp 10.128.0.13:44360->172.30.0.10:53: read: connection refused\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" |
| openshift-multus | default-scheduler | multus-admission-controller-5c7bffcb4b-nxvkv | Scheduled | Successfully assigned openshift-multus/multus-admission-controller-5c7bffcb4b-nxvkv to master1 |
| openshift-multus | replicaset-controller | multus-admission-controller-5c7bffcb4b | SuccessfulCreate | Created pod: multus-admission-controller-5c7bffcb4b-nxvkv |
| openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled up replica set multus-admission-controller-5c7bffcb4b to 1 |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | SecretCreated | Created Secret/csr-signer -n openshift-kube-controller-manager because it was missing |
| openshift-multus | kubelet | multus-admission-controller-5c7bffcb4b-nxvkv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f87f071c3aa8b3932f33cd2dec201abbf7a116e70eeb0df53f93cccc0c3f4041" already present on machine |
| openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 2, desired generation is 3.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" |
| openshift-multus | replicaset-controller | multus-admission-controller-7b9c64854b | SuccessfulDelete | Deleted pod: multus-admission-controller-7b9c64854b-bwsvm |
| openshift-multus | kubelet | multus-admission-controller-7b9c64854b-bwsvm | Killing | Stopping container multus-admission-controller |
| openshift-multus | kubelet | multus-admission-controller-7b9c64854b-bwsvm | Killing | Stopping container kube-rbac-proxy |
| openshift-multus | multus | multus-admission-controller-5c7bffcb4b-nxvkv | AddedInterface | Add eth0 [10.128.0.22/23] from ovn-kubernetes |
| openshift-multus | deployment-controller | multus-admission-controller | ScalingReplicaSet | Scaled down replica set multus-admission-controller-7b9c64854b to 0 from 1 |
| openshift-multus | kubelet | multus-admission-controller-5c7bffcb4b-nxvkv | Created | Created container multus-admission-controller |
| openshift-multus | kubelet | multus-admission-controller-5c7bffcb4b-nxvkv | Started | Started container multus-admission-controller |
| openshift-multus | kubelet | multus-admission-controller-5c7bffcb4b-nxvkv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| openshift-multus | kubelet | multus-admission-controller-5c7bffcb4b-nxvkv | Created | Created container kube-rbac-proxy |
| openshift-multus | kubelet | multus-admission-controller-5c7bffcb4b-nxvkv | Started | Started container kube-rbac-proxy |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager because it was missing |
(x20) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 2 triggered by "configmap \"kube-controller-manager-pod\" not found" |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: configmap \"v4-0-config-system-service-ca\" not found\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-779d7f6576-4c5ff pod)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-779d7f6576-4c5ff pod)" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig -n openshift-kube-controller-manager because it was missing |
| openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | RevisionTriggered | new revision 2 triggered by "configmap \"audit-1\" not found" |
| openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/revision-status-2 -n openshift-oauth-apiserver because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod -n openshift-kube-controller-manager because it was missing |
(x6) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | secrets: kube-scheduler-client-cert-key |
| openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/audit-2 -n openshift-oauth-apiserver because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 2 triggered by "configmap \"kube-controller-manager-pod-1\" not found" |
(x3) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1 |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/revision-status-2 -n openshift-kube-controller-manager: cause by changes in data.reason |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-2 -n openshift-kube-controller-manager because it was missing |
| openshift-authentication-operator | oauth-apiserver-revisioncontroller | authentication-operator | RevisionCreate | Revision 1 created because configmap "audit-1" not found |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: csr-signer,kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-779d7f6576-4c5ff pod)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-779d7f6576-4c5ff pod)" |
| openshift-oauth-apiserver | replicaset-controller | apiserver-779d7f6576 | SuccessfulDelete | Deleted pod: apiserver-779d7f6576-4c5ff |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-779d7f6576-4c5ff pod)" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (no pods found with labels \"apiserver=true,app=openshift-oauth-apiserver,oauth-apiserver-anti-affinity=true,revision=1\")",Progressing changed from False to True ("APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2.") |
| openshift-oauth-apiserver | default-scheduler | apiserver-6464d7bff-7mfcm | FailedScheduling | 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
| openshift-oauth-apiserver | replicaset-controller | apiserver-6464d7bff | SuccessfulCreate | Created pod: apiserver-6464d7bff-7mfcm |
| openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled down replica set apiserver-779d7f6576 to 0 from 1 |
| openshift-oauth-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6464d7bff to 1 from 0 |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-2 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-2 -n openshift-kube-controller-manager because it was missing |
| openshift-oauth-apiserver | replicaset-controller | apiserver-68b6d6d454 | SuccessfulCreate | Created pod: apiserver-68b6d6d454-ltjtf |
| openshift-oauth-apiserver | replicaset-controller | apiserver-6464d7bff | SuccessfulDelete | Deleted pod: apiserver-6464d7bff-7mfcm |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (no pods found with labels \"apiserver=true,app=openshift-oauth-apiserver,oauth-apiserver-anti-affinity=true,revision=1\")" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (no pods found with labels \"apiserver=true,app=openshift-oauth-apiserver,oauth-apiserver-anti-affinity=true,revision=2\")",Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 1, desired generation is 2." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-oauth-apiserver: observed generation is 2, desired generation is 3." |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled up replica set apiserver-68b6d6d454 to 1 from 0 | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/controller-manager-kubeconfig-2 -n openshift-kube-controller-manager because it was missing | |
(x3) | openshift-authentication-operator |
oauth-apiserver-oauthapiservercontrollerworkloadcontroller |
authentication-operator |
DeploymentUpdated |
Updated Deployment.apps/apiserver -n openshift-oauth-apiserver because it changed |
openshift-oauth-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-6464d7bff to 0 from 1 | |
openshift-oauth-apiserver |
default-scheduler |
apiserver-6464d7bff-7mfcm |
FailedScheduling |
skip schedule deleting pod: openshift-oauth-apiserver/apiserver-6464d7bff-7mfcm | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/kube-controller-cert-syncer-kubeconfig-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/serviceaccount-ca-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/service-ca-2 -n openshift-kube-controller-manager because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (no pods found with labels \"apiserver=true,app=openshift-oauth-apiserver,oauth-apiserver-anti-affinity=true,revision=2\")" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()",Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well") | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/service-account-private-key-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/recycler-config-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-2 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
RevisionCreate |
Revision 1 created because configmap "kube-controller-manager-pod-1" not found | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-2 -n openshift-kube-controller-manager because it was missing | |
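The revision controller events above create numbered copies of the static-pod inputs (config-2, serviceaccount-ca-2, service-account-private-key-2, and so on) in openshift-kube-controller-manager. A small sketch, assuming the same namespace and revision number seen in these events, to check which revision-2 inputs exist yet (the suffix match is just a convenience, not an API feature):

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
ns = "openshift-kube-controller-manager"

cms = [c.metadata.name for c in v1.list_namespaced_config_map(ns).items]
secrets = [s.metadata.name for s in v1.list_namespaced_secret(ns).items]

revision = "2"  # matches the "-2" resources created in the events above
print("configmaps:", sorted(n for n in cms if n.endswith("-" + revision)))
print("secrets:   ", sorted(n for n in secrets if n.endswith("-" + revision)))
```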
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: configmaps \"kube-controller-manager-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" | |
(x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
RequiredInstallerResourcesMissing |
configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-1,config-1,controller-manager-kubeconfig-1,kube-controller-cert-syncer-kubeconfig-1,kube-controller-manager-pod-1,recycler-config-1,service-ca-1,serviceaccount-ca-1, secrets: localhost-recovery-client-token-1,service-account-private-key-1]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key]",Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 2" | |
(x21) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-satokensignercontroller |
kube-controller-manager-operator |
SATokenSignerControllerStuck |
unexpected addresses: 192.168.126.10 |
(x8) | openshift-controller-manager |
kubelet |
controller-manager-5dfb447f4c-qhfxd |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
(x8) | openshift-route-controller-manager |
kubelet |
route-controller-manager-6cbc757bbf-85bgq |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-service-ca-operator |
kubelet |
service-ca-operator-76d7c5458c-wgsgp |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:14794ac4b4b5e1bb2728d253b939578a03730cf26ba5cf795c8e2d26b9737dd6" already present on machine | |
openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-5c6c84d584-4z2d9 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:87666cc451e16c135276f6405cd7d0c2ce76fd5f19f02a9654c23bb9651c54b3" already present on machine | |
openshift-apiserver-operator |
openshift-apiserver-operator |
openshift-apiserver-operator-lock |
LeaderElection |
openshift-apiserver-operator-5c6c84d584-4z2d9_8df0982a-448c-4938-a2db-7a87fba44ce7 became leader | |
openshift-apiserver-operator |
openshift-apiserver-operator-unsupportedconfigoverridescontroller |
openshift-apiserver-operator |
FastControllerResync |
Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling | |
openshift-service-ca-operator |
service-ca-operator-service-ca-operator-servicecaoperator |
service-ca-operator |
FastControllerResync |
Controller "ServiceCAOperator" resync interval is set to 0s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator-revisioncontroller |
openshift-apiserver-operator |
FastControllerResync |
Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserverworkloadcontroller |
openshift-apiserver-operator |
FastControllerResync |
Controller "OpenShiftAPIServerWorkloadController" resync interval is set to 0s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver |
openshift-apiserver-operator |
FastControllerResync |
Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling | |
openshift-service-ca-operator |
service-ca-operator-loggingsyncer |
service-ca-operator |
FastControllerResync |
Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator |
openshift-apiserver-operator-lock |
LeaderElection |
openshift-apiserver-operator-5c6c84d584-4z2d9_8df0982a-448c-4938-a2db-7a87fba44ce7 became leader | |
openshift-apiserver-operator |
openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller |
openshift-apiserver-operator |
FastControllerResync |
Controller "ConnectivityCheckController" resync interval is set to 0s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator-loggingsyncer |
openshift-apiserver-operator |
FastControllerResync |
Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator-audit-policy-controller-auditpolicycontroller |
openshift-apiserver-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
(x2) | openshift-controller-manager |
default-scheduler |
controller-manager-5d9b9687f-w8g4x |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
(x2) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-5c6c84d584-4z2d9 |
Started |
Started container openshift-apiserver-operator |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator-lock |
LeaderElection |
service-ca-operator-76d7c5458c-wgsgp_a567ac04-00fc-4d01-b7b5-65e5696eff72 became leader | |
openshift-service-ca-operator |
service-ca-operator |
service-ca-operator-lock |
LeaderElection |
service-ca-operator-76d7c5458c-wgsgp_a567ac04-00fc-4d01-b7b5-65e5696eff72 became leader | |
openshift-apiserver-operator |
openshift-apiserver-operator-secret-revision-prune-controller-secretrevisionprunecontroller |
openshift-apiserver-operator |
FastControllerResync |
Controller "SecretRevisionPruneController" resync interval is set to 0s which might lead to client request throttling | |
(x2) | openshift-service-ca-operator |
kubelet |
service-ca-operator-76d7c5458c-wgsgp |
Created |
Created container service-ca-operator |
(x2) | openshift-service-ca-operator |
kubelet |
service-ca-operator-76d7c5458c-wgsgp |
Started |
Started container service-ca-operator |
(x2) | openshift-apiserver-operator |
kubelet |
openshift-apiserver-operator-5c6c84d584-4z2d9 |
Created |
Created container openshift-apiserver-operator |
(x8) | openshift-apiserver |
kubelet |
apiserver-56868c8696-s9c4s |
FailedMount |
MountVolume.SetUp failed for volume "audit" : configmap "audit-0" not found |
(x8) | openshift-oauth-apiserver |
kubelet |
apiserver-779d7f6576-4c5ff |
FailedMount |
MountVolume.SetUp failed for volume "audit-policies" : configmap "audit-0" not found |
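These FailedMount events repeat until the named ConfigMap ("client-ca", "audit-0") is created by the corresponding operator; the kubelet retries the volume setup on its own. A minimal existence check with 404 handling, using names taken from the events above:

```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

def configmap_exists(name: str, namespace: str) -> bool:
    try:
        v1.read_namespaced_config_map(name, namespace)
        return True
    except ApiException as err:
        if err.status == 404:  # the mount keeps failing until this flips
            return False
        raise

print(configmap_exists("client-ca", "openshift-controller-manager"))
print(configmap_exists("audit-0", "openshift-apiserver"))
```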
openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" already present on machine | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-prunecontroller |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "PruneController" resync interval is set to 0s which might lead to client request throttling | |
(x2) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv |
Started |
Started container kube-scheduler-operator-container |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-lock |
LeaderElection |
openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv_2a82c6a8-aec8-4d96-89aa-65a74ed31209 became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller-installercontroller |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-loggingsyncer |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-3,kube-scheduler-cert-syncer-kubeconfig-3,kube-scheduler-pod-3,scheduler-kubeconfig-3,serviceaccount-ca-3]" | |
(x2) | openshift-kube-scheduler-operator |
kubelet |
openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv |
Created |
Created container kube-scheduler-operator-container |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-guardcontroller |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "GuardController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-unsupportedconfigoverridescontroller |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-lock |
LeaderElection |
openshift-kube-scheduler-operator-6f7cd4b84d-vl2fv_2a82c6a8-aec8-4d96-89aa-65a74ed31209 became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-nodecontroller |
openshift-kube-scheduler-operator |
FastControllerResync |
Controller "NodeController" resync interval is set to 0s which might lead to client request throttling | |
(x8) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
RequiredInstallerResourcesMissing |
secrets: kube-scheduler-client-cert-key, configmaps: config-3,kube-scheduler-cert-syncer-kubeconfig-3,kube-scheduler-pod-3,scheduler-kubeconfig-3,serviceaccount-ca-3 |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [secrets: kube-scheduler-client-cert-key, configmaps: config-3,kube-scheduler-cert-syncer-kubeconfig-3,kube-scheduler-pod-3,scheduler-kubeconfig-3,serviceaccount-ca-3]" to "NodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key" | |
openshift-authentication-operator |
kubelet |
authentication-operator-68df59f464-ffd6s |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a1252ab4a94ef96c90c19a926c6c10b1c73186377f408414c8a3aa1949a0a75" already present on machine | |
openshift-authentication-operator |
oauth-apiserver-oauthapiservercontrollerworkloadcontroller |
authentication-operator |
FastControllerResync |
Controller "OAuthAPIServerControllerWorkloadController" resync interval is set to 0s which might lead to client request throttling | |
openshift-authentication-operator |
cluster-authentication-operator |
cluster-authentication-operator-lock |
LeaderElection |
authentication-operator-68df59f464-ffd6s_b20cf609-350e-4f03-bb96-30e7c83a8c21 became leader | |
openshift-authentication-operator |
cluster-authentication-operator-oauthserverworkloadcontroller |
authentication-operator |
FastControllerResync |
Controller "OAuthServerWorkloadController" resync interval is set to 0s which might lead to client request throttling | |
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver |
authentication-operator |
FastControllerResync |
Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling | |
openshift-authentication-operator |
oauth-apiserver-secret-revision-prune-controller-secretrevisionprunecontroller |
authentication-operator |
FastControllerResync |
Controller "SecretRevisionPruneController" resync interval is set to 0s which might lead to client request throttling | |
openshift-authentication-operator |
cluster-authentication-operator-loggingsyncer |
authentication-operator |
FastControllerResync |
Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling | |
(x2) | openshift-authentication-operator |
kubelet |
authentication-operator-68df59f464-ffd6s |
Started |
Started container authentication-operator |
openshift-authentication-operator |
cluster-authentication-operator-unsupportedconfigoverridescontroller |
authentication-operator |
FastControllerResync |
Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
FastControllerResync |
Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling | |
(x2) | openshift-authentication-operator |
kubelet |
authentication-operator-68df59f464-ffd6s |
Created |
Created container authentication-operator |
openshift-authentication-operator |
oauth-apiserver-audit-policy-controller-auditpolicycontroller |
authentication-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-authentication-operator |
cluster-authentication-operator |
cluster-authentication-operator-lock |
LeaderElection |
authentication-operator-68df59f464-ffd6s_b20cf609-350e-4f03-bb96-30e7c83a8c21 became leader | |
openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-fbcb7858d-95djl |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:773fe01f949872eaae7daee9bac53f06ca4d375e3f8d6207a9a3eccaa4ab9f98" already present on machine | |
(x2) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-fbcb7858d-95djl |
Created |
Created container kube-storage-version-migrator-operator |
(x2) | openshift-kube-storage-version-migrator-operator |
kubelet |
kube-storage-version-migrator-operator-fbcb7858d-95djl |
Started |
Started container kube-storage-version-migrator-operator |
openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-8bb498f8b-dsrn8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" already present on machine | |
openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-967d9d7c4-5t48s |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53c526dc7766f65b2de93215a5f609fdc2f790717c07d15ffcbf5d4ac79d002e" already present on machine | |
(x2) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-8bb498f8b-dsrn8 |
Created |
Created container kube-controller-manager-operator |
(x2) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-967d9d7c4-5t48s |
Started |
Started container openshift-controller-manager-operator |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-loggingsyncer |
kube-controller-manager-operator |
FastControllerResync |
Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
FastControllerResync |
Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-guardcontroller |
kube-controller-manager-operator |
FastControllerResync |
Controller "GuardController" resync interval is set to 0s which might lead to client request throttling | |
(x2) | openshift-kube-controller-manager-operator |
kubelet |
kube-controller-manager-operator-8bb498f8b-dsrn8 |
Started |
Started container kube-controller-manager-operator |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator-lock |
LeaderElection |
kube-controller-manager-operator-8bb498f8b-dsrn8_b6dadb78-18b8-43cb-8c88-fae4f44b5fab became leader | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator |
kube-controller-manager-operator-lock |
LeaderElection |
kube-controller-manager-operator-8bb498f8b-dsrn8_b6dadb78-18b8-43cb-8c88-fae4f44b5fab became leader | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-unsupportedconfigoverridescontroller |
kube-controller-manager-operator |
FastControllerResync |
Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-prunecontroller |
kube-controller-manager-operator |
FastControllerResync |
Controller "PruneController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-nodecontroller |
kube-controller-manager-operator |
FastControllerResync |
Controller "NodeController" resync interval is set to 0s which might lead to client request throttling | |
(x2) | openshift-controller-manager-operator |
kubelet |
openshift-controller-manager-operator-967d9d7c4-5t48s |
Created |
Created container openshift-controller-manager-operator |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller-installercontroller |
kube-controller-manager-operator |
FastControllerResync |
Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-2,config-2,controller-manager-kubeconfig-2,kube-controller-cert-syncer-kubeconfig-2,kube-controller-manager-pod-2,recycler-config-2,service-ca-2,serviceaccount-ca-2]" | |
(x8) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
RequiredInstallerResourcesMissing |
configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-2,config-2,controller-manager-kubeconfig-2,kube-controller-cert-syncer-kubeconfig-2,kube-controller-manager-pod-2,recycler-config-2,service-ca-2,serviceaccount-ca-2 |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key, configmaps: cluster-policy-controller-config-2,config-2,controller-manager-kubeconfig-2,kube-controller-cert-syncer-kubeconfig-2,kube-controller-manager-pod-2,recycler-config-2,service-ca-2,serviceaccount-ca-2]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key]" | |
openshift-authentication-operator |
oauth-apiserver-webhook-authenticator-controller-webhookauthenticatorcontroller |
authentication-operator |
SecretCreated |
Created Secret/webhook-authentication-integrated-oauth -n openshift-config because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nWebhookAuthenticatorControllerDegraded: failed to read service-ca crt bundle: open /var/run/configmaps/service-ca-bundle/service-ca.crt: no such file or directory\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()" to "OAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found\nAPIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()" | |
openshift-cluster-version |
kubelet |
cluster-version-operator-6f997f7ccd-p8rgc |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-release@sha256:31c7741fc7bb73ff752ba43f5acf014b8fadd69196fc522241302de918066cb1" | |
openshift-controller-manager |
kubelet |
controller-manager-5dfb447f4c-qhfxd |
FailedMount |
Unable to attach or mount volumes: unmounted volumes=[client-ca], unattached volumes=[proxy-ca-bundles kube-api-access-kllpz config client-ca serving-cert]: timed out waiting for the condition | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6cbc757bbf-85bgq |
FailedMount |
Unable to attach or mount volumes: unmounted volumes=[client-ca], unattached volumes=[serving-cert kube-api-access-452fs config client-ca]: timed out waiting for the condition | |
openshift-cluster-version |
kubelet |
cluster-version-operator-6f997f7ccd-p8rgc |
Created |
Created container cluster-version-operator | |
openshift-cluster-version |
kubelet |
cluster-version-operator-6f997f7ccd-p8rgc |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-release@sha256:31c7741fc7bb73ff752ba43f5acf014b8fadd69196fc522241302de918066cb1" in 7.595068266s | |
openshift-cluster-version |
kubelet |
cluster-version-operator-6f997f7ccd-p8rgc |
Started |
Started container cluster-version-operator | |
(x3) | openshift-route-controller-manager |
default-scheduler |
route-controller-manager-6c77d44985-7k8lf |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
openshift-controller-manager |
default-scheduler |
controller-manager-5d9b9687f-w8g4x |
Scheduled |
Successfully assigned openshift-controller-manager/controller-manager-5d9b9687f-w8g4x to master1 | |
openshift-route-controller-manager |
default-scheduler |
route-controller-manager-6c77d44985-7k8lf |
Scheduled |
Successfully assigned openshift-route-controller-manager/route-controller-manager-6c77d44985-7k8lf to master1 | |
(x4) | openshift-oauth-apiserver |
default-scheduler |
apiserver-68b6d6d454-ltjtf |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
(x4) | openshift-apiserver |
default-scheduler |
apiserver-675fc6b586-hhb4g |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. |
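The recurring "didn't match pod anti-affinity rules" failures are characteristic of a rolling update on a single-node cluster: a deployment whose pods carry required anti-affinity on the hostname topology cannot place the new replica until the old one terminates, hence the FailedScheduling / "skip schedule deleting pod" / Scheduled sequences above. A sketch of the shape of constraint involved (the label values are illustrative, not copied from the cluster):

```python
from kubernetes import client

# Two pods carrying the same label may not share a node (topology = hostname).
anti_affinity = client.V1Affinity(
    pod_anti_affinity=client.V1PodAntiAffinity(
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(
                    match_labels={"app": "controller-manager"}  # illustrative
                ),
                topology_key="kubernetes.io/hostname",
            )
        ]
    )
)
# Attached under pod_spec.affinity; with one schedulable node the scheduler
# reports: "0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules."
```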
openshift-multus |
multus |
network-metrics-daemon-d5jcm |
AddedInterface |
Add eth0 [10.128.0.3/23] from ovn-kubernetes | |
openshift-apiserver |
kubelet |
apiserver-56868c8696-s9c4s |
FailedMount |
Unable to attach or mount volumes: unmounted volumes=[audit], unattached volumes=[config etcd-serving-ca image-import-ca audit-dir trusted-ca-bundle serving-cert audit etcd-client node-pullsecrets encryption-config kube-api-access-wqdpm]: timed out waiting for the condition | |
openshift-oauth-apiserver |
kubelet |
apiserver-779d7f6576-4c5ff |
FailedMount |
Unable to attach or mount volumes: unmounted volumes=[audit-policies], unattached volumes=[etcd-client etcd-serving-ca trusted-ca-bundle serving-cert encryption-config audit-dir kube-api-access-vwszw audit-policies]: timed out waiting for the condition | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded changed from False to True ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found") | |
openshift-apiserver |
default-scheduler |
apiserver-675fc6b586-hhb4g |
Scheduled |
Successfully assigned openshift-apiserver/apiserver-675fc6b586-hhb4g to master1 | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded changed from False to True ("APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()") | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key") | |
openshift-oauth-apiserver |
default-scheduler |
apiserver-68b6d6d454-ltjtf |
Scheduled |
Successfully assigned openshift-oauth-apiserver/apiserver-68b6d6d454-ltjtf to master1 | |
openshift-apiserver |
multus |
apiserver-675fc6b586-hhb4g |
AddedInterface |
Add eth0 [10.128.0.25/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded changed from False to True ("GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key]") | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver ()\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-68b6d6d454-ltjtf pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-apiserver |
kubelet |
apiserver-675fc6b586-hhb4g |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47bc752254f826905ac36cc2eb1819373a3045603e5dfa03c7f9e6d73c3fd9f9" | |
openshift-oauth-apiserver |
multus |
apiserver-68b6d6d454-ltjtf |
AddedInterface |
Add eth0 [10.128.0.26/23] from ovn-kubernetes | |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfca545c1b42ae20c6465e61cf16a44f9411d9ed30af1f9017ed6da0d7ebd216" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-675fc6b586-hhb4g pod)" | |
(x14) | openshift-kube-storage-version-migrator |
replicaset-controller |
migrator-5c54d8d69d |
FailedCreate |
Error creating: pods "migrator-5c54d8d69d-" is forbidden: error fetching namespace "openshift-kube-storage-version-migrator": unable to find annotation openshift.io/sa.scc.uid-range |
openshift-apiserver |
kubelet |
apiserver-675fc6b586-hhb4g |
Started |
Started container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-675fc6b586-hhb4g |
Created |
Created container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfca545c1b42ae20c6465e61cf16a44f9411d9ed30af1f9017ed6da0d7ebd216" in 5.823932735s | |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
Created |
Created container fix-audit-permissions | |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
Started |
Started container fix-audit-permissions | |
openshift-apiserver |
kubelet |
apiserver-675fc6b586-hhb4g |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47bc752254f826905ac36cc2eb1819373a3045603e5dfa03c7f9e6d73c3fd9f9" in 5.796016689s | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-675fc6b586-hhb4g pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-675fc6b586-hhb4g pod)" | |
openshift-apiserver |
kubelet |
apiserver-675fc6b586-hhb4g |
Started |
Started container openshift-apiserver-check-endpoints | |
openshift-apiserver |
kubelet |
apiserver-675fc6b586-hhb4g |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47bc752254f826905ac36cc2eb1819373a3045603e5dfa03c7f9e6d73c3fd9f9" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-675fc6b586-hhb4g |
Created |
Created container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-675fc6b586-hhb4g |
Started |
Started container openshift-apiserver | |
openshift-apiserver |
kubelet |
apiserver-675fc6b586-hhb4g |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine | |
openshift-apiserver |
kubelet |
apiserver-675fc6b586-hhb4g |
Created |
Created container openshift-apiserver-check-endpoints | |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dfca545c1b42ae20c6465e61cf16a44f9411d9ed30af1f9017ed6da0d7ebd216" already present on machine | |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
Created |
Created container oauth-apiserver | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (2 containers are waiting in pending apiserver-68b6d6d454-ltjtf pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in pending apiserver-68b6d6d454-ltjtf pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
Started |
Started container oauth-apiserver | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is waiting in pending apiserver-68b6d6d454-ltjtf pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-68b6d6d454-ltjtf pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" | |
openshift-apiserver |
check-endpoint-checkendpointsstop |
master1 |
FastControllerResync |
Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling | |
openshift-apiserver |
check-endpoint-checkendpointstimetostart |
master1 |
FastControllerResync |
Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-675fc6b586-hhb4g pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-675fc6b586-hhb4g pod)" | |
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver |
openshift-apiserver-operator |
Created <unknown>/v1.authorization.openshift.io because it was missing | ||
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver |
authentication-operator |
Created <unknown>/v1.user.openshift.io because it was missing | ||
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver |
openshift-apiserver-operator |
Created <unknown>/v1.build.openshift.io because it was missing | ||
openshift-apiserver-operator |
openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver |
openshift-apiserver-operator |
Created <unknown>/v1.route.openshift.io because it was missing | ||
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." | |
| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.apps.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.quota.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.image.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.project.openshift.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: \"oauth.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"user.openshift.io.v1\" is not ready: an attempt failed with statusCode = 404, err = the server could not find the requested resource\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | authentication-operator | | Created <unknown>/v1.oauth.openshift.io because it was missing |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | authentication-operator | OpenShiftAPICheckFailed | "user.openshift.io.v1" failed with an attempt failed with statusCode = 404, err = the server could not find the requested resource |
| | openshift-authentication-operator | oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | authentication-operator | OpenShiftAPICheckFailed | "oauth.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | OpenShiftAPICheckFailed | "image.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
| (x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-apiserver" changed from "" to "4.12.2" |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-oauth-apiserver (container is not ready in apiserver-68b6d6d454-ltjtf pod)\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-oauth-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.12.2"}] to [{"operator" "4.12.2"} {"oauth-apiserver" "4.12.2"}] |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | OpenShiftAPICheckFailed | "authorization.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.template.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | OpenShiftAPICheckFailed | "apps.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | OpenShiftAPICheckFailed | "quota.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | OpenShiftAPICheckFailed | "project.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | OpenShiftAPICheckFailed | "build.openshift.io.v1" failed with an attempt failed with statusCode = 503, err = the server is currently unable to handle the request |
| | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | | Created <unknown>/v1.security.openshift.io because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/template.openshift.io/v1: 401" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: status.versions changed from [{"operator" "4.12.2"}] to [{"operator" "4.12.2"} {"openshift-apiserver" "4.12.2"}] |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorVersionChanged | clusteroperator/openshift-apiserver version "openshift-apiserver" changed from "" to "4.12.2" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded changed from True to False ("All is well"),Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" to "APIServicesAvailable: \"apps.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"authorization.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"build.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"image.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"project.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"quota.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"route.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"security.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request\nAPIServicesAvailable: \"template.openshift.io.v1\" is not ready: an attempt failed with statusCode = 503, err = the server is currently unable to handle the request" |
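ClusterOperator is a cluster-scoped OpenShift custom resource, so generic clients read it through the custom-objects API rather than a typed client. A minimal sketch (same kubeconfig assumption as above) that prints the conditions these OperatorStatusChanged events are diffing:

```python
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

# ClusterOperator objects live in the config.openshift.io/v1 API group.
co = custom.get_cluster_custom_object(
    group="config.openshift.io",
    version="v1",
    plural="clusteroperators",
    name="openshift-apiserver",
)

# Print the three headline conditions that the status syncer updates.
for cond in co["status"]["conditions"]:
    if cond["type"] in ("Available", "Degraded", "Progressing"):
        print(f'{cond["type"]}={cond["status"]}: {cond.get("message", "")}')
```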
| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master1_80c36476-1450-45f4-8bc5-165803c480d7 became leader |
| | kube-system | podsecurity-admission-label-sync-controller-pod-security-admission-label-synchronization-controller-pod-security-admission-label-synchronization-controller | bootstrap-kube-controller-manager-master1 | FastControllerResync | Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling |
| | kube-system | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master1_80c36476-1450-45f4-8bc5-165803c480d7 became leader |
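The LeaderElection events record which instance holds the named lock object. client-go leader election stores the holder record in an annotation on the lock; a sketch for inspecting it, assuming this lock is a ConfigMap (newer components use coordination.k8s.io/v1 Leases instead, in which case the holder is in `spec.holderIdentity`):

```python
import json

from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# client-go records the current holder in this annotation on the lock object.
ANNOTATION = "control-plane.alpha.kubernetes.io/leader"

cm = core.read_namespaced_config_map("cluster-policy-controller-lock", "kube-system")
record = json.loads(cm.metadata.annotations[ANNOTATION])
print("holder:", record["holderIdentity"])
print("renewed:", record["renewTime"])
```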
| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master1 | FastControllerResync | Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master1 | CreatedSCCRanges | created SCC ranges for openshift-kube-storage-version-migrator namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master1 | CreatedSCCRanges | created SCC ranges for openshift-dns namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master1 | CreatedSCCRanges | created SCC ranges for openshift-ovn-kubernetes namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master1 | CreatedSCCRanges | created SCC ranges for openshift namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master1 | CreatedSCCRanges | created SCC ranges for openshift-node namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master1 | CreatedSCCRanges | created SCC ranges for openshift-host-network namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master1 | CreatedSCCRanges | created SCC ranges for openshift-authentication namespace |
| | kube-system | cluster-policy-controller-namespace-security-allocation-controller | bootstrap-kube-controller-manager-master1 | CreatedSCCRanges | created SCC ranges for openshift-multus namespace |
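Each CreatedSCCRanges event corresponds to the namespace-security-allocation-controller writing per-namespace UID/MCS range annotations that SCC admission later consumes. A sketch that reads them back for one of the namespaces above:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# The CreatedSCCRanges events materialize as these annotations on the namespace.
ns = core.read_namespace("openshift-authentication")
for key in (
    "openshift.io/sa.scc.uid-range",
    "openshift.io/sa.scc.mcs",
    "openshift.io/sa.scc.supplemental-groups",
):
    print(key, "=", (ns.metadata.annotations or {}).get(key))
```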
| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
| | openshift-kube-storage-version-migrator | default-scheduler | migrator-5c54d8d69d-tfnbw | Scheduled | Successfully assigned openshift-kube-storage-version-migrator/migrator-5c54d8d69d-tfnbw to master1 |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5c54d8d69d-tfnbw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e36c1c4e383fd252168aa2cb465236aa642062446aa3a026f06ea4a4afb52d7f" |
| | openshift-kube-storage-version-migrator | multus | migrator-5c54d8d69d-tfnbw | AddedInterface | Add eth0 [10.128.0.27/23] from ovn-kubernetes |
| | openshift-kube-storage-version-migrator | replicaset-controller | migrator-5c54d8d69d | SuccessfulCreate | Created pod: migrator-5c54d8d69d-tfnbw |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5c54d8d69d-tfnbw | Created | Created container migrator |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5c54d8d69d-tfnbw | Started | Started container migrator |
| | openshift-kube-storage-version-migrator | kubelet | migrator-5c54d8d69d-tfnbw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e36c1c4e383fd252168aa2cb465236aa642062446aa3a026f06ea4a4afb52d7f" in 4.435945341s |
| | openshift-route-controller-manager | kubelet | route-controller-manager-6c77d44985-7k8lf | FailedMount | Unable to attach or mount volumes: unmounted volumes=[client-ca], unattached volumes=[kube-api-access-s7lqk config client-ca serving-cert]: timed out waiting for the condition |
| | openshift-machine-api | control-plane-machine-set-operator-749d766b67-gc5pf_705c060c-414e-48e0-9324-d93348ee135c | control-plane-machine-set-leader | LeaderElection | control-plane-machine-set-operator-749d766b67-gc5pf_705c060c-414e-48e0-9324-d93348ee135c became leader |
| | openshift-controller-manager | kubelet | controller-manager-5d9b9687f-w8g4x | FailedMount | Unable to attach or mount volumes: unmounted volumes=[client-ca], unattached volumes=[config client-ca serving-cert proxy-ca-bundles kube-api-access-s5fcs]: timed out waiting for the condition |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-loggingsyncer | kube-storage-version-migrator-operator | FastControllerResync | Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-static-conditions-controller-staticconditionscontroller | kube-storage-version-migrator-operator | FastControllerResync | Controller "StaticConditionsController" resync interval is set to 0s which might lead to client request throttling |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-lock | LeaderElection | kube-storage-version-migrator-operator-fbcb7858d-95djl_5c85c600-27cd-427b-99f1-d6f34fa06a97 became leader |
| | openshift-kube-storage-version-migrator-operator | openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator | kube-storage-version-migrator-operator | OperatorStatusChanged | Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| (x2) | openshift-operator-lifecycle-manager | controllermanager | packageserver-pdb | NoPods | No matching pods found |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-967d9d7c4-5t48s_3decab72-c3e2-43c6-9fb6-6b85fbd99797 became leader |
| | openshift-controller-manager-operator | openshift-controller-manager-operator | openshift-controller-manager-operator-lock | LeaderElection | openshift-controller-manager-operator-967d9d7c4-5t48s_3decab72-c3e2-43c6-9fb6-6b85fbd99797 became leader |
| | openshift-machine-api | kubelet | control-plane-machine-set-operator-749d766b67-gc5pf | BackOff | Back-off restarting failed container |
| (x2) | openshift-machine-api | kubelet | control-plane-machine-set-operator-749d766b67-gc5pf | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a09b3bee316f15d4adac8d392f514c1491bdf37760b36f3a8714e563833ca7c" already present on machine |
| (x3) | openshift-machine-api | kubelet | control-plane-machine-set-operator-749d766b67-gc5pf | Created | Created container control-plane-machine-set-operator |
| (x3) | openshift-machine-api | kubelet | control-plane-machine-set-operator-749d766b67-gc5pf | Started | Started container control-plane-machine-set-operator |
| | openshift-cluster-version | openshift-cluster-version | version | LoadPayload | Loading payload version="4.12.2" image="quay.io/openshift-release-dev/ocp-release@sha256:31c7741fc7bb73ff752ba43f5acf014b8fadd69196fc522241302de918066cb1" |
| | openshift-cluster-version | openshift-cluster-version | version | RetrievePayload | Retrieving and verifying payload version="4.12.2" image="quay.io/openshift-release-dev/ocp-release@sha256:31c7741fc7bb73ff752ba43f5acf014b8fadd69196fc522241302de918066cb1" |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master1_02cb39cb-09d7-41d3-b0fb-abce3deb5edd became leader |
| | openshift-cluster-version | openshift-cluster-version | version | LeaderElection | master1_02cb39cb-09d7-41d3-b0fb-abce3deb5edd became leader |
| (x2) | openshift-controller-manager | kubelet | controller-manager-5d9b9687f-w8g4x | FailedMount | Unable to attach or mount volumes: unmounted volumes=[client-ca], unattached volumes=[kube-api-access-s5fcs config client-ca serving-cert proxy-ca-bundles]: timed out waiting for the condition |
| | openshift-cluster-version | openshift-cluster-version | version | PayloadLoaded | Payload loaded version="4.12.2" image="quay.io/openshift-release-dev/ocp-release@sha256:31c7741fc7bb73ff752ba43f5acf014b8fadd69196fc522241302de918066cb1" architecture="amd64" |
| (x11) | openshift-controller-manager | kubelet | controller-manager-5d9b9687f-w8g4x | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
| (x2) | openshift-route-controller-manager | kubelet | route-controller-manager-6c77d44985-7k8lf | FailedMount | Unable to attach or mount volumes: unmounted volumes=[client-ca], unattached volumes=[config client-ca serving-cert kube-api-access-s7lqk]: timed out waiting for the condition |
| (x11) | openshift-route-controller-manager | kubelet | route-controller-manager-6c77d44985-7k8lf | FailedMount | MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
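The repeated (x11) FailedMount events show kubelet retrying a ConfigMap volume whose source object does not exist yet; the mounts succeed once the operator publishes the `client-ca` ConfigMap. A sketch that reproduces the check kubelet is making, with the names taken from the events above:

```python
from kubernetes import client, config
from kubernetes.client.exceptions import ApiException

config.load_kube_config()
core = client.CoreV1Api()

# The pods mount a "client-ca" ConfigMap volume; kubelet fails the mount
# with "configmap not found" until the object appears in the namespace.
for namespace in ("openshift-controller-manager", "openshift-route-controller-manager"):
    try:
        core.read_namespaced_config_map("client-ca", namespace)
        print(f"{namespace}/client-ca: present, mounts can proceed")
    except ApiException as err:
        if err.status == 404:
            print(f"{namespace}/client-ca: still missing, FailedMount will repeat")
        else:
            raise
```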
| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
| (x3) | openshift-network-diagnostics | default-scheduler | network-check-source-b94cc7564-hh9xp | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
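This FailedScheduling is expected during single-node bootstrap: the only node still carries the master taint and the pod has no matching toleration, so the scheduler (correctly) finds zero feasible nodes. A sketch that lists the taints the scheduler is evaluating:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Print every node's taints; here master1 still carries
# node-role.kubernetes.io/master, which network-check-source does not tolerate.
for node in core.list_node().items:
    taints = node.spec.taints or []
    print(node.metadata.name, [(t.key, t.effect) for t in taints])
```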
| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
| | openshift-machine-config-operator | kubelet | machine-config-operator-7bc6cdd784-g4xqh | Started | Started container machine-config-operator |
| | openshift-machine-config-operator | default-scheduler | machine-config-operator-7bc6cdd784-g4xqh | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-operator-7bc6cdd784-g4xqh to master1 |
| | openshift-cloud-credential-operator | default-scheduler | cloud-credential-operator-fd47f96b9-fd7mn | Scheduled | Successfully assigned openshift-cloud-credential-operator/cloud-credential-operator-fd47f96b9-fd7mn to master1 |
| | openshift-machine-config-operator | multus | machine-config-operator-7bc6cdd784-g4xqh | AddedInterface | Add eth0 [10.128.0.28/23] from ovn-kubernetes |
| | openshift-machine-config-operator | kubelet | machine-config-operator-7bc6cdd784-g4xqh | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fafaae3445cd29a8ba685901a338b8539877d15f149466cc7b4e42fdca60c40" already present on machine |
| | openshift-cloud-credential-operator | replicaset-controller | cloud-credential-operator-fd47f96b9 | SuccessfulCreate | Created pod: cloud-credential-operator-fd47f96b9-fd7mn |
| | openshift-cloud-credential-operator | deployment-controller | cloud-credential-operator | ScalingReplicaSet | Scaled up replica set cloud-credential-operator-fd47f96b9 to 1 |
| | openshift-machine-config-operator | deployment-controller | machine-config-operator | ScalingReplicaSet | Scaled up replica set machine-config-operator-7bc6cdd784 to 1 |
| | openshift-machine-config-operator | kubelet | machine-config-operator-7bc6cdd784-g4xqh | Created | Created container machine-config-operator |
| | openshift-machine-config-operator | replicaset-controller | machine-config-operator-7bc6cdd784 | SuccessfulCreate | Created pod: machine-config-operator-7bc6cdd784-g4xqh |
| | openshift-monitoring | deployment-controller | cluster-monitoring-operator | ScalingReplicaSet | Scaled up replica set cluster-monitoring-operator-6f98b45499 to 1 |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6f98b45499-tskfn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| | openshift-machine-api | multus | cluster-baremetal-operator-8bd65644d-wntnm | AddedInterface | Add eth0 [10.128.0.31/23] from ovn-kubernetes |
| | openshift-monitoring | multus | cluster-monitoring-operator-6f98b45499-tskfn | AddedInterface | Add eth0 [10.128.0.30/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/controllerconfigs.machineconfiguration.openshift.io because it was missing |
| | openshift-machine-api | replicaset-controller | cluster-baremetal-operator-8bd65644d | SuccessfulCreate | Created pod: cluster-baremetal-operator-8bd65644d-wntnm |
| | openshift-machine-api | deployment-controller | cluster-baremetal-operator | ScalingReplicaSet | Scaled up replica set cluster-baremetal-operator-8bd65644d to 1 |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-fd47f96b9-fd7mn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e2981cdba6d1e6787c1b5b048bba246cc307650a53ef680dc44593e6227333f1" |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-fd47f96b9-fd7mn | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | default-scheduler | cluster-monitoring-operator-6f98b45499-tskfn | Scheduled | Successfully assigned openshift-monitoring/cluster-monitoring-operator-6f98b45499-tskfn to master1 |
| | openshift-machine-api | default-scheduler | cluster-baremetal-operator-8bd65644d-wntnm | Scheduled | Successfully assigned openshift-machine-api/cluster-baremetal-operator-8bd65644d-wntnm to master1 |
| | openshift-monitoring | replicaset-controller | cluster-monitoring-operator-6f98b45499 | SuccessfulCreate | Created pod: cluster-monitoring-operator-6f98b45499-tskfn |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-fd47f96b9-fd7mn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-fd47f96b9-fd7mn | Started | Started container kube-rbac-proxy |
| | openshift-cloud-credential-operator | multus | cloud-credential-operator-fd47f96b9-fd7mn | AddedInterface | Add eth0 [10.128.0.29/23] from ovn-kubernetes |
| | default | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config-operator started a version change from [] to [{operator 4.12.2}] |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n openshift-machine-config-operator because it was missing |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6f98b45499-tskfn | Created | Created container kube-rbac-proxy |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6f98b45499-tskfn | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6f98b45499-tskfn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b396d60a7de757d04a86a334d1b86faa3121df769903d76d8c98a25c3621705" |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-8bd65644d-wntnm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4e5f13cd4d2a9556b980a8a6790c237685b007f7ea7723191bf1633d8d88e27" |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/master-user-data-managed -n openshift-machine-api because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/worker-user-data-managed -n openshift-machine-api because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-daemon-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-daemon-events -n default because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-daemon because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-daemon -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/cookie-secret -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-g24jx | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fafaae3445cd29a8ba685901a338b8539877d15f149466cc7b4e42fdca60c40" already present on machine |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-g24jx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3" |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-g24jx | Started | Started container machine-config-daemon |
| | openshift-machine-config-operator | daemonset-controller | machine-config-daemon | SuccessfulCreate | Created pod: machine-config-daemon-g24jx |
| | openshift-machine-config-operator | default-scheduler | machine-config-daemon-g24jx | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-daemon-g24jx to master1 |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-g24jx | Created | Created container machine-config-daemon |
| | openshift-config-operator | default-scheduler | openshift-config-operator-6b6746cf56-7zmgd | Scheduled | Successfully assigned openshift-config-operator/openshift-config-operator-6b6746cf56-7zmgd to master1 |
| | openshift-machine-api | default-scheduler | cluster-autoscaler-operator-56b65b888d-pgphn | Scheduled | Successfully assigned openshift-machine-api/cluster-autoscaler-operator-56b65b888d-pgphn to master1 |
| | openshift-config-operator | deployment-controller | openshift-config-operator | ScalingReplicaSet | Scaled up replica set openshift-config-operator-6b6746cf56 to 1 |
| | openshift-machine-api | replicaset-controller | cluster-autoscaler-operator-56b65b888d | SuccessfulCreate | Created pod: cluster-autoscaler-operator-56b65b888d-pgphn |
| | openshift-machine-api | deployment-controller | cluster-autoscaler-operator | ScalingReplicaSet | Scaled up replica set cluster-autoscaler-operator-56b65b888d to 1 |
| | openshift-etcd-operator | deployment-controller | etcd-operator | ScalingReplicaSet | Scaled up replica set etcd-operator-78596fc689 to 1 |
| | openshift-etcd-operator | replicaset-controller | etcd-operator-78596fc689 | SuccessfulCreate | Created pod: etcd-operator-78596fc689-zvm86 |
| | openshift-machine-api | multus | cluster-autoscaler-operator-56b65b888d-pgphn | AddedInterface | Add eth0 [10.128.0.32/23] from ovn-kubernetes |
| | openshift-config-operator | replicaset-controller | openshift-config-operator-6b6746cf56 | SuccessfulCreate | Created pod: openshift-config-operator-6b6746cf56-7zmgd |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-56b65b888d-pgphn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| | openshift-etcd-operator | default-scheduler | etcd-operator-78596fc689-zvm86 | Scheduled | Successfully assigned openshift-etcd-operator/etcd-operator-78596fc689-zvm86 to master1 |
| | openshift-config-operator | kubelet | openshift-config-operator-6b6746cf56-7zmgd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f6ba6ec29ae317b65ccae96aae4338eed31430f09c536e09ac1e36d9f11b208e" |
| | openshift-etcd-operator | multus | etcd-operator-78596fc689-zvm86 | AddedInterface | Add eth0 [10.128.0.33/23] from ovn-kubernetes |
| | openshift-etcd-operator | kubelet | etcd-operator-78596fc689-zvm86 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8090f9dd771f4f292e508b5ffca3aca3b4e6226aed25e131e49a9b6596b0b451" |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-56b65b888d-pgphn | Created | Created container kube-rbac-proxy |
| | openshift-config-operator | multus | openshift-config-operator-6b6746cf56-7zmgd | AddedInterface | Add eth0 [10.128.0.34/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-56b65b888d-pgphn | Started | Started container kube-rbac-proxy |
| | openshift-machine-api | kubelet | cluster-autoscaler-operator-56b65b888d-pgphn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af36c8fe819208e55cc0346c504d641e31a0a1575420a21a6d108a67cbb978df" |
| | openshift-cluster-samples-operator | default-scheduler | cluster-samples-operator-c868985c6-69zt7 | Scheduled | Successfully assigned openshift-cluster-samples-operator/cluster-samples-operator-c868985c6-69zt7 to master1 |
| | openshift-insights | replicaset-controller | insights-operator-847896d87d | SuccessfulCreate | Created pod: insights-operator-847896d87d-xsmtv |
| | openshift-insights | deployment-controller | insights-operator | ScalingReplicaSet | Scaled up replica set insights-operator-847896d87d to 1 |
| | openshift-cloud-controller-manager-operator | replicaset-controller | cluster-cloud-controller-manager-operator-54fdfd4884 | SuccessfulCreate | Created pod: cluster-cloud-controller-manager-operator-54fdfd4884-5nzhm |
| | openshift-cloud-controller-manager-operator | deployment-controller | cluster-cloud-controller-manager-operator | ScalingReplicaSet | Scaled up replica set cluster-cloud-controller-manager-operator-54fdfd4884 to 1 |
| | openshift-cluster-storage-operator | replicaset-controller | csi-snapshot-controller-operator-66559c5fb7 | SuccessfulCreate | Created pod: csi-snapshot-controller-operator-66559c5fb7-jp867 |
| | openshift-insights | default-scheduler | insights-operator-847896d87d-xsmtv | Scheduled | Successfully assigned openshift-insights/insights-operator-847896d87d-xsmtv to master1 |
| | openshift-cloud-controller-manager-operator | default-scheduler | cluster-cloud-controller-manager-operator-54fdfd4884-5nzhm | Scheduled | Successfully assigned openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-54fdfd4884-5nzhm to master1 |
| | openshift-cluster-samples-operator | deployment-controller | cluster-samples-operator | ScalingReplicaSet | Scaled up replica set cluster-samples-operator-c868985c6 to 1 |
| | openshift-cluster-samples-operator | replicaset-controller | cluster-samples-operator-c868985c6 | SuccessfulCreate | Created pod: cluster-samples-operator-c868985c6-69zt7 |
| | openshift-cluster-storage-operator | default-scheduler | csi-snapshot-controller-operator-66559c5fb7-jp867 | Scheduled | Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-operator-66559c5fb7-jp867 to master1 |
| | openshift-cluster-storage-operator | deployment-controller | csi-snapshot-controller-operator | ScalingReplicaSet | Scaled up replica set csi-snapshot-controller-operator-66559c5fb7 to 1 |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-operator-66559c5fb7-jp867 | AddedInterface | Add eth0 [10.128.0.35/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6f98b45499-tskfn | Created | Created container cluster-monitoring-operator |
| | openshift-monitoring | replicaset-controller | prometheus-operator-admission-webhook-849b6cd6bf | SuccessfulCreate | Created pod: prometheus-operator-admission-webhook-849b6cd6bf-gjxgr |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-66559c5fb7-jp867 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:961706a0d75013fcef5f3bbf59754ed23549316fba391249b22529d6a97f1cb2" |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6f98b45499-tskfn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8b396d60a7de757d04a86a334d1b86faa3121df769903d76d8c98a25c3621705" in 5.94112827s |
| | kube-system | cluster-policy-controller-webhook-authenticator-cert-approver-csr-approver-controller-webhookauthenticatorcertapprover_csr-approver-controller | bootstrap-kube-controller-manager-master1 | CSRApproval | The CSR "system:openshift:openshift-monitoring-wgh8m" has been approved |
| Time | Namespace | Component | RelatedObject | Reason | Message |
|---|---|---|---|---|---|
| | openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-54fdfd4884-5nzhm | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13397fef9671257021455712bf8242685325c97dbc6700c988bd6ab5e68ff57e" |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-fd47f96b9-fd7mn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e2981cdba6d1e6787c1b5b048bba246cc307650a53ef680dc44593e6227333f1" in 6.555000411s |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-8bd65644d-wntnm | Started | Started container baremetal-kube-rbac-proxy |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | ClientCertificateCreated | A new client certificate for OpenShiftMonitoringClientCertRequester is available |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | CSRCreated | A csr "system:openshift:openshift-monitoring-wgh8m" is created for OpenShiftMonitoringClientCertRequester |
| | openshift-monitoring | cluster-monitoring-operator-openshiftmonitoringclientcertrequester | cluster-monitoring-operator | NoValidCertificateFound | No valid client certificate for OpenShiftMonitoringClientCertRequester is found: unable to parse certificate: data does not contain any valid RSA or ECDSA certificates |
| | openshift-monitoring | deployment-controller | prometheus-operator-admission-webhook | ScalingReplicaSet | Scaled up replica set prometheus-operator-admission-webhook-849b6cd6bf to 1 |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-8bd65644d-wntnm | Created | Created container baremetal-kube-rbac-proxy |
| | openshift-monitoring | kubelet | cluster-monitoring-operator-6f98b45499-tskfn | Started | Started container cluster-monitoring-operator |
| | openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-849b6cd6bf-gjxgr | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-fd47f96b9-fd7mn | Created | Created container cloud-credential-operator |
| | openshift-insights | multus | insights-operator-847896d87d-xsmtv | AddedInterface | Add eth0 [10.128.0.37/23] from ovn-kubernetes |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-8bd65644d-wntnm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-8bd65644d-wntnm | Started | Started container cluster-baremetal-operator |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-8bd65644d-wntnm | Created | Created container cluster-baremetal-operator |
| | openshift-insights | kubelet | insights-operator-847896d87d-xsmtv | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aec165a1c80946b96c6ba401ff249e31554a3cce8ab2f996b9f6618dbe9bc84a" |
| | openshift-machine-api | kubelet | cluster-baremetal-operator-8bd65644d-wntnm | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c4e5f13cd4d2a9556b980a8a6790c237685b007f7ea7723191bf1633d8d88e27" in 6.093769044s |
| | openshift-cluster-samples-operator | kubelet | cluster-samples-operator-c868985c6-69zt7 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e5cf6294e213c4dfbd16d7f5e0bd3071703a0fde2342eb09b3957eb6a2b6b3d" |
| | openshift-cluster-samples-operator | multus | cluster-samples-operator-c868985c6-69zt7 | AddedInterface | Add eth0 [10.128.0.36/23] from ovn-kubernetes |
| | openshift-cloud-credential-operator | kubelet | cloud-credential-operator-fd47f96b9-fd7mn | Started | Started container cloud-credential-operator |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-g24jx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3" in 4.723032598s |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-g24jx | Created | Created container oauth-proxy |
| | openshift-machine-config-operator | kubelet | machine-config-daemon-g24jx | Started | Started container oauth-proxy |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n openshift-machine-config-operator because it was missing |
| | openshift-machine-config-operator | multus | machine-config-controller-59786d68b6-6m572 | AddedInterface | Add eth0 [10.128.0.38/23] from ovn-kubernetes |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller-events because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/machine-config-controller-events -n default because it was missing |
| | openshift-machine-config-operator | kubelet | machine-config-controller-59786d68b6-6m572 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fafaae3445cd29a8ba685901a338b8539877d15f149466cc7b4e42fdca60c40" already present on machine |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-controller because it was missing |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-controller -n openshift-machine-config-operator because it was missing |
| | openshift-kube-apiserver-operator | replicaset-controller | kube-apiserver-operator-67fd98d7b4 | SuccessfulCreate | Created pod: kube-apiserver-operator-67fd98d7b4-kscch |
| | openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-controller because it was missing |
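The machine-config-operator applies its RBAC idempotently: each *Created event fires only when the object was missing on this pass. A sketch that reads back one of the objects just created and lists its rules, confirming the create landed:

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# Read back the ClusterRole the operator just created because it was missing.
role = rbac.read_cluster_role("machine-config-controller")
for rule in role.rules or []:
    print(rule.api_groups, rule.resources, rule.verbs)
```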
openshift-machine-config-operator |
deployment-controller |
machine-config-controller |
ScalingReplicaSet |
Scaled up replica set machine-config-controller-59786d68b6 to 1 | |
openshift-kube-apiserver-operator |
deployment-controller |
kube-apiserver-operator |
ScalingReplicaSet |
Scaled up replica set kube-apiserver-operator-67fd98d7b4 to 1 | |
openshift-kube-apiserver-operator |
default-scheduler |
kube-apiserver-operator-67fd98d7b4-kscch |
Scheduled |
Successfully assigned openshift-kube-apiserver-operator/kube-apiserver-operator-67fd98d7b4-kscch to master1 | |
openshift-machine-config-operator |
default-scheduler |
machine-config-controller-59786d68b6-6m572 |
Scheduled |
Successfully assigned openshift-machine-config-operator/machine-config-controller-59786d68b6-6m572 to master1 | |
openshift-machine-config-operator |
replicaset-controller |
machine-config-controller-59786d68b6 |
SuccessfulCreate |
Created pod: machine-config-controller-59786d68b6-6m572 | |
openshift-cluster-node-tuning-operator |
replicaset-controller |
cluster-node-tuning-operator-79c4cfd957 |
SuccessfulCreate |
Created pod: cluster-node-tuning-operator-79c4cfd957-5vbb6 | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-56b65b888d-pgphn |
Created |
Created container cluster-autoscaler-operator | |
openshift-machine-api |
kubelet |
cluster-autoscaler-operator-56b65b888d-pgphn |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af36c8fe819208e55cc0346c504d641e31a0a1575420a21a6d108a67cbb978df" in 4.884445317s | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-59786d68b6-6m572 |
Created |
Created container oauth-proxy | |
openshift-machine-config-operator |
kubelet |
machine-config-controller-59786d68b6-6m572 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3" already present on machine | |
openshift-kube-apiserver-operator |
multus |
kube-apiserver-operator-67fd98d7b4-kscch |
AddedInterface |
Add eth0 [10.128.0.39/23] from ovn-kubernetes | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nReadyIngressNodesAvailable: Authentication requires functional ingress which requires at least one schedulable and ready node. Got 0 worker nodes, 1 master nodes, 0 custom target nodes (none are schedulable or ready for ingress pods)." to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" | |
| openshift-monitoring | default-scheduler | prometheus-operator-admission-webhook-849b6cd6bf-gjxgr | Scheduled | Successfully assigned openshift-monitoring/prometheus-operator-admission-webhook-849b6cd6bf-gjxgr to master1 |
| openshift-machine-config-operator | kubelet | machine-config-controller-59786d68b6-6m572 | Started | Started container machine-config-controller |
| openshift-machine-api | kubelet | cluster-autoscaler-operator-56b65b888d-pgphn | Started | Started container cluster-autoscaler-operator |
| openshift-network-diagnostics | default-scheduler | network-check-source-b94cc7564-hh9xp | Scheduled | Successfully assigned openshift-network-diagnostics/network-check-source-b94cc7564-hh9xp to master1 |
| openshift-cluster-node-tuning-operator | deployment-controller | cluster-node-tuning-operator | ScalingReplicaSet | Scaled up replica set cluster-node-tuning-operator-79c4cfd957 to 1 |
| openshift-machine-config-operator | kubelet | machine-config-controller-59786d68b6-6m572 | Started | Started container oauth-proxy |
| openshift-cluster-node-tuning-operator | default-scheduler | cluster-node-tuning-operator-79c4cfd957-5vbb6 | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-79c4cfd957-5vbb6 to master1 |
| openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-67fd98d7b4-kscch | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine |
| openshift-machine-config-operator | kubelet | machine-config-controller-59786d68b6-6m572 | Created | Created container machine-config-controller |
| openshift-kube-apiserver-operator | kube-apiserver-operator-webhooksupportabilitycontroller | kube-apiserver-operator | FastControllerResync | Controller "webhookSupportabilityController" resync interval is set to 0s which might lead to client request throttling |
| openshift-ingress-operator | deployment-controller | ingress-operator | ScalingReplicaSet | Scaled up replica set ingress-operator-6dbf96bf9c to 1 |
| openshift-operator-lifecycle-manager | replicaset-controller | olm-operator-79f8b8bdc4 | SuccessfulCreate | Created pod: olm-operator-79f8b8bdc4-dm9dl |
| openshift-operator-lifecycle-manager | deployment-controller | package-server-manager | ScalingReplicaSet | Scaled up replica set package-server-manager-55cc8b7f to 1 |
| openshift-ingress-operator | default-scheduler | ingress-operator-6dbf96bf9c-6rdr8 | Scheduled | Successfully assigned openshift-ingress-operator/ingress-operator-6dbf96bf9c-6rdr8 to master1 |
| openshift-operator-lifecycle-manager | default-scheduler | package-server-manager-55cc8b7f-lssld | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/package-server-manager-55cc8b7f-lssld to master1 |
| openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-67fd98d7b4-kscch | Created | Created container kube-apiserver-operator |
| openshift-kube-apiserver-operator | kube-apiserver-operator-nodecontroller | kube-apiserver-operator | FastControllerResync | Controller "NodeController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-apiserver-operator | kube-apiserver-operator-prunecontroller | kube-apiserver-operator | FastControllerResync | Controller "PruneController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | FastControllerResync | Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller-installercontroller | kube-apiserver-operator | FastControllerResync | Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling |
| openshift-config-operator | kubelet | openshift-config-operator-6b6746cf56-7zmgd | Created | Created container openshift-config-operator |
| openshift-kube-apiserver-operator | kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | kube-apiserver-operator | FastControllerResync | Controller "ConnectivityCheckController" resync interval is set to 0s which might lead to client request throttling |
| openshift-monitoring | kubelet | prometheus-operator-admission-webhook-849b6cd6bf-gjxgr | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30784b4b00568946c30c1830da739d61193a622cc3a17286fe91885f0c93af9f" |
| openshift-monitoring | multus | prometheus-operator-admission-webhook-849b6cd6bf-gjxgr | AddedInterface | Add eth0 [10.128.0.41/23] from ovn-kubernetes |
| openshift-etcd-operator | kubelet | etcd-operator-78596fc689-zvm86 | Started | Started container etcd-operator |
| openshift-etcd-operator | kubelet | etcd-operator-78596fc689-zvm86 | Created | Created container etcd-operator |
| openshift-etcd-operator | kubelet | etcd-operator-78596fc689-zvm86 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8090f9dd771f4f292e508b5ffca3aca3b4e6226aed25e131e49a9b6596b0b451" in 5.88810069s |
| openshift-ingress-operator | replicaset-controller | ingress-operator-6dbf96bf9c | SuccessfulCreate | Created pod: ingress-operator-6dbf96bf9c-6rdr8 |
| openshift-operator-lifecycle-manager | deployment-controller | olm-operator | ScalingReplicaSet | Scaled up replica set olm-operator-79f8b8bdc4 to 1 |
| openshift-config-operator | kubelet | openshift-config-operator-6b6746cf56-7zmgd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f6ba6ec29ae317b65ccae96aae4338eed31430f09c536e09ac1e36d9f11b208e" in 5.67157374s |
| openshift-kube-apiserver-operator | kube-apiserver-operator-unsupportedconfigoverridescontroller | kube-apiserver-operator | FastControllerResync | Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling |
| openshift-network-diagnostics | kubelet | network-check-source-b94cc7564-hh9xp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5e07e3a1c8bfa3f66ddbdf1bb6b12f48587434f8a37f075d6a02435dfa18dc2" already present on machine |
| openshift-network-diagnostics | multus | network-check-source-b94cc7564-hh9xp | AddedInterface | Add eth0 [10.128.0.40/23] from ovn-kubernetes |
| openshift-kube-apiserver-operator | kube-apiserver-operator-loggingsyncer | kube-apiserver-operator | FastControllerResync | Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-67fd98d7b4-kscch_89dd195b-5a5f-476d-8b4f-e2b469e798a4 became leader |
| openshift-kube-apiserver-operator | kube-apiserver-operator-eventwatchcontroller | kube-apiserver-operator | FastControllerResync | Controller "EventWatchController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-apiserver-operator | kube-apiserver-operator-guardcontroller | kube-apiserver-operator | FastControllerResync | Controller "GuardController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-67fd98d7b4-kscch | Started | Started container kube-apiserver-operator |
| openshift-kube-apiserver-operator | kube-apiserver-operator-kubelet-version-skew-controller-kubeletversionskewcontroller | kube-apiserver-operator | FastControllerResync | Controller "KubeletVersionSkewController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-apiserver-operator | kube-apiserver-operator-audit-policy-controller-auditpolicycontroller | kube-apiserver-operator | FastControllerResync | Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling |
| openshift-operator-lifecycle-manager | default-scheduler | olm-operator-79f8b8bdc4-dm9dl | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/olm-operator-79f8b8bdc4-dm9dl to master1 |
| openshift-kube-apiserver-operator | kube-apiserver-operator-feature-upgradeable-featureupgradeablecontroller | kube-apiserver-operator | FastControllerResync | Controller "FeatureUpgradeableController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-apiserver-operator | kube-apiserver-operator | kube-apiserver-operator-lock | LeaderElection | kube-apiserver-operator-67fd98d7b4-kscch_89dd195b-5a5f-476d-8b4f-e2b469e798a4 became leader |
| openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-79c4cfd957-5vbb6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5097e405f3dc5e0bd7e6072d3d93cbfcd45d3d74771003c48e689b2f8c4d3850" |
| openshift-operator-lifecycle-manager | replicaset-controller | package-server-manager-55cc8b7f | SuccessfulCreate | Created pod: package-server-manager-55cc8b7f-lssld |
| openshift-cluster-node-tuning-operator | multus | cluster-node-tuning-operator-79c4cfd957-5vbb6 | AddedInterface | Add eth0 [10.128.0.42/23] from ovn-kubernetes |
| openshift-etcd-operator | openshift-cluster-etcd-operator-nodecontroller | etcd-operator | FastControllerResync | Controller "NodeController" resync interval is set to 0s which might lead to client request throttling |
| openshift-config-operator | config-operator-loggingsyncer | openshift-config-operator | FastControllerResync | Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling |
| openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-6b6746cf56-7zmgd_3cd544a2-1ae4-4b94-99ba-2c78037ec503 became leader |
| openshift-network-diagnostics | check-endpoint-checkendpointstimetostart | master1 | FastControllerResync | Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling |
| openshift-network-diagnostics | check-endpoint-checkendpointsstop | master1 | FastControllerResync | Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling |
| openshift-config-operator | config-operator | config-operator-lock | LeaderElection | openshift-config-operator-6b6746cf56-7zmgd_3cd544a2-1ae4-4b94-99ba-2c78037ec503 became leader |
| openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-78596fc689-zvm86_4491d0ec-05a1-49c2-aeb4-86a851c59ed8 became leader |
| openshift-etcd-operator | openshift-cluster-etcd-operator | openshift-cluster-etcd-operator-lock | LeaderElection | etcd-operator-78596fc689-zvm86_4491d0ec-05a1-49c2-aeb4-86a851c59ed8 became leader |
| openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded set to False ("EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values"),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to True ("All is well"),status.versions changed from [] to [{"raw-internal" "4.12.2"}] |
| openshift-etcd-operator | openshift-cluster-etcd-operator-loggingsyncer | etcd-operator | OperatorLogLevelChange | Operator log level changed from "Debug" to "Normal" |
| openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | FastControllerResync | Controller "ConfigOperatorController" resync interval is set to 10s which might lead to client request throttling |
| openshift-config-operator | config-operator-configoperatorcontroller | openshift-config-operator | ConfigOperatorStatusChanged | Operator conditions defaulted: [{OperatorAvailable True 2023-02-13 14:52:02 +0000 UTC AsExpected } {OperatorProgressing False 2023-02-13 14:52:02 +0000 UTC AsExpected } {OperatorUpgradeable True 2023-02-13 14:52:02 +0000 UTC AsExpected }] |
(x2) | openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorVersionChanged | clusteroperator/config-operator version "operator" changed from "" to "4.12.2" |
| openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | FastControllerResync | Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling |
| openshift-etcd-operator | openshift-cluster-etcd-operator-prunecontroller | etcd-operator | FastControllerResync | Controller "PruneController" resync interval is set to 0s which might lead to client request throttling |
| openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded set to Unknown (""),Progressing set to False ("All is well"),Available set to True ("All is well"),Upgradeable set to True ("All is well"),status.relatedObjects changed from [] to [{"operator.openshift.io" "configs" "" "cluster"} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-operator"}],status.versions changed from [] to [{"operator" "4.12.2"}] |
| openshift-config-operator | config-operator-status-controller-statussyncer_config-operator | openshift-config-operator | OperatorStatusChanged | Status for clusteroperator/config-operator changed: Degraded changed from Unknown to False ("All is well") |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""}] to [{"operator.openshift.io" "kubeapiservers" "" "cluster"} {"apiextensions.k8s.io" "customresourcedefinitions" "" ""} {"security.openshift.io" "securitycontextconstraints" "" ""} {"" "namespaces" "" "openshift-config"} {"" "namespaces" "" "openshift-config-managed"} {"" "namespaces" "" "openshift-kube-apiserver-operator"} {"" "namespaces" "" "openshift-kube-apiserver"} {"admissionregistration.k8s.io" "mutatingwebhookconfigurations" "" ""} {"admissionregistration.k8s.io" "validatingwebhookconfigurations" "" ""} {"controlplane.operator.openshift.io" "podnetworkconnectivitychecks" "openshift-kube-apiserver" ""} {"apiserver.openshift.io" "apirequestcounts" "" ""} {"config.openshift.io" "nodes" "" "cluster"}],status.versions changed from [] to [{"raw-internal" "4.12.2"}] |
| openshift-kube-apiserver-operator | kube-apiserver-operator-nodecontroller | kube-apiserver-operator | MasterNodeObserved | Observed new master node master1 |
| openshift-kube-apiserver-operator | kube-apiserver-operator-serviceaccountissuercontroller | kube-apiserver-operator | ServiceAccountIssuer | Issuer set to default value "https://kubernetes.default.svc" |
| openshift-machine-config-operator | daemonset-controller | machine-config-server | SuccessfulCreate | Created pod: machine-config-server-nggfk |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "node-system-admin-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: missing notAfter |
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "raw-internal" changed from "" to "4.12.2" |
| openshift-etcd-operator | openshift-cluster-etcd-operator-unsupportedconfigoverridescontroller | etcd-operator | FastControllerResync | Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling |
| openshift-machine-config-operator | default-scheduler | machine-config-server-nggfk | Scheduled | Successfully assigned openshift-machine-config-operator/machine-config-server-nggfk to master1 |
| openshift-machine-config-operator | machine-config-operator | machine-config-operator | SecretCreated | Created Secret/node-bootstrapper-token -n openshift-machine-config-operator because it was missing |
| openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/node-bootstrapper -n openshift-machine-config-operator because it was missing |
| openshift-machine-config-operator | machine-config-operator | machine-config-operator | ServiceAccountCreated | Created ServiceAccount/machine-config-server -n openshift-machine-config-operator because it was missing |
| openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system-bootstrap-node-renewal because it was missing |
| openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/machine-config-server because it was missing |
| openshift-machine-config-operator | machine-config-operator | machine-config-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/machine-config-server because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-loggingsyncer | etcd-operator | FastControllerResync | Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling |
| openshift-etcd-operator | openshift-cluster-etcd-operator-guardcontroller | etcd-operator | FastControllerResync | Controller "GuardController" resync interval is set to 0s which might lead to client request throttling |
| openshift-etcd-operator | openshift-cluster-etcd-operator-unsupportedconfigoverridescontroller | etcd-operator | FastControllerResync | Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling |
(x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | etcds.operator.openshift.io "cluster" not found |
| openshift-ingress-operator | multus | ingress-operator-6dbf96bf9c-6rdr8 | AddedInterface | Add eth0 [10.128.0.43/23] from ovn-kubernetes |
| openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller-installercontroller | etcd-operator | FastControllerResync | Controller "InstallerController" resync interval is set to 0s which might lead to client request throttling |
(x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorVersionChanged | clusteroperator/etcd version "raw-internal" changed from "" to "4.12.2" |
| openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionTriggered | new revision 1 triggered by "configmap \"etcd-pod\" not found" |
| openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, secrets: etcd-all-certs, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0 |
| openshift-operator-lifecycle-manager | multus | package-server-manager-55cc8b7f-lssld | AddedInterface | Add eth0 [10.128.0.44/23] from ovn-kubernetes |
| openshift-etcd-operator | openshift-cluster-etcd-operator-nodecontroller | etcd-operator | MasterNodeObserved | Observed new master node master1 |
| openshift-operator-lifecycle-manager | deployment-controller | catalog-operator | ScalingReplicaSet | Scaled up replica set catalog-operator-76c4c9dd94 to 1 |
| openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller | etcd-operator | SecretCreated | Created Secret/etcd-peer-master1 -n openshift-etcd because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller | etcd-operator | SecretCreated | Created Secret/etcd-serving-master1 -n openshift-etcd because it was missing |
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "loadbalancer-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded changed from Unknown to False ("NodeControllerDegraded: All master nodes are ready"),Upgradeable changed from Unknown to True ("KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.") |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-to-kubelet-client-ca" in "openshift-kube-apiserver-operator" requires a new cert |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SignerUpdateRequired | "localhost-recovery-serving-signer" in "openshift-kube-apiserver-operator" requires a new signing cert/key pair: missing notAfter |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "service-network-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert |
| openshift-operator-lifecycle-manager | multus | olm-operator-79f8b8bdc4-dm9dl | AddedInterface | Add eth0 [10.128.0.45/23] from ovn-kubernetes |
(x4) | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-control-plane-signer-ca" in "openshift-kube-apiserver-operator" requires a new cert |
| openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller | etcd-operator | SecretCreated | Created Secret/etcd-serving-metrics-master1 -n openshift-etcd because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-cert-signer-controller-etcdcertsignercontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs -n openshift-etcd because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready" |
| openshift-operator-lifecycle-manager | replicaset-controller | catalog-operator-76c4c9dd94 | SuccessfulCreate | Created pod: catalog-operator-76c4c9dd94-xj8sz |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "kube-apiserver-aggregator-client-ca" in "openshift-config-managed" requires a new cert |
| openshift-operator-lifecycle-manager | default-scheduler | catalog-operator-76c4c9dd94-xj8sz | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/catalog-operator-76c4c9dd94-xj8sz to master1 |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "localhost-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 1 triggered by "configmap \"kube-apiserver-pod\" not found" |
| openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-client-ca -n openshift-etcd because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/revision-status-1 -n openshift-etcd because it was missing |
| openshift-image-registry | replicaset-controller | cluster-image-registry-operator-7f8cfbbb59 | SuccessfulCreate | Created pod: cluster-image-registry-operator-7f8cfbbb59-nfqbn |
| openshift-image-registry | default-scheduler | cluster-image-registry-operator-7f8cfbbb59-nfqbn | Scheduled | Successfully assigned openshift-image-registry/cluster-image-registry-operator-7f8cfbbb59-nfqbn to master1 |
| openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources | etcd-operator | NamespaceUpdated | Updated Namespace/openshift-etcd because it changed |
| openshift-etcd-operator | openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller | etcd-operator | ServiceAccountCreated | Created ServiceAccount/installer-sa -n openshift-etcd because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-backingresourcecontroller-backingresourcecontroller | etcd-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-etcd-installer because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca -n openshift-etcd because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0 |
| openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, secrets: etcd-all-certs, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0]",Progressing changed from Unknown to False ("NodeInstallerProgressing: 1 nodes are at revision 0"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0") |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key" to "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nNodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)" |
(x10) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | RequiredInstallerResourcesMissing | secrets: kube-scheduler-client-cert-key |
| default | machineconfigcontroller-rendercontroller | master | RenderedConfigGenerated | rendered-master-4d84e62e5a4303ab1b6720a1e03cdc40 successfully generated (release version: 4.12.2, controller version: e3dc9430ac753ac0440f55910370da22c65637f1) |
| default | kubelet | master1 | Starting | Starting kubelet. |
| default | kubelet | master1 | NodeHasSufficientMemory | Node master1 status is now: NodeHasSufficientMemory |
| default | kubelet | master1 | NodeHasNoDiskPressure | Node master1 status is now: NodeHasNoDiskPressure |
| default | kubelet | master1 | NodeHasSufficientPID | Node master1 status is now: NodeHasSufficientPID |
| default | kubelet | master1 | NodeNotReady | Node master1 status is now: NodeNotReady |
| default | machineconfigcontroller-rendercontroller | worker | RenderedConfigGenerated | rendered-worker-bd7bdb9e49243812440a51b656072457 successfully generated (release version: 4.12.2, controller version: e3dc9430ac753ac0440f55910370da22c65637f1) |
(x11) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key |
| openshift-image-registry | deployment-controller | cluster-image-registry-operator | ScalingReplicaSet | Scaled up replica set cluster-image-registry-operator-7f8cfbbb59 to 1 |
| openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-serving-ca -n openshift-etcd because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources | etcd-operator | ServiceAccountCreated | Created ServiceAccount/etcd-sa -n openshift-etcd because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/etcd-ca-bundle -n openshift-etcd-operator: cause by changes in data.ca-bundle.crt |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key]\nNodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)" |
| default | kubelet | master1 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods |
| openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/cluster-config-v1 -n openshift-etcd because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)" |
| openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | SecretDeleted | Deleted Secret/etcd-client -n openshift-etcd-operator |
| openshift-kube-apiserver-operator | kube-apiserver-operator-boundsatokensignercontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| openshift-monitoring | kubelet | prometheus-operator-admission-webhook-849b6cd6bf-gjxgr | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:30784b4b00568946c30c1830da739d61193a622cc3a17286fe91885f0c93af9f" already present on machine |
| openshift-etcd-operator | openshift-cluster-etcd-operator-etcdstaticresources-etcdstaticresources | etcd-operator | ServiceUpdated | Updated Service/etcd -n openshift-etcd because it changed |
| openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-ca-bundle -n openshift-etcd because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-peer-client-ca -n openshift-etcd because it was missing |
| openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | SecretCreated | Created Secret/etcd-client -n openshift-etcd because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver because it was missing |
| openshift-cluster-storage-operator | replicaset-controller | cluster-storage-operator-578bfd8476 | SuccessfulCreate | Created pod: cluster-storage-operator-578bfd8476-l4s7l |
| openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-operator-66559c5fb7-jp867 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:961706a0d75013fcef5f3bbf59754ed23549316fba391249b22529d6a97f1cb2" already present on machine |
| openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-54fdfd4884-5nzhm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13397fef9671257021455712bf8242685325c97dbc6700c988bd6ab5e68ff57e" already present on machine |
| openshift-kube-apiserver-operator | kubelet | kube-apiserver-operator-67fd98d7b4-kscch | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition |
| openshift-image-registry | kubelet | cluster-image-registry-operator-7f8cfbbb59-nfqbn | FailedMount | MountVolume.SetUp failed for volume "trusted-ca" : failed to sync configmap cache: timed out waiting for the condition |
| openshift-cluster-storage-operator | default-scheduler | cluster-storage-operator-578bfd8476-l4s7l | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-etcd-operator | openshift-cluster-etcd-operator-resource-sync-controller-resourcesynccontroller | etcd-operator | SecretCreated | Created Secret/etcd-client -n openshift-etcd-operator because it was missing |
| openshift-operator-lifecycle-manager | kubelet | olm-operator-79f8b8bdc4-dm9dl | FailedMount | MountVolume.SetUp failed for volume "profile-collector-cert" : failed to sync secret cache: timed out waiting for the condition |
| openshift-operator-lifecycle-manager | kubelet | catalog-operator-76c4c9dd94-xj8sz | FailedMount | MountVolume.SetUp failed for volume "profile-collector-cert" : failed to sync secret cache: timed out waiting for the condition |
| openshift-operator-lifecycle-manager | kubelet | olm-operator-79f8b8bdc4-dm9dl | FailedMount | MountVolume.SetUp failed for volume "srv-cert" : failed to sync secret cache: timed out waiting for the condition |
| openshift-machine-config-operator | kubelet | machine-config-daemon-g24jx | FailedMount | MountVolume.SetUp failed for volume "proxy-tls" : failed to sync secret cache: timed out waiting for the condition |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/revision-status-1 -n openshift-kube-apiserver because it was missing |
| openshift-insights | kubelet | insights-operator-847896d87d-xsmtv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aec165a1c80946b96c6ba401ff249e31554a3cce8ab2f996b9f6618dbe9bc84a" already present on machine |
| openshift-image-registry | kubelet | cluster-image-registry-operator-7f8cfbbb59-nfqbn | FailedMount | MountVolume.SetUp failed for volume "image-registry-operator-tls" : failed to sync secret cache: timed out waiting for the condition |
| openshift-cluster-storage-operator | deployment-controller | cluster-storage-operator | ScalingReplicaSet | Scaled up replica set cluster-storage-operator-578bfd8476 to 1 |
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: aggregator-client-ca,check-endpoints-kubeconfig,client-ca,control-plane-node-kubeconfig, secrets: aggregator-client,bound-service-account-signing-key,check-endpoints-client-cert-key,control-plane-node-admin-client-cert-key,external-loadbalancer-serving-certkey,internal-loadbalancer-serving-certkey,kubelet-client,localhost-serving-cert-certkey,node-kubeconfigs,service-network-serving-certkey, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
| openshift-ingress-operator | kubelet | ingress-operator-6dbf96bf9c-6rdr8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6275a171f6d5a523627963860415ed0e43f1728f2dd897c49412600bf64bc9c3" |
| openshift-operator-lifecycle-manager | kubelet | olm-operator-79f8b8bdc4-dm9dl | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1d71ba084c63e2d6b3140b9cbada2b50bb6589a39a526dedb466945d284c73e" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-scheduler-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: missing notAfter |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator because it was missing |
| openshift-operator-lifecycle-manager | kubelet | package-server-manager-55cc8b7f-lssld | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1d71ba084c63e2d6b3140b9cbada2b50bb6589a39a526dedb466945d284c73e" |
| openshift-cluster-samples-operator | kubelet | cluster-samples-operator-c868985c6-69zt7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e5cf6294e213c4dfbd16d7f5e0bd3071703a0fde2342eb09b3957eb6a2b6b3d" already present on machine |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | CABundleUpdateRequired | "node-system-admin-ca" in "openshift-kube-apiserver-operator" requires a new cert |
| openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-79c4cfd957-5vbb6 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5097e405f3dc5e0bd7e6072d3d93cbfcd45d3d74771003c48e689b2f8c4d3850" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-signer -n openshift-kube-apiserver-operator because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-to-kubelet-client-ca -n openshift-kube-apiserver-operator because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| openshift-machine-api | default-scheduler | machine-api-operator-df4db9c9b-6rkdn | FailedScheduling | 0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "aggregator-client" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-aggregator-client-ca -n openshift-config-managed because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "localhost-serving-cert-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/localhost-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "service-network-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter |
| openshift-machine-api | replicaset-controller | machine-api-operator-df4db9c9b | SuccessfulCreate | Created pod: machine-api-operator-df4db9c9b-6rkdn |
| openshift-machine-api | deployment-controller | machine-api-operator | ScalingReplicaSet | Scaled up replica set machine-api-operator-df4db9c9b to 1 |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/service-network-serving-ca -n openshift-kube-apiserver-operator because it was missing |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: endpoints for service/api in \"openshift-oauth-apiserver\" have no addresses with port name \"https\"\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kubelet-client" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter |
| openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "external-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter |
| openshift-cluster-samples-operator | kubelet | cluster-samples-operator-c868985c6-69zt7 | Started | Started container cluster-samples-operator-watch |
| openshift-cluster-node-tuning-operator | kubelet | cluster-node-tuning-operator-79c4cfd957-5vbb6 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5097e405f3dc5e0bd7e6072d3d93cbfcd45d3d74771003c48e689b2f8c4d3850" in 1.90285196s |
| openshift-ingress-operator | kubelet | ingress-operator-6dbf96bf9c-6rdr8 | Created | Created container kube-rbac-proxy |
| openshift-ingress-operator | kubelet | ingress-operator-6dbf96bf9c-6rdr8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| openshift-ingress-operator | kubelet | ingress-operator-6dbf96bf9c-6rdr8 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6275a171f6d5a523627963860415ed0e43f1728f2dd897c49412600bf64bc9c3" in 1.961506439s |
(x2) | openshift-apiserver-operator | openshift-apiserver-operator-config-observer-configobserver | openshift-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]interface{}{ + "projectConfig": map[string]interface{}{"projectRequestMessage": string("")}, "routingConfig": map[string]interface{}{"subdomain": string("apps.test-cluster.redhat.com")}, "servingInfo": map[string]interface{}{"cipherSuites": []interface{}{string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), ...}, "minTLSVersion": string("VersionTLS12")}, "storageConfig": map[string]interface{}{"urls": []interface{}{string("https://192.168.126.10:2379")}} } |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/aggregator-client-ca -n openshift-kube-controller-manager because it was missing |
| openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-675fc6b586-hhb4g pod)",Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/template.openshift.io/v1: 401" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/template.openshift.io/v1: 401" |
| openshift-image-registry | kubelet | cluster-image-registry-operator-7f8cfbbb59-nfqbn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e2eeaa28dd8f578270448360ada5a2c2f74e353d658a9abfbe2d9bb930c5f229" |
| openshift-image-registry | multus | cluster-image-registry-operator-7f8cfbbb59-nfqbn | AddedInterface | Add eth0 [10.128.0.47/23] from ovn-kubernetes |
| openshift-monitoring | kubelet | prometheus-operator-admission-webhook-849b6cd6bf-gjxgr | Created | Created container prometheus-operator-admission-webhook |
| openshift-monitoring | kubelet | prometheus-operator-admission-webhook-849b6cd6bf-gjxgr | Started | Started container prometheus-operator-admission-webhook |
| openshift-operator-lifecycle-manager | kubelet | package-server-manager-55cc8b7f-lssld | Started | Started container package-server-manager |
| openshift-operator-lifecycle-manager | kubelet | package-server-manager-55cc8b7f-lssld | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1d71ba084c63e2d6b3140b9cbada2b50bb6589a39a526dedb466945d284c73e" in 1.963703167s |
| openshift-cloud-controller-manager-operator | master1_8f24e85a-ce5a-4d4f-834d-7174de32990d | cluster-cloud-controller-manager-leader | LeaderElection | master1_8f24e85a-ce5a-4d4f-834d-7174de32990d became leader |
| openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-54fdfd4884-5nzhm | Created | Created container cluster-cloud-controller-manager |
| openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-54fdfd4884-5nzhm | Started | Started container cluster-cloud-controller-manager |
| openshift-insights | kubelet | insights-operator-847896d87d-xsmtv | Started | Started container insights-operator |
| openshift-insights | kubelet | insights-operator-847896d87d-xsmtv | Created | Created container insights-operator |
| openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-54fdfd4884-5nzhm | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13397fef9671257021455712bf8242685325c97dbc6700c988bd6ab5e68ff57e" already present on machine |
| openshift-cloud-controller-manager-operator | kubelet | cluster-cloud-controller-manager-operator-54fdfd4884-5nzhm | Created | Created container config-sync-controllers |
| openshift-operator-lifecycle-manager | multus | catalog-operator-76c4c9dd94-xj8sz | AddedInterface | Add eth0 [10.128.0.46/23] from ovn-kubernetes |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-79c4cfd957-5vbb6 |
Created |
Created container cluster-node-tuning-operator | |
openshift-cluster-node-tuning-operator |
kubelet |
cluster-node-tuning-operator-79c4cfd957-5vbb6 |
Started |
Started container cluster-node-tuning-operator | |
(x3) | openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/kube-control-plane-signer-ca -n openshift-kube-apiserver-operator: configmaps "kube-control-plane-signer-ca" already exists |
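A ConfigMapCreateFailed event on an object that already exists is usually benign during bring-up: two controller sync loops race to create the same resource and the loser logs the 409. A sketch of the tolerate-409 pattern these events imply, with the names taken from the event and the client setup assumed:

```python
# Sketch: create a ConfigMap but treat 409 Conflict ("already exists")
# as success, the pattern the operator events above imply. Names are
# from the event; kubeconfig access is an assumption.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

cm = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="kube-control-plane-signer-ca"),
    data={},
)
try:
    v1.create_namespaced_config_map("openshift-kube-apiserver-operator", cm)
except ApiException as e:
    if e.status != 409:  # anything other than "already exists" is a real error
        raise
```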
openshift-machine-config-operator |
kubelet |
machine-config-server-nggfk |
Started |
Started container machine-config-server | |
openshift-machine-config-operator |
kubelet |
machine-config-server-nggfk |
Created |
Created container machine-config-server | |
openshift-operator-lifecycle-manager |
kubelet |
package-server-manager-55cc8b7f-lssld |
Created |
Created container package-server-manager | |
openshift-machine-config-operator |
kubelet |
machine-config-server-nggfk |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fafaae3445cd29a8ba685901a338b8539877d15f149466cc7b4e42fdca60c40" already present on machine | |
(x2) | openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "APIServicesAvailable: PreconditionNotReady\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-c868985c6-69zt7 |
Created |
Created container cluster-samples-operator | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-c868985c6-69zt7 |
Started |
Started container cluster-samples-operator | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-c868985c6-69zt7 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e5cf6294e213c4dfbd16d7f5e0bd3071703a0fde2342eb09b3957eb6a2b6b3d" already present on machine | |
openshift-cluster-samples-operator |
kubelet |
cluster-samples-operator-c868985c6-69zt7 |
Created |
Created container cluster-samples-operator-watch | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-76c4c9dd94-xj8sz |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1d71ba084c63e2d6b3140b9cbada2b50bb6589a39a526dedb466945d284c73e" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/loadbalancer-serving-ca -n openshift-kube-apiserver-operator: configmaps "loadbalancer-serving-ca" already exists | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-signer -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
CABundleUpdateRequired |
"localhost-recovery-serving-ca" in "openshift-kube-apiserver-operator" requires a new cert | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-boundsatokensignercontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/bound-service-account-signing-key -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca -n openshift-kube-apiserver because it was missing | |
openshift-ingress-operator |
kubelet |
ingress-operator-6dbf96bf9c-6rdr8 |
Started |
Started container kube-rbac-proxy | |
openshift-cluster-samples-operator |
file-change-watchdog |
cluster-samples-operator |
FileChangeWatchdogStarted |
Started watching files for process cluster-samples-operator[7] | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-66559c5fb7-jp867 |
Created |
Created container csi-snapshot-controller-operator | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-controller-operator-66559c5fb7-jp867 |
Started |
Started container csi-snapshot-controller-operator | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotwebhookcontroller-deployment-controller--csisnapshotwebhookcontroller |
csi-snapshot-controller-operator |
DeploymentCreated |
Created Deployment.apps/csi-snapshot-webhook -n openshift-cluster-storage-operator because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotcontroller-deployment-controller--csisnapshotcontroller |
csi-snapshot-controller-operator |
DeploymentCreated |
Created Deployment.apps/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded changed from Unknown to False ("All is well") | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-staticresourcecontroller |
csi-snapshot-controller-operator |
CustomResourceDefinitionCreated |
Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"check-endpoints-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-staticresourcecontroller |
csi-snapshot-controller-operator |
ServiceAccountCreated |
Created ServiceAccount/csi-snapshot-controller -n openshift-cluster-storage-operator because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator |
csi-snapshot-controller-operator |
ValidatingWebhookConfigurationCreated |
Created ValidatingWebhookConfiguration.admissionregistration.k8s.io/snapshot.storage.k8s.io because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"operator.openshift.io" "csisnapshotcontrollers" "" "cluster"}] | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-loggingsyncer |
csi-snapshot-controller-operator |
FastControllerResync |
Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator |
csi-snapshot-controller-operator-lock |
LeaderElection |
csi-snapshot-controller-operator-66559c5fb7-jp867_723bad50-aac9-49a6-bd4a-6471fd08da54 became leader | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator |
csi-snapshot-controller-operator-lock |
LeaderElection |
csi-snapshot-controller-operator-66559c5fb7-jp867_723bad50-aac9-49a6-bd4a-6471fd08da54 became leader | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller-cert-rotation-controller-certrotationcontroller |
kube-apiserver-operator |
RotationError |
configmaps "kube-control-plane-signer-ca" already exists | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:openshift-kube-apiserver-installer because it was missing | |
openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-controller-6587df558d |
SuccessfulCreate |
Created pod: csi-snapshot-controller-6587df558d-whn9q | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-controller-6587df558d-whn9q |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
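The untolerated taint in this single-node cluster is `node.kubernetes.io/not-ready`, which the kubelet clears once the node reports Ready. A sketch for confirming that from the node object itself (node name from the events; kubeconfig access assumed):

```python
# Sketch: show why the scheduler reports an untolerated taint, by
# reading the node's taints and its Ready condition directly.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

node = v1.read_node("master1")
for taint in node.spec.taints or []:
    print(f"taint: {taint.key}={taint.value}:{taint.effect}")
for cond in node.status.conditions or []:
    if cond.type == "Ready":
        print(f"Ready={cond.status}: {cond.message}")
```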
openshift-kube-apiserver-operator |
kube-apiserver-operator-backingresourcecontroller-backingresourcecontroller |
kube-apiserver-operator |
ServiceAccountCreated |
Created ServiceAccount/installer-sa -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-audit-policy-controller-auditpolicycontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies -n openshift-kube-apiserver because it was missing | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-76c4c9dd94-xj8sz |
Created |
Created container catalog-operator | |
openshift-operator-lifecycle-manager |
kubelet |
catalog-operator-76c4c9dd94-xj8sz |
Started |
Started container catalog-operator | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-79f8b8bdc4-dm9dl |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1d71ba084c63e2d6b3140b9cbada2b50bb6589a39a526dedb466945d284c73e" in 3.630292267s | |
openshift-cluster-storage-operator |
deployment-controller |
csi-snapshot-webhook |
ScalingReplicaSet |
Scaled up replica set csi-snapshot-webhook-8688766d4c to 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)" to "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" | |
openshift-marketplace |
default-scheduler |
redhat-marketplace-m5lxp |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-cluster-storage-operator |
replicaset-controller |
csi-snapshot-webhook-8688766d4c |
SuccessfulCreate |
Created pod: csi-snapshot-webhook-8688766d4c-7cvdl | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-cloud-controller-manager-operator |
kubelet |
cluster-cloud-controller-manager-operator-54fdfd4884-5nzhm |
Started |
Started container config-sync-controllers | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-staticresourcecontroller |
csi-snapshot-controller-operator |
CustomResourceDefinitionCreated |
Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotstaticresourcecontroller-staticresourcecontroller |
csi-snapshot-controller-operator |
ServiceCreated |
Created Service/csi-snapshot-webhook -n openshift-cluster-storage-operator because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from Unknown to True ("CSISnapshotControllerProgressing: Waiting for Deployment to act on changes"),Available changed from Unknown to False ("CSISnapshotControllerAvailable: Waiting for Deployment") | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotgueststaticresourcecontroller-staticresourcecontroller |
csi-snapshot-controller-operator |
CustomResourceDefinitionCreated |
Created CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io because it was missing | |
openshift-marketplace |
default-scheduler |
redhat-operators-hsggr |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-cloud-controller-manager-operator |
master1_b1299d6a-40f8-4a46-a1ed-b8a834769676 |
cluster-cloud-config-sync-leader |
LeaderElection |
master1_b1299d6a-40f8-4a46-a1ed-b8a834769676 became leader | |
openshift-cluster-storage-operator |
deployment-controller |
csi-snapshot-controller |
ScalingReplicaSet |
Scaled up replica set csi-snapshot-controller-6587df558d to 1 | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-webhook-8688766d4c-7cvdl |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/template.openshift.io/v1: 401" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: 
apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserverworkloadcontroller |
openshift-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-apiserver: caused by changes in data.config.yaml | |

openshift-cluster-storage-operator |
csi-snapshot-controller-operator |
csi-snapshot-controller-operator |
ValidatingWebhookConfigurationUpdated |
Updated ValidatingWebhookConfiguration.admissionregistration.k8s.io/snapshot.storage.k8s.io because it changed | |
openshift-marketplace |
default-scheduler |
community-operators-svbqj |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-marketplace |
default-scheduler |
certified-operators-4sdw2 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-79f8b8bdc4-dm9dl |
Created |
Created container olm-operator | |
openshift-operator-lifecycle-manager |
kubelet |
olm-operator-79f8b8bdc4-dm9dl |
Started |
Started container olm-operator | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/node-system-admin-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"node-system-admin-client" in "openshift-kube-apiserver-operator" requires a new target cert/key pair: missing notAfter | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/template.openshift.io/v1: 401" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
RevisionCreateFailed |
Failed to create revision 1: configmaps "etcd-pod" not found | |
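RevisionCreateFailed here means the etcd revision controller tried to snapshot its inputs before they all existed. A sketch for checking which revision inputs are present yet, assuming the inputs live in the openshift-etcd namespace (the ConfigMap name comes from the event; any additions to the list are illustrative):

```python
# Sketch: the revision controller needs the "etcd-pod" ConfigMap before
# it can cut revision 1; report whether each expected input exists.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

for name in ["etcd-pod"]:  # revision input named in the event
    try:
        v1.read_namespaced_config_map(name, "openshift-etcd")
        print(f"{name}: present")
    except ApiException as e:
        if e.status == 404:
            print(f"{name}: not found yet")
        else:
            raise
```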
openshift-monitoring |
replicaset-controller |
prometheus-operator-7985bb87b6 |
SuccessfulCreate |
Created pod: prometheus-operator-7985bb87b6-tw658 | |
openshift-monitoring |
default-scheduler |
prometheus-operator-7985bb87b6-tw658 |
FailedScheduling |
0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling. | |
openshift-monitoring |
deployment-controller |
prometheus-operator |
ScalingReplicaSet |
Scaled up replica set prometheus-operator-7985bb87b6 to 1 | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to act on changes" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca -n openshift-config-managed because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
ServiceCreated |
Created Service/apiserver -n openshift-kube-apiserver because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-csisnapshotwebhookcontroller-deployment-controller--csisnapshotwebhookcontroller |
csi-snapshot-controller-operator |
DeploymentUpdated |
Updated Deployment.apps/csi-snapshot-webhook -n openshift-cluster-storage-operator because it changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kube-scheduler-client-cert-key -n openshift-config-managed because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/external-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/service-network-serving-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller |
openshift-kube-scheduler-operator |
SecretCreated |
Created Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler because it was missing | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotWebhookControllerDegraded: Operation cannot be fulfilled on csisnapshotcontrollers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
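The "object has been modified; please apply your changes to the latest version and try again" text in this Degraded message is the standard optimistic-concurrency conflict; controllers resolve it by re-reading and retrying. A sketch of that retry-on-conflict pattern against the CRD named in the message (group/version/plural/name are taken from the event text; the field changed below is a placeholder):

```python
# Sketch: re-read, re-apply, and retry on 409 Conflict, the loop the
# Degraded message above implies. The spec change is a placeholder.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
api = client.CustomObjectsApi()

for attempt in range(5):
    try:
        obj = api.get_cluster_custom_object(
            "operator.openshift.io", "v1", "csisnapshotcontrollers", "cluster"
        )
        obj.setdefault("spec", {})["logLevel"] = "Normal"  # placeholder change
        api.replace_cluster_custom_object(
            "operator.openshift.io", "v1", "csisnapshotcontrollers", "cluster", obj
        )
        break  # write accepted
    except ApiException as e:
        if e.status != 409:
            raise  # only conflicts are retried
```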
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods" to "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CSISnapshotControllerAvailable: Waiting for Deployment" to "CSISnapshotControllerAvailable: Waiting for Deployment\nCSISnapshotWebhookControllerAvailable: Waiting for Deployment" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotWebhookControllerDegraded: Operation cannot be fulfilled on csisnapshotcontrollers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "All is well" | |
openshift-cluster-storage-operator |
default-scheduler |
cluster-storage-operator-578bfd8476-l4s7l |
Scheduled |
Successfully assigned openshift-cluster-storage-operator/cluster-storage-operator-578bfd8476-l4s7l to master1 | |
openshift-marketplace |
default-scheduler |
community-operators-svbqj |
Scheduled |
Successfully assigned openshift-marketplace/community-operators-svbqj to master1 | |
openshift-machine-api |
default-scheduler |
machine-api-operator-df4db9c9b-6rkdn |
Scheduled |
Successfully assigned openshift-machine-api/machine-api-operator-df4db9c9b-6rkdn to master1 | |
(x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
RequiredInstallerResourcesMissing |
configmaps: client-ca, secrets: kube-controller-manager-client-cert-key |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/kubelet-client -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-serving-cert-certkey -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: kube-controller-manager-client-cert-key]" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-resource-sync-controller-resourcesynccontroller |
openshift-kube-scheduler-operator |
SecretCreateFailed |
Failed to create Secret/kube-scheduler-client-cert-key -n openshift-kube-scheduler: secrets "kube-scheduler-client-cert-key" already exists | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nNodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)" to "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key" to "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nResourceSyncControllerDegraded: secrets \"kube-scheduler-client-cert-key\" already exists" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key\nResourceSyncControllerDegraded: secrets \"kube-scheduler-client-cert-key\" already exists" to "InstallerControllerDegraded: missing required resources: secrets: kube-scheduler-client-cert-key" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/aggregator-client -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists" to "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/localhost-recovery-serving-ca -n openshift-kube-apiserver-operator because it was missing | |
openshift-marketplace |
default-scheduler |
redhat-marketplace-m5lxp |
Scheduled |
Successfully assigned openshift-marketplace/redhat-marketplace-m5lxp to master1 | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-controller-6587df558d-whn9q |
Scheduled |
Successfully assigned openshift-cluster-storage-operator/csi-snapshot-controller-6587df558d-whn9q to master1 | |
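This Scheduled event closes the loop on the earlier FailedScheduling records for the same pod: once the not-ready taint cleared, placement succeeded. A sketch for pulling one pod's event history, the same records shown in this table, using a standard field selector on core/v1 Events (pod name and namespace from the rows above):

```python
# Sketch: list the FailedScheduling/Scheduled history for one pod via
# the events API, filtered with a field selector on involvedObject.name.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

evts = v1.list_namespaced_event(
    "openshift-cluster-storage-operator",
    field_selector="involvedObject.name=csi-snapshot-controller-6587df558d-whn9q",
)
for e in evts.items:
    print(e.last_timestamp, e.reason, e.message)
```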
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
TargetUpdateRequired |
"localhost-recovery-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key]\nNodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: aggregator-client-ca,client-ca, secrets: kube-controller-manager-client-cert-key]" | |
default |
kubelet |
master1 |
NodeReady |
Node master1 status is now: NodeReady | |
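NodeReady is the pivot point of this whole section: everything pending on the not-ready taint begins scheduling after this record. A sketch for watching for that transition programmatically, using the standard watch helper from the kubernetes Python client (the timeout is arbitrary):

```python
# Sketch: watch node updates until master1 reports Ready, mirroring the
# NodeReady event recorded above.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

w = watch.Watch()
for ev in w.stream(v1.list_node, timeout_seconds=300):
    node = ev["object"]
    if node.metadata.name != "master1":
        continue
    ready = next((c for c in node.status.conditions or [] if c.type == "Ready"), None)
    if ready and ready.status == "True":
        print("master1 is Ready")
        w.stop()
```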
openshift-marketplace |
multus |
redhat-operators-hsggr |
AddedInterface |
Add eth0 [10.128.0.48/23] from ovn-kubernetes | |
openshift-monitoring |
default-scheduler |
prometheus-operator-7985bb87b6-tw658 |
Scheduled |
Successfully assigned openshift-monitoring/prometheus-operator-7985bb87b6-tw658 to master1 | |
openshift-marketplace |
default-scheduler |
certified-operators-4sdw2 |
Scheduled |
Successfully assigned openshift-marketplace/certified-operators-4sdw2 to master1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs -n openshift-config-managed because it was missing | |
openshift-cluster-storage-operator |
default-scheduler |
csi-snapshot-webhook-8688766d4c-7cvdl |
Scheduled |
Successfully assigned openshift-cluster-storage-operator/csi-snapshot-webhook-8688766d4c-7cvdl to master1 | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/template.openshift.io/v1: 401" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/image.openshift.io/v1: bad status from 
https://10.128.0.25:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/template.openshift.io/v1: 401" | |
openshift-marketplace |
default-scheduler |
redhat-operators-hsggr |
Scheduled |
Successfully assigned openshift-marketplace/redhat-operators-hsggr to master1 | |
openshift-cluster-storage-operator |
multus |
cluster-storage-operator-578bfd8476-l4s7l |
AddedInterface |
Add eth0 [10.128.0.56/23] from ovn-kubernetes | |
openshift-machine-api |
kubelet |
machine-api-operator-df4db9c9b-6rkdn |
Started |
Started container kube-rbac-proxy | |
openshift-cluster-storage-operator |
multus |
csi-snapshot-webhook-8688766d4c-7cvdl |
AddedInterface |
Add eth0 [10.128.0.55/23] from ovn-kubernetes | |
openshift-monitoring |
kubelet |
prometheus-operator-7985bb87b6-tw658 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:503846d640ded8b0deedc7c69647320065055d3d2a423993259692362c5d5b86" | |
openshift-cluster-storage-operator |
kubelet |
csi-snapshot-webhook-8688766d4c-7cvdl |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f876250993619037cbf206da00d0419c545269799f3b29848a9d1bc0e88aad30" | |
openshift-machine-api |
multus |
machine-api-operator-df4db9c9b-6rkdn |
AddedInterface |
Add eth0 [10.128.0.50/23] from ovn-kubernetes | |
openshift-machine-api |
kubelet |
machine-api-operator-df4db9c9b-6rkdn |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine | |
openshift-apiserver |
deployment-controller |
apiserver |
ScalingReplicaSet |
Scaled down replica set apiserver-675fc6b586 to 0 from 1 | |
openshift-apiserver |
default-scheduler |
apiserver-6c9d449c6-bt726 |
FailedScheduling |
0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. | |
openshift-image-registry |
image-registry-operator |
openshift-master-controllers |
LeaderElection |
cluster-image-registry-operator-7f8cfbbb59-nfqbn_5ccb9211-54ae-4fae-b3df-bf1bb93038f9 became leader | |
openshift-image-registry |
image-registry-operator |
openshift-master-controllers |
LeaderElection |
cluster-image-registry-operator-7f8cfbbb59-nfqbn_5ccb9211-54ae-4fae-b3df-bf1bb93038f9 became leader | |
openshift-image-registry |
image-registry-operator-loggingsyncer |
cluster-image-registry-operator |
FastControllerResync |
Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling | |
openshift-machine-api |
kubelet |
machine-api-operator-df4db9c9b-6rkdn |
Created |
Created container kube-rbac-proxy | |
openshift-image-registry |
kubelet |
cluster-image-registry-operator-7f8cfbbb59-nfqbn |
Started |
Started container cluster-image-registry-operator | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeTargetRevisionChanged |
Updating node "master1" from revision 0 to 3 because node master1 static pod not found | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources |
kube-apiserver-operator |
ClusterRoleBindingCreated |
Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-cert-rotation-controller |
kube-apiserver-operator |
SecretCreated |
Created Secret/check-endpoints-client-cert-key -n openshift-kube-apiserver because it was missing | |
openshift-monitoring |
multus |
prometheus-operator-7985bb87b6-tw658 |
AddedInterface |
Add eth0 [10.128.0.52/23] from ovn-kubernetes | |
openshift-apiserver-operator |
openshift-apiserver-operator-openshiftapiserverworkloadcontroller |
openshift-apiserver-operator |
DeploymentUpdated |
Updated Deployment.apps/apiserver -n openshift-apiserver because it changed | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Progressing changed from False to True ("OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3.") | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-675fc6b586-hhb4g pod)" to "All is well",Progressing message changed from "OperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3.",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/template.openshift.io/v1: 401" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from 
https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/template.openshift.io/v1: 401" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/template.openshift.io/v1: 401" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" |
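The transition recorded above is the kube-apiserver aggregator first receiving 401s from the openshift-apiserver pod at 10.128.0.25, then (once that pod goes away) finding no ready endpoints behind service/api at all. A minimal sketch of the same availability check from outside the operator, using the Python kubernetes client (the kubeconfig context and the script itself are assumptions, not part of the log):

```python
# Sketch: list APIServices and report any that are not Available,
# mirroring the APIServicesAvailable lines in the status message above.
# Assumes a kubeconfig (or in-cluster config) with read access to apiservices.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config()

api = client.ApiregistrationV1Api()
for svc in api.list_api_service().items:
    if not svc.spec.service:
        continue  # served locally by kube-apiserver, nothing to probe
    available = next(
        (c for c in (svc.status.conditions or []) if c.type == "Available"),
        None,
    )
    if available is None or available.status != "True":
        reason = available.reason if available else "NoCondition"
        print(f"{svc.metadata.name}: not available ({reason})")
```

Each `Available=False` condition printed here corresponds to one `APIServicesAvailable:` line in the operator's aggregated message.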
| Time | Namespace | Component | RelatedObject | Reason | Message |
| --- | --- | --- | --- | --- | --- |
| | openshift-marketplace | kubelet | certified-operators-4sdw2 | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.12" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-node-reader because it was missing |
| | openshift-apiserver | replicaset-controller | apiserver-6c9d449c6 | SuccessfulCreate | Created pod: apiserver-6c9d449c6-bt726 |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-crd-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints-node-reader because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ClusterRoleCreated | Created ClusterRole.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-marketplace | multus | certified-operators-4sdw2 | AddedInterface | Add eth0 [10.128.0.53/23] from ovn-kubernetes |
| | openshift-marketplace | multus | redhat-marketplace-m5lxp | AddedInterface | Add eth0 [10.128.0.49/23] from ovn-kubernetes |
| | openshift-apiserver | kubelet | apiserver-675fc6b586-hhb4g | Killing | Stopping container openshift-apiserver |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-578bfd8476-l4s7l | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1a9ffe4c3d12fd672271f098a10a111ab5b3d145b7e2da447ef1aaab5189c12" |
| | openshift-apiserver | kubelet | apiserver-675fc6b586-hhb4g | Killing | Stopping container openshift-apiserver-check-endpoints |
| | openshift-apiserver | replicaset-controller | apiserver-675fc6b586 | SuccessfulDelete | Deleted pod: apiserver-675fc6b586-hhb4g |
| | openshift-cluster-storage-operator | multus | csi-snapshot-controller-6587df558d-whn9q | AddedInterface | Add eth0 [10.128.0.51/23] from ovn-kubernetes |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6587df558d-whn9q | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:631012b7d9f911558fa49e34402be56a1587a09e58ad645ce2de37aaa20eb468" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.25:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.25:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"" to "APIServicesAvailable: PreconditionNotReady" |
| | openshift-marketplace | kubelet | redhat-operators-hsggr | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.12" |
| | openshift-machine-api | kubelet | machine-api-operator-df4db9c9b-6rkdn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:303fe68053354fb40b73196c2c950e5305cf4cd7b9109824b6aa33d3aeedb988" |
| | openshift-apiserver | deployment-controller | apiserver | ScalingReplicaSet | Scaled up replica set apiserver-6c9d449c6 to 1 from 0 |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-7f8cfbbb59-nfqbn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e2eeaa28dd8f578270448360ada5a2c2f74e353d658a9abfbe2d9bb930c5f229" in 5.761040923s |
| | openshift-image-registry | kubelet | cluster-image-registry-operator-7f8cfbbb59-nfqbn | Created | Created container cluster-image-registry-operator |
| | openshift-marketplace | kubelet | redhat-marketplace-m5lxp | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/node-system-admin-client -n openshift-kube-apiserver-operator because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/check-endpoints-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:kube-apiserver-check-endpoints -n kube-system because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/system:openshift:controller:check-endpoints -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/control-plane-node-kubeconfig -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | RoleBindingCreated | Created RoleBinding.rbac.authorization.k8s.io/authentication-reader-for-authenticated-users -n kube-system because it was missing |
| | openshift-marketplace | kubelet | community-operators-svbqj | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.12" |
| | openshift-marketplace | multus | community-operators-svbqj | AddedInterface | Add eth0 [10.128.0.54/23] from ovn-kubernetes |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-metrics-proxy-client-ca,etcd-metrics-proxy-serving-ca,etcd-peer-client-ca,etcd-scripts,etcd-serving-ca,restore-etcd-pod, secrets: etcd-all-certs, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0]" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0]" |
| | openshift-kube-scheduler | multus | installer-3-master1 | AddedInterface | Add eth0 [10.128.0.57/23] from ovn-kubernetes |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | CustomResourceDefinitionUpdated | Updated CustomResourceDefinition.apiextensions.k8s.io/apirequestcounts.apiserver.openshift.io because it changed |
| | openshift-kube-scheduler | kubelet | installer-3-master1 | Started | Started container installer |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-3-master1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-prioritylevel-storage-version-migration because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ServiceAccountCreated | Created ServiceAccount/localhost-recovery-client -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | ClusterRoleBindingCreated | Created ClusterRoleBinding.rbac.authorization.k8s.io/system:openshift:operator:kube-apiserver-recovery because it was missing |
| | openshift-kube-scheduler | kubelet | installer-3-master1 | Created | Created container installer |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | StorageVersionMigrationCreated | Created StorageVersionMigration.migration.k8s.io/flowcontrol-flowschema-storage-version-migration because it was missing |
| | openshift-kube-scheduler | kubelet | installer-3-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" already present on machine |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3.\nOperatorConfigProgressing: openshiftapiserveroperatorconfigs/instance: observed generation is 2, desired generation is 3." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 2, desired generation is 3." |
| (x6) | openshift-kube-apiserver-operator | kube-apiserver-operator-kubeapiserverstaticresources-kubeapiserverstaticresources | kube-apiserver-operator | PrometheusRuleCreated | Created PrometheusRule.monitoring.coreos.com/v1 because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well"),Available message changed from "APIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller | etcd-operator | ReportEtcdMembersErrorUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
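"Operation cannot be fulfilled ... the object has been modified" is the API server rejecting a write whose resourceVersion is stale; the controllers recover by re-reading the object and retrying, which is why these events are noisy but self-healing. A sketch of that read-modify-write retry loop with the Python client (the ConfigMap example is illustrative, not the operators' actual code):

```python
# Sketch: retry an update on 409 Conflict, the optimistic-concurrency
# failure behind the events above. Any namespaced object works the same way.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

def update_with_retry(namespace, name, mutate, attempts=5):
    for _ in range(attempts):
        cm = v1.read_namespaced_config_map(name, namespace)  # fresh resourceVersion
        mutate(cm)
        try:
            return v1.replace_namespaced_config_map(name, namespace, cm)
        except ApiException as e:
            if e.status != 409:  # 409 Conflict: someone else wrote first
                raise
    raise RuntimeError("gave up after repeated conflicts")
```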
| Time | Namespace | Component | RelatedObject | Reason | Message |
| --- | --- | --- | --- | --- | --- |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreateFailed | Failed to create revision 1: configmaps "kube-apiserver-pod" not found |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: aggregator-client-ca,client-ca, secrets: control-plane-node-admin-client-cert-key,internal-loadbalancer-serving-certkey,node-kubeconfigs, configmaps: bound-sa-token-signing-certs-0,config-0,etcd-serving-ca-0,kube-apiserver-audit-policies-0,kube-apiserver-cert-syncer-kubeconfig-0,kube-apiserver-pod-0,kubelet-serving-ca-0,sa-token-signing-certs-0, secrets: etcd-client-0,localhost-recovery-client-token-0,localhost-recovery-serving-certkey-0 |
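The installer controller refuses to roll out a static-pod revision until every resource it lists exists; the SecretCreated/ConfigMapCreated events elsewhere in this log are those gaps being filled. A sketch that checks which of the named resources are still absent (the lists below are abridged copies from the event above; 404s are expected until the operator creates them):

```python
# Sketch: probe a few of the RequiredInstallerResourcesMissing entries.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()
ns = "openshift-kube-apiserver"

configmaps = ["aggregator-client-ca", "client-ca", "kube-apiserver-pod-0"]
secrets = ["node-kubeconfigs", "etcd-client-0", "localhost-recovery-client-token-0"]

for kind, names, read in (
    ("configmap", configmaps, v1.read_namespaced_config_map),
    ("secret", secrets, v1.read_namespaced_secret),
):
    for name in names:
        try:
            read(name, ns)
            print(f"{kind}/{name}: present")
        except ApiException as e:
            if e.status == 404:
                print(f"{kind}/{name}: still missing")
            else:
                raise
```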
| Time | Namespace | Component | RelatedObject | Reason | Message |
| --- | --- | --- | --- | --- | --- |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-79c4cfd957-5vbb6_9d38800c-1bd1-4acb-84bf-8a61a87f09a2 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-79c4cfd957-5vbb6_9d38800c-1bd1-4acb-84bf-8a61a87f09a2 became leader |
| | openshift-cluster-node-tuning-operator | cluster-node-tuning-operator-79c4cfd957-5vbb6_9d38800c-1bd1-4acb-84bf-8a61a87f09a2 | node-tuning-operator-lock | LeaderElection | cluster-node-tuning-operator-79c4cfd957-5vbb6_9d38800c-1bd1-4acb-84bf-8a61a87f09a2 became leader |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6587df558d-whn9q | Created | Created container snapshot-controller |
| | openshift-operator-lifecycle-manager | default-scheduler | packageserver-5ffd8fd46d-h4lhn | Scheduled | Successfully assigned openshift-operator-lifecycle-manager/packageserver-5ffd8fd46d-h4lhn to master1 |
| | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | RequirementsUnknown | requirements not yet checked |
| | openshift-cluster-node-tuning-operator | daemonset-controller | tuned | SuccessfulCreate | Created pod: tuned-q94d7 |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-q94d7 | Started | Started container tuned |
| | openshift-cluster-storage-operator | snapshot-controller-leader/csi-snapshot-controller-6587df558d-whn9q | snapshot-controller-leader | LeaderElection | csi-snapshot-controller-6587df558d-whn9q became leader |
| | openshift-operator-lifecycle-manager | package-server-manager-55cc8b7f-lssld_9eb8d464-a4ed-498f-b3f0-cf2c90831c67 | packageserver-controller-lock | LeaderElection | package-server-manager-55cc8b7f-lssld_9eb8d464-a4ed-498f-b3f0-cf2c90831c67 became leader |
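Locks such as node-tuning-operator-lock and packageserver-controller-lock are standard client-go leader-election records; the "became leader" pairs appear twice because the record can be published on more than one object. A sketch for inspecting who currently holds a lock (the Lease-then-ConfigMap fallback is an assumption about how a given operator publishes its record):

```python
# Sketch: read a leader-election record, first as a coordination.k8s.io
# Lease, then as the legacy ConfigMap annotation.
import json
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
ns, lock = "openshift-operator-lifecycle-manager", "packageserver-controller-lock"

try:
    lease = client.CoordinationV1Api().read_namespaced_lease(lock, ns)
    print("lease holder:", lease.spec.holder_identity)
except ApiException:
    cm = client.CoreV1Api().read_namespaced_config_map(lock, ns)
    record = json.loads(
        cm.metadata.annotations["control-plane.alpha.kubernetes.io/leader"]
    )
    print("configmap holder:", record["holderIdentity"])
```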
| Time | Namespace | Component | RelatedObject | Reason | Message |
| --- | --- | --- | --- | --- | --- |
| | openshift-operator-lifecycle-manager | deployment-controller | packageserver | ScalingReplicaSet | Scaled up replica set packageserver-5ffd8fd46d to 1 |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6587df558d-whn9q | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:631012b7d9f911558fa49e34402be56a1587a09e58ad645ce2de37aaa20eb468" in 5.309132954s |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-q94d7 | Created | Created container tuned |
| | openshift-cluster-node-tuning-operator | kubelet | tuned-q94d7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5097e405f3dc5e0bd7e6072d3d93cbfcd45d3d74771003c48e689b2f8c4d3850" already present on machine |
| | openshift-cluster-node-tuning-operator | default-scheduler | tuned-q94d7 | Scheduled | Successfully assigned openshift-cluster-node-tuning-operator/tuned-q94d7 to master1 |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-controller-6587df558d-whn9q | Started | Started container snapshot-controller |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-env-var-controller | etcd-operator | EnvVarControllerUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-operator-lifecycle-manager | replicaset-controller | packageserver-5ffd8fd46d | SuccessfulCreate | Created pod: packageserver-5ffd8fd46d-h4lhn |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing message changed from "CSISnapshotControllerProgressing: Waiting for Deployment to deploy pods\nCSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods" to "CSISnapshotWebhookControllerProgressing: Waiting for Deployment to deploy pods",Available message changed from "CSISnapshotControllerAvailable: Waiting for Deployment\nCSISnapshotWebhookControllerAvailable: Waiting for Deployment" to "CSISnapshotWebhookControllerAvailable: Waiting for Deployment" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0]" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0]\nEnvVarControllerDegraded: no supported cipherSuites not found in observedConfig" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver ()" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-675fc6b586-hhb4g pod)" |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "operator" changed from "" to "4.12.2" |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-5ffd8fd46d-h4lhn | Created | Created container packageserver |
| (x11) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0 |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorVersionChanged | clusteroperator/csi-snapshot-controller version "csi-snapshot-controller" changed from "" to "4.12.2" |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-5ffd8fd46d-h4lhn | Started | Started container packageserver |
| | openshift-operator-lifecycle-manager | multus | packageserver-5ffd8fd46d-h4lhn | AddedInterface | Add eth0 [10.128.0.58/23] from ovn-kubernetes |
| | openshift-operator-lifecycle-manager | kubelet | packageserver-5ffd8fd46d-h4lhn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1d71ba084c63e2d6b3140b9cbada2b50bb6589a39a526dedb466945d284c73e" already present on machine |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: status.versions changed from [] to [{"operator" "4.12.2"} {"csi-snapshot-controller" "4.12.2"}] |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "kube-controller-manager-client-cert-key" in "openshift-config-managed" requires a new target cert/key pair: missing notAfter |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-config-managed because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller | kube-controller-manager-operator | SecretCreated | Created Secret/kube-controller-manager-client-cert-key -n openshift-kube-controller-manager because it was missing |
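TargetUpdateRequired fires because the rotation controller found no notAfter expiry for the target cert, so it mints a fresh cert/key pair; the SecretCreated events that follow are the new pair landing (and being copied by the resource-sync controller). A sketch that decodes one of the freshly created secrets and prints its validity window (requires the third-party cryptography package; the secret name and namespace are taken from the events above):

```python
# Sketch: print notBefore/notAfter for a rotated TLS client secret.
import base64
from cryptography import x509
from kubernetes import client, config

config.load_kube_config()
sec = client.CoreV1Api().read_namespaced_secret(
    "kube-controller-manager-client-cert-key", "openshift-config-managed"
)
pem = base64.b64decode(sec.data["tls.crt"])  # kubernetes.io/tls layout
cert = x509.load_pem_x509_certificate(pem)
print("notBefore:", cert.not_valid_before)
print("notAfter:", cert.not_valid_after)
```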
| Time | Namespace | Component | RelatedObject | Reason | Message |
| --- | --- | --- | --- | --- | --- |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/revision-status-2 -n openshift-etcd because it was missing |
| | openshift-machine-api | cluster-autoscaler-operator-56b65b888d-pgphn_678c2991-54a6-4dd0-84e4-68d36b310d89 | cluster-autoscaler-operator-leader | LeaderElection | cluster-autoscaler-operator-56b65b888d-pgphn_678c2991-54a6-4dd0-84e4-68d36b310d89 became leader |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "control-plane-node-admin-client-cert-key" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter |
| | openshift-monitoring | kubelet | prometheus-operator-7985bb87b6-tw658 | Created | Created container prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-7985bb87b6-tw658 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| | openshift-monitoring | kubelet | prometheus-operator-7985bb87b6-tw658 | Started | Started container prometheus-operator |
| | openshift-monitoring | kubelet | prometheus-operator-7985bb87b6-tw658 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:503846d640ded8b0deedc7c69647320065055d3d2a423993259692362c5d5b86" in 10.683444159s |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-8688766d4c-7cvdl | Started | Started container webhook |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-8688766d4c-7cvdl | Created | Created container webhook |
| | openshift-cluster-storage-operator | kubelet | csi-snapshot-webhook-8688766d4c-7cvdl | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f876250993619037cbf206da00d0419c545269799f3b29848a9d1bc0e88aad30" in 10.538377373s |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-0,etcd-metrics-proxy-client-ca-0,etcd-metrics-proxy-serving-ca-0,etcd-peer-client-ca-0,etcd-pod-0,etcd-serving-ca-0, secrets: etcd-all-certs-0]\nEnvVarControllerDegraded: no supported cipherSuites not found in observedConfig" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nEnvVarControllerDegraded: no supported cipherSuites not found in observedConfig",Progressing changed from False to True ("NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0" to "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 1" |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-578bfd8476-l4s7l | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a1a9ffe4c3d12fd672271f098a10a111ab5b3d145b7e2da447ef1aaab5189c12" in 10.494354494s |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/control-plane-node-admin-client-cert-key -n openshift-kube-apiserver because it was missing |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | TargetUpdateRequired | "internal-loadbalancer-serving-certkey" in "openshift-kube-apiserver" requires a new target cert/key pair: missing notAfter |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-578bfd8476-l4s7l | Created | Created container cluster-storage-operator |
| | openshift-cluster-storage-operator | kubelet | cluster-storage-operator-578bfd8476-l4s7l | Started | Started container cluster-storage-operator |
| | openshift-cluster-storage-operator | cluster-storage-operator-csidriverstarter | cluster-storage-operator | FastControllerResync | Controller "CSIDriverStarter" resync interval is set to 0s which might lead to client request throttling |
| | default | machineconfigcontroller-nodecontroller | master | AnnotationChange | Node master1 now has machineconfiguration.openshift.io/state=Done |
| | openshift-cluster-storage-operator | cluster-storage-operator-loggingsyncer | cluster-storage-operator | FastControllerResync | Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling |
| | openshift-cluster-storage-operator | cluster-storage-operator-snapshotcrdcontroller | cluster-storage-operator | FastControllerResync | Controller "SnapshotCRDController" resync interval is set to 0s which might lead to client request throttling |
| | openshift-cluster-storage-operator | cluster-storage-operator-defaultstorageclasscontroller | cluster-storage-operator | FastControllerResync | Controller "DefaultStorageClassController" resync interval is set to 0s which might lead to client request throttling |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-578bfd8476-l4s7l_1854ec9d-4b35-4d21-9e16-1d0b6f710685 became leader |
| | openshift-cluster-storage-operator | cluster-storage-operator | cluster-storage-operator-lock | LeaderElection | cluster-storage-operator-578bfd8476-l4s7l_1854ec9d-4b35-4d21-9e16-1d0b6f710685 became leader |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorVersionChanged | clusteroperator/storage version "operator" changed from "" to "4.12.2" |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"" "namespaces" "" "openshift-cluster-storage-operator"} {"" "namespaces" "" "openshift-cluster-csi-drivers"} {"operator.openshift.io" "storages" "" "cluster"} {"rbac.authorization.k8s.io" "clusterrolebindings" "" "cluster-storage-operator-role"} {"sharedresource.openshift.io" "sharedconfigmaps" "" ""} {"sharedresource.openshift.io" "sharedsecrets" "" ""}],status.versions changed from [] to [{"operator" "4.12.2"}] |
| | openshift-cluster-storage-operator | cluster-storage-operator-vsphereproblemdetectorstarter | cluster-storage-operator | FastControllerResync | Controller "VSphereProblemDetectorStarter" resync interval is set to 0s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-cert-rotation-controller | kube-apiserver-operator | SecretCreated | Created Secret/internal-loadbalancer-serving-certkey -n openshift-kube-apiserver because it was missing |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Degraded changed from Unknown to False ("All is well") |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Progressing changed from Unknown to False ("All is well"),Available changed from Unknown to True ("DefaultStorageClassControllerAvailable: No default StorageClass for this platform") |
| | default | machineconfigdaemon | master1 | Uncordon | Update completed for config rendered-master-4d84e62e5a4303ab1b6720a1e03cdc40 and node has been uncordoned |
| | openshift-cluster-storage-operator | csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller | csi-snapshot-controller-operator | OperatorStatusChanged | Status for clusteroperator/csi-snapshot-controller changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well") |
| | default | machineconfigdaemon | master1 | NodeDone | Setting node master1, currentConfig rendered-master-4d84e62e5a4303ab1b6720a1e03cdc40 to Done |
| | default | machineconfigdaemon | master1 | ConfigDriftMonitorStarted | Config Drift Monitor started, watching against rendered-master-4d84e62e5a4303ab1b6720a1e03cdc40 |
| | openshift-monitoring | kubelet | prometheus-operator-7985bb87b6-tw658 | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | prometheus-operator-7985bb87b6-tw658 | Created | Created container kube-rbac-proxy |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nEnvVarControllerDegraded: no supported cipherSuites not found in observedConfig" to "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nEnvVarControllerDegraded: no supported cipherSuites not found in observedConfig\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found" |
| (x3) | openshift-apiserver | kubelet | apiserver-675fc6b586-hhb4g | Unhealthy | Readiness probe failed: HTTP probe failed with statuscode: 500 |
| | openshift-cluster-storage-operator | cluster-storage-operator-status-controller-statussyncer_storage | cluster-storage-operator | OperatorStatusChanged | Status for clusteroperator/storage changed: Upgradeable changed from Unknown to True ("All is well") |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| (x3) | openshift-apiserver | kubelet | apiserver-675fc6b586-hhb4g | ProbeError | Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]informer-sync ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/max-in-flight-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/image.openshift.io-apiserver-caches ok [+]poststarthook/authorization.openshift.io-bootstrapclusterroles ok [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok [+]poststarthook/project.openshift.io-projectcache ok [+]poststarthook/project.openshift.io-projectauthorizationcache ok [+]poststarthook/openshift.io-startinformers ok [+]poststarthook/openshift.io-restmapperupdater ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [-]shutdown failed: reason withheld readyz check failed |
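The [+]/[-] block in this ProbeError is the verbose output of the apiserver's /readyz endpoint: every check passes except shutdown, meaning the kubelet is probing a pod that is already draining (consistent with the Killing events for apiserver-675fc6b586-hhb4g earlier). The same endpoint can be queried directly; a sketch (pod IP and port come from earlier events, the in-cluster service-account paths are an assumption about where this runs):

```python
# Sketch: query /readyz?verbose on the openshift-apiserver pod and print
# the same per-check report the kubelet saw.
import requests

token = open("/var/run/secrets/kubernetes.io/serviceaccount/token").read()
resp = requests.get(
    "https://10.128.0.25:8443/readyz?verbose",
    headers={"Authorization": f"Bearer {token}"},
    verify="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
)
print(resp.status_code)
print(resp.text)  # one [+]/[-] line per check, as in the event body
```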
| Time | Namespace | Component | RelatedObject | Reason | Message |
| --- | --- | --- | --- | --- | --- |
| | openshift-monitoring | daemonset-controller | node-exporter | SuccessfulCreate | Created pod: node-exporter-rqgmw |
| | openshift-monitoring | kubelet | node-exporter-rqgmw | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac" |
| | openshift-monitoring | deployment-controller | openshift-state-metrics | ScalingReplicaSet | Scaled up replica set openshift-state-metrics-5ff95d844f to 1 |
| | openshift-monitoring | replicaset-controller | kube-state-metrics-75455b796c | SuccessfulCreate | Created pod: kube-state-metrics-75455b796c-45w6j |
| | openshift-monitoring | default-scheduler | kube-state-metrics-75455b796c-45w6j | Scheduled | Successfully assigned openshift-monitoring/kube-state-metrics-75455b796c-45w6j to master1 |
| | openshift-monitoring | deployment-controller | kube-state-metrics | ScalingReplicaSet | Scaled up replica set kube-state-metrics-75455b796c to 1 |
| | openshift-monitoring | default-scheduler | node-exporter-rqgmw | Scheduled | Successfully assigned openshift-monitoring/node-exporter-rqgmw to master1 |
| | openshift-monitoring | multus | openshift-state-metrics-5ff95d844f-hdc8s | AddedInterface | Add eth0 [10.128.0.59/23] from ovn-kubernetes |
| | openshift-monitoring | default-scheduler | openshift-state-metrics-5ff95d844f-hdc8s | Scheduled | Successfully assigned openshift-monitoring/openshift-state-metrics-5ff95d844f-hdc8s to master1 |
| | openshift-monitoring | replicaset-controller | openshift-state-metrics-5ff95d844f | SuccessfulCreate | Created pod: openshift-state-metrics-5ff95d844f-hdc8s |
| | openshift-monitoring | kubelet | openshift-state-metrics-5ff95d844f-hdc8s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found" to "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-monitoring | kubelet | openshift-state-metrics-5ff95d844f-hdc8s | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | multus | kube-state-metrics-75455b796c-45w6j | AddedInterface | Add eth0 [10.128.0.60/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | openshift-state-metrics-5ff95d844f-hdc8s | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| | openshift-monitoring | kubelet | openshift-state-metrics-5ff95d844f-hdc8s | Created | Created container kube-rbac-proxy-main |
| | openshift-ingress-operator | certificate_controller | router-ca | CreatedWildcardCACert | Created a default wildcard CA certificate |
| | default | machineconfigoperator | machine-config | OperatorVersionChanged | clusteroperator/machine-config-operator version changed from [] to [{operator 4.12.2}] |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: getting cache client could not retrieve endpoints: node lister not synced\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nEnvVarControllerDegraded: no supported cipherSuites not found in observedConfig\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nEnvVarControllerDegraded: no supported cipherSuites not found in observedConfig\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found" |
| | openshift-monitoring | kubelet | kube-state-metrics-75455b796c-45w6j | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f2f70f1bd12128213b7b131782a4e76df20cbc224b13c69fff7ec71787b5499e" |
| | openshift-ingress | deployment-controller | router-default | ScalingReplicaSet | Scaled up replica set router-default-f8bd48fbf to 1 |
| | openshift-ingress | replicaset-controller | router-default-f8bd48fbf | SuccessfulCreate | Created pod: router-default-f8bd48fbf-2mnbd |
| | openshift-ingress | default-scheduler | router-default-f8bd48fbf-2mnbd | Scheduled | Successfully assigned openshift-ingress/router-default-f8bd48fbf-2mnbd to master1 |
| | openshift-ingress | kubelet | router-default-f8bd48fbf-2mnbd | FailedMount | MountVolume.SetUp failed for volume "default-certificate" : secret "router-certs-default" not found |
| | openshift-ingress-operator | ingress_controller | default | Admitted | ingresscontroller passed validation |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: secret \"v4-0-config-system-router-certs\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nEnvVarControllerDegraded: no supported cipherSuites not found in observedConfig\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nEnvVarControllerDegraded: no supported cipherSuites not found in observedConfig\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-config-managed | certificate_publisher_controller | router-certs | PublishedRouterCertificates | Published router certificates |
| | openshift-authentication-operator | cluster-authentication-operator-routercertsdomainvalidationcontroller | authentication-operator | SecretCreated | Created Secret/v4-0-config-system-router-certs -n openshift-authentication because it was missing |
| | openshift-ingress-operator | certificate_controller | default | CreatedDefaultCertificate | Created default wildcard certificate "router-certs-default" |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObserveRouterSecret | namedCertificates changed to []interface {}{map[string]interface {}{"certFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.test-cluster.redhat.com", "keyFile":"/var/config/system/secrets/v4-0-config-system-router-certs/apps.test-cluster.redhat.com", "names":[]interface {}{"*.apps.test-cluster.redhat.com"}}} |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
| | openshift-config-managed | certificate_publisher_controller | default-ingress-cert | PublishedRouterCA | Published "default-ingress-cert" in "openshift-config-managed" |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
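The observed minTLSVersion/cipherSuites pair matches OpenShift's default Intermediate TLS security profile, which the config observers project from the cluster-scoped APIServer resource into each operand's configuration. A sketch of how that source of truth would be set explicitly (cluster-admin required; patching apiservers/cluster is the documented knob, and the Python call shown is just one way to issue the merge patch):

```python
# Sketch: pin the cluster-wide TLS profile on the APIServer CR.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()
custom.patch_cluster_custom_object(
    group="config.openshift.io",
    version="v1",
    plural="apiservers",
    name="cluster",
    body={"spec": {"tlsSecurityProfile": {"type": "Intermediate",
                                          "intermediate": {}}}},
)
```

The ObservedConfigChanged events that follow show exactly this projection landing in the etcd and oauth-server observed configs.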
| Time | Namespace | Component | RelatedObject | Reason | Message |
| --- | --- | --- | --- | --- | --- |
| | openshift-authentication-operator | cluster-authentication-operator-config-observer-configobserver | authentication-operator | ObservedConfigChanged | Writing updated section ("oauthServer") of observed config: " map[string]interface{}{\n \t\"corsAllowedOrigins\": []interface{}{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n \t\"oauthConfig\": map[string]interface{}{\"assetPublicURL\": string(\"\"), \"loginURL\": string(\"https://api.test-cluster.redhat.com:6443\"), \"templates\": map[string]interface{}{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)}, \"tokenConfig\": map[string]interface{}{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)}},\n \t\"serverArguments\": map[string]interface{}{\"audit-log-format\": []interface{}{string(\"json\")}, \"audit-log-maxbackup\": []interface{}{string(\"10\")}, \"audit-log-maxsize\": []interface{}{string(\"100\")}, \"audit-log-path\": []interface{}{string(\"/var/log/oauth-server/audit.log\")}, ...},\n \t\"servingInfo\": map[string]interface{}{\n \t\t\"cipherSuites\": []interface{}{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...},\n \t\t\"minTLSVersion\": string(\"VersionTLS12\"),\n+ \t\t\"namedCertificates\": []interface{}{\n+ \t\t\tmap[string]interface{}{\n+ \t\t\t\t\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...),\n+ \t\t\t\t\"names\": []interface{}{string(\"*.apps.test-cluster.redhat.com\")},\n+ \t\t\t},\n+ \t\t},\n \t},\n \t\"volumesToMount\": map[string]interface{}{\"identityProviders\": string(\"{}\")},\n }\n" |
| (x3) | openshift-etcd-operator | openshift-cluster-etcd-operator-config-observer-configobserver | etcd-operator | ObservedConfigChanged | Writing updated observed config: map[string]interface{}{ + "controlPlane": map[string]interface{}{"replicas": float64(1)}, + "servingInfo": map[string]interface{}{ + "cipherSuites": []interface{}{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + }, } |
| | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged | Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready\nRouterCertsDegraded: neither the custom secret/v4-0-config-system-router-certs -n openshift-authentication or default secret/v4-0-config-system-custom-router-certs -n openshift-authentication could be retrieved: secret \"v4-0-config-system-router-certs\" not found" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/restore-etcd-pod -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-target-config-controller-targetconfigcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-scripts -n openshift-etcd because it was missing |
| | openshift-monitoring | default-scheduler | telemeter-client-8ffdbd7d6-h294p | Scheduled | Successfully assigned openshift-monitoring/telemeter-client-8ffdbd7d6-h294p to master1 |
| | openshift-monitoring | deployment-controller | telemeter-client | ScalingReplicaSet | Scaled up replica set telemeter-client-8ffdbd7d6 to 1 |
| (x8) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1 |
| (x13) | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionTriggered | new revision 2 triggered by "configmap \"etcd-pod\" not found" |
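Each revision the controller triggers is recorded in a revision-status-N ConfigMap in openshift-etcd (the ConfigMapUpdated event further down touches data.reason on revision-status-2). A sketch that lists those records to see how far the revision controller has progressed and why each revision fired (the revision/reason data keys are taken from the events here):

```python
# Sketch: enumerate revision-status ConfigMaps in openshift-etcd.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
for cm in v1.list_namespaced_config_map("openshift-etcd").items:
    if cm.metadata.name.startswith("revision-status-"):
        data = cm.data or {}
        print(cm.metadata.name,
              "revision:", data.get("revision"),
              "reason:", data.get("reason"))
```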
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-675fc6b586-hhb4g pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-6c9d449c6-bt726 pod)" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapUpdated | Updated ConfigMap/revision-status-2 -n openshift-etcd: cause by changes in data.reason |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-pod-2 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-2 -n openshift-etcd because it was missing |
| | openshift-monitoring | replicaset-controller | telemeter-client-8ffdbd7d6 | SuccessfulCreate | Created pod: telemeter-client-8ffdbd7d6-h294p |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-peer-client-ca-2 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-serving-ca-2 -n openshift-etcd because it was missing |
| | openshift-apiserver | default-scheduler | apiserver-6c9d449c6-bt726 | Scheduled | Successfully assigned openshift-apiserver/apiserver-6c9d449c6-bt726 to master1 |
| | openshift-authentication-operator | cluster-authentication-operator-trust-distribution-trustdistributioncontroller | authentication-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-config-managed because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nEnvVarControllerDegraded: no supported cipherSuites not found in observedConfig\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-2 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/revision-status-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-4 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-client-ca-2 -n openshift-etcd because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-2 -n openshift-etcd because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-scheduler because it was missing |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-scripts,restore-etcd-pod, configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-scheduler because it was missing |
| | openshift-marketplace | kubelet | certified-operators-4sdw2 | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.12" in 23.841932721s |
| | openshift-monitoring | multus | telemeter-client-8ffdbd7d6-h294p | AddedInterface | Add eth0 [10.128.0.62/23] from ovn-kubernetes |
| | openshift-ingress | kubelet | router-default-f8bd48fbf-2mnbd | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa1ff52055ededc0386ee6b334ffe0cd9252f5878fcccf1396aee30adf6de046" |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/revision-status-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 3 triggered by "configmap/serviceaccount-ca has changed" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/aggregator-client-ca -n openshift-kube-apiserver because it was missing |
| | openshift-machine-api | kubelet | machine-api-operator-df4db9c9b-6rkdn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:303fe68053354fb40b73196c2c950e5305cf4cd7b9109824b6aa33d3aeedb988" in 23.513729529s |
| | openshift-monitoring | kubelet | openshift-state-metrics-5ff95d844f-hdc8s | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06285dddb5ba9bce5a5ddd07f685f1bc766abed1e0c3890621df281ddc19ab1c" |
| | openshift-monitoring | kubelet | openshift-state-metrics-5ff95d844f-hdc8s | Started | Started container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | openshift-state-metrics-5ff95d844f-hdc8s | Created | Created container kube-rbac-proxy-self |
| | openshift-marketplace | kubelet | redhat-operators-hsggr | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.12" in 24.656687303s |
| | openshift-apiserver | multus | apiserver-6c9d449c6-bt726 | AddedInterface | Add eth0 [10.128.0.61/23] from ovn-kubernetes |
| | openshift-marketplace | kubelet | redhat-operators-hsggr | Started | Started container registry-server |
| | openshift-monitoring | kubelet | node-exporter-rqgmw | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac" in 9.475397634s |
| | openshift-monitoring | kubelet | kube-state-metrics-75455b796c-45w6j | Created | Created container kube-state-metrics |
| | openshift-marketplace | kubelet | certified-operators-4sdw2 | Created | Created container registry-server |
| | openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51c76ce72315ae658d91de6620d8dd4f798e6ea0c493e5d2899dd2c52fbcd931" |
| | openshift-marketplace | kubelet | certified-operators-4sdw2 | Started | Started container registry-server |
| | openshift-marketplace | kubelet | community-operators-svbqj | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.12" in 24.085699538s |
| | openshift-monitoring | kubelet | kube-state-metrics-75455b796c-45w6j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| | openshift-marketplace | kubelet | redhat-operators-hsggr | Created | Created container registry-server |
| | openshift-monitoring | kubelet | node-exporter-rqgmw | Started | Started container init-textfile |
| | openshift-marketplace | kubelet | redhat-marketplace-m5lxp | Started | Started container registry-server |
| | openshift-marketplace | kubelet | redhat-marketplace-m5lxp | Created | Created container registry-server |
| | openshift-monitoring | kubelet | node-exporter-rqgmw | Created | Created container init-textfile |
| | openshift-machine-api | kubelet | machine-api-operator-df4db9c9b-6rkdn | Started | Started container machine-api-operator |
| | openshift-machine-api | kubelet | machine-api-operator-df4db9c9b-6rkdn | Created | Created container machine-api-operator |
| | openshift-marketplace | kubelet | community-operators-svbqj | Created | Created container registry-server |
| | openshift-marketplace | kubelet | community-operators-svbqj | Started | Started container registry-server |
| | default | machineapioperator | machine-api | Status upgrade | Progressing towards operator: 4.12.2 |
| | openshift-marketplace | kubelet | redhat-marketplace-m5lxp | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" in 24.54558441s |
| | openshift-apiserver | kubelet | apiserver-6c9d449c6-bt726 | Started | Started container fix-audit-permissions |
| | openshift-monitoring | kubelet | kube-state-metrics-75455b796c-45w6j | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f2f70f1bd12128213b7b131782a4e76df20cbc224b13c69fff7ec71787b5499e" in 8.168054028s |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | kubelet | kube-state-metrics-75455b796c-45w6j | Started | Started container kube-state-metrics |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nScriptControllerDegraded: \"configmap/etcd-pod\": missing env var values\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| | openshift-apiserver | kubelet | apiserver-6c9d449c6-bt726 | Created | Created container fix-audit-permissions |
| | openshift-apiserver | kubelet | apiserver-6c9d449c6-bt726 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47bc752254f826905ac36cc2eb1819373a3045603e5dfa03c7f9e6d73c3fd9f9" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-rqgmw | Started | Started container kube-rbac-proxy |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 4" |
| (x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 4 triggered by "configmap/serviceaccount-ca has changed" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-monitoring | kubelet | kube-state-metrics-75455b796c-45w6j | Created | Created container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | kube-state-metrics-75455b796c-45w6j | Started | Started container kube-rbac-proxy-main |
| | openshift-monitoring | kubelet | kube-state-metrics-75455b796c-45w6j | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| | openshift-monitoring | kubelet | kube-state-metrics-75455b796c-45w6j | Created | Created container kube-rbac-proxy-self |
| | openshift-monitoring | kubelet | kube-state-metrics-75455b796c-45w6j | Started | Started container kube-rbac-proxy-self |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionCreate | Revision 3 created because configmap/serviceaccount-ca has changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 4" to "NodeControllerDegraded: All master nodes are ready" |
| | openshift-apiserver | kubelet | apiserver-6c9d449c6-bt726 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47bc752254f826905ac36cc2eb1819373a3045603e5dfa03c7f9e6d73c3fd9f9" already present on machine |
| | openshift-apiserver | kubelet | apiserver-6c9d449c6-bt726 | Created | Created container openshift-apiserver |
| | openshift-apiserver | kubelet | apiserver-6c9d449c6-bt726 | Started | Started container openshift-apiserver |
| | openshift-monitoring | kubelet | node-exporter-rqgmw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7ecf76246d81adfe3f52fdb54a7bddf6b892ea6900521d71553d16f2918a2cac" already present on machine |
| | openshift-monitoring | kubelet | node-exporter-rqgmw | Created | Created container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-rqgmw | Started | Started container node-exporter |
| | openshift-monitoring | kubelet | node-exporter-rqgmw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| | openshift-apiserver | kubelet | apiserver-6c9d449c6-bt726 | Created | Created container openshift-apiserver-check-endpoints |
| | openshift-apiserver | kubelet | apiserver-6c9d449c6-bt726 | Started | Started container openshift-apiserver-check-endpoints |
| | openshift-monitoring | kubelet | node-exporter-rqgmw | Created | Created container kube-rbac-proxy |
| | openshift-apiserver | kubelet | apiserver-6c9d449c6-bt726 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine |
| | openshift-apiserver | check-endpoint-checkendpointsstop | master1 | FastControllerResync | Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (3 containers are waiting in pending apiserver-6c9d449c6-bt726 pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-6c9d449c6-bt726 pod)" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver | check-endpoint-checkendpointstimetostart | master1 | FastControllerResync | Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/revision-status-2 -n openshift-kube-apiserver because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 4" |
| | openshift-kube-scheduler | kubelet | installer-3-master1 | Killing | Stopping container installer |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (2 containers are waiting in pending apiserver-6c9d449c6-bt726 pod)" to "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-6c9d449c6-bt726 pod)" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreate | Revision 2 created because configmap/serviceaccount-ca has changed |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-3 -n openshift-kube-controller-manager because it was missing |
| | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-master1 -n openshift-kube-scheduler because it was missing |
| | openshift-kube-scheduler | multus | installer-4-master1 | AddedInterface | Add eth0 [10.128.0.63/23] from ovn-kubernetes |
| | openshift-monitoring | kubelet | openshift-state-metrics-5ff95d844f-hdc8s | Created | Created container openshift-state-metrics |
| | openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | Started | Started container telemeter-client |
| | openshift-monitoring | kubelet | openshift-state-metrics-5ff95d844f-hdc8s | Started | Started container openshift-state-metrics |
| | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: kube-controller-manager-client-cert-key]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca",Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 3",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 2" to "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 3" |
| | openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:51c76ce72315ae658d91de6620d8dd4f798e6ea0c493e5d2899dd2c52fbcd931" in 5.941773294s |
| | openshift-ingress | kubelet | router-default-f8bd48fbf-2mnbd | Started | Started container router |
| | openshift-kube-scheduler | kubelet | installer-4-master1 | Created | Created container installer |
| | openshift-kube-scheduler | kubelet | installer-4-master1 | Started | Started container installer |
| | openshift-kube-scheduler | kubelet | installer-4-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" already present on machine |
| | openshift-ingress | kubelet | router-default-f8bd48fbf-2mnbd | Created | Created container router |
| | openshift-ingress | kubelet | router-default-f8bd48fbf-2mnbd | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fa1ff52055ededc0386ee6b334ffe0cd9252f5878fcccf1396aee30adf6de046" in 6.592371498s |
| | openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | Created | Created container telemeter-client |
| | openshift-monitoring | kubelet | openshift-state-metrics-5ff95d844f-hdc8s | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06285dddb5ba9bce5a5ddd07f685f1bc766abed1e0c3890621df281ddc19ab1c" in 6.621300274s |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: PreconditionNotReady" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/template.openshift.io/v1: 401" |
| | openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1705c63614eeb3feebc11b29e6a977c28bac2401092efae1d42b655259e2629" |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: endpoints for service/api in \"openshift-apiserver\" have no addresses with port name \"https\"\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/template.openshift.io/v1: 401" to "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/template.openshift.io/v1: 401" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nCertRotation_CheckEndpointsClient_Degraded: configmaps \"kube-control-plane-signer-ca\" already exists\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
| (x4) | openshift-etcd-operator | openshift-cluster-etcd-operator-bootstrap-teardown-controller-bootstrapteardowncontroller | etcd-operator | BootstrapTeardownErrorUpdatingStatus | Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
| | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerDeploymentDegraded: 1 of 1 requested instances are unavailable for apiserver.openshift-apiserver (container is not ready in apiserver-6c9d449c6-bt726 pod)" to "All is well",Available message changed from "APIServerDeploymentAvailable: no apiserver.openshift-apiserver pods available on any node.\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/template.openshift.io/v1: 401" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/template.openshift.io/v1: 401" |
| (x2) | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionTriggered | new revision 2 triggered by "configmap \"etcd-pod-1\" not found" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on etcds.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nRevisionControllerDegraded: configmaps \"etcd-pod\" not found\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| | openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1705c63614eeb3feebc11b29e6a977c28bac2401092efae1d42b655259e2629" in 4.414799987s |
| (x10) | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | RequiredInstallerResourcesMissing | configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1 |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionCreate | Revision 1 created because configmap "etcd-pod-1" not found |
| | openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | Created | Created container reload |
| | openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | Started | Started container kube-rbac-proxy |
| | openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | Started | Started container reload |
| | openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | Created | Created container kube-rbac-proxy |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "master1" from revision 0 to 2 because node master1 static pod not found |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 2" |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nInstallerControllerDegraded: missing required resources: [configmaps: etcd-endpoints-1,etcd-metrics-proxy-client-ca-1,etcd-metrics-proxy-serving-ca-1,etcd-peer-client-ca-1,etcd-pod-1,etcd-serving-ca-1, secrets: etcd-all-certs-1]\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" |
| | openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" |
| | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]",Progressing changed from Unknown to True ("NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 1"),Available changed from Unknown to False ("StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 1") |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-2-master1 -n openshift-etcd because it was missing |
| | openshift-etcd | kubelet | installer-2-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8090f9dd771f4f292e508b5ffca3aca3b4e6226aed25e131e49a9b6596b0b451" already present on machine |
| | openshift-etcd | multus | installer-2-master1 | AddedInterface | Add eth0 [10.128.0.64/23] from ovn-kubernetes |
| | openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries\nNodeControllerDegraded: All master nodes are ready\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nBootstrapTeardownDegraded: giving up getting a cached client after 3 tries" |
| | openshift-etcd | kubelet | installer-2-master1 | Started | Started container installer |
| | openshift-etcd | kubelet | installer-2-master1 | Created | Created container installer |
(x10) | openshift-ingress |
kubelet |
router-default-f8bd48fbf-2mnbd |
Unhealthy |
Startup probe failed: HTTP probe failed with statuscode: 500 |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]" to "NodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]" | |
openshift-etcd |
kubelet |
etcd-bootstrap-member-master1 |
Killing |
Stopping container etcdctl | |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.26:8443/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) | |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
ProbeError |
Readiness probe error: Get "https://10.128.0.26:8443/readyz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) body: | |
kube-system |
kubelet |
bootstrap-kube-scheduler-master1 |
Started |
Started container kube-scheduler | |
kube-system |
kubelet |
bootstrap-kube-scheduler-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine | |
kube-system |
kubelet |
bootstrap-kube-scheduler-master1 |
Created |
Created container kube-scheduler | |
openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master1 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [+]etcd ok [-]etcd-readiness failed: reason withheld [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-deprecated-api-requests-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]shutdown ok readyz check failed | |
openshift-etcd |
kubelet |
etcd-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d1371d52c5233f6daf04aa0b0c12f29799155c15b49031bd9581d78529742b2" already present on machine | |
openshift-etcd |
kubelet |
etcd-master1 |
Created |
Created container setup | |
openshift-etcd |
kubelet |
etcd-master1 |
Started |
Started container setup | |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
ProbeError |
Liveness probe error: Get "https://10.128.0.26:8443/healthz": context deadline exceeded body: | |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
Unhealthy |
Liveness probe failed: Get "https://10.128.0.26:8443/healthz": context deadline exceeded | |
(x2) | openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
Unhealthy |
Liveness probe failed: Get "https://10.128.0.26:8443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
Killing |
Container oauth-apiserver failed liveness probe, will be restarted | |
(x2) | openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
ProbeError |
Liveness probe error: Get "https://10.128.0.26:8443/healthz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
openshift-etcd |
kubelet |
etcd-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d1371d52c5233f6daf04aa0b0c12f29799155c15b49031bd9581d78529742b2" already present on machine | |
openshift-etcd |
kubelet |
etcd-master1 |
Started |
Started container etcd-ensure-env-vars | |
openshift-etcd |
kubelet |
etcd-master1 |
Created |
Created container etcd-ensure-env-vars | |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
ProbeError |
Readiness probe error: Get "https://10.128.0.26:8443/readyz": context deadline exceeded body: | |
openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.26:8443/readyz": context deadline exceeded | |
(x3) | openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master1 |
Unhealthy |
Liveness probe failed: HTTP probe failed with statuscode: 500 |
(x3) | openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master1 |
ProbeError |
Liveness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [-]etcd failed: reason withheld [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-deprecated-api-requests-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok livez check failed |
(x7) | openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.26:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
(x7) | openshift-oauth-apiserver |
kubelet |
apiserver-68b6d6d454-ltjtf |
ProbeError |
Readiness probe error: Get "https://10.128.0.26:8443/readyz": net/http: request canceled (Client.Timeout exceeded while awaiting headers) body: |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigWriteError |
Failed to write observed config: Timeout: request did not complete within requested timeout - context deadline exceeded | |
openshift-etcd |
kubelet |
etcd-master1 |
Started |
Started container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-master1 |
Created |
Created container etcd-resources-copy | |
openshift-etcd |
kubelet |
etcd-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d1371d52c5233f6daf04aa0b0c12f29799155c15b49031bd9581d78529742b2" already present on machine | |
openshift-route-controller-manager |
kubelet |
route-controller-manager-6c77d44985-7k8lf |
FailedMount |
Unable to attach or mount volumes: unmounted volumes=[client-ca], unattached volumes=[config client-ca serving-cert kube-api-access-s7lqk]: timed out waiting for the condition | |
openshift-controller-manager |
kubelet |
controller-manager-5d9b9687f-w8g4x |
FailedMount |
Unable to attach or mount volumes: unmounted volumes=[client-ca], unattached volumes=[config client-ca serving-cert proxy-ca-bundles kube-api-access-s5fcs]: timed out waiting for the condition | |
(x9) | openshift-controller-manager |
kubelet |
controller-manager-5d9b9687f-w8g4x |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
(x9) | openshift-route-controller-manager |
kubelet |
route-controller-manager-6c77d44985-7k8lf |
FailedMount |
MountVolume.SetUp failed for volume "client-ca" : configmap "client-ca" not found |
openshift-etcd |
kubelet |
etcd-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d1371d52c5233f6daf04aa0b0c12f29799155c15b49031bd9581d78529742b2" already present on machine | |
openshift-etcd |
kubelet |
etcd-master1 |
Started |
Started container etcdctl | |
openshift-etcd |
kubelet |
etcd-master1 |
Created |
Created container etcd | |
openshift-etcd |
kubelet |
etcd-master1 |
Created |
Created container etcdctl | |
openshift-etcd |
kubelet |
etcd-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d1371d52c5233f6daf04aa0b0c12f29799155c15b49031bd9581d78529742b2" already present on machine | |
openshift-etcd |
kubelet |
etcd-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d1371d52c5233f6daf04aa0b0c12f29799155c15b49031bd9581d78529742b2" already present on machine | |
openshift-etcd |
kubelet |
etcd-master1 |
Started |
Started container etcd | |
openshift-etcd |
kubelet |
etcd-master1 |
Started |
Started container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-master1 |
Created |
Created container etcd-metrics | |
openshift-etcd |
kubelet |
etcd-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8090f9dd771f4f292e508b5ffca3aca3b4e6226aed25e131e49a9b6596b0b451" already present on machine | |
openshift-etcd |
kubelet |
etcd-master1 |
Created |
Created container etcd-readyz | |
openshift-etcd |
kubelet |
etcd-master1 |
Started |
Started container etcd-readyz | |
(x5) | openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master1 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [-]etcd failed: reason withheld [-]etcd-readiness failed: reason withheld [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-deprecated-api-requests-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [+]shutdown ok readyz check failed |
openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master1 |
Killing |
Container kube-apiserver failed liveness probe, will be restarted | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded changed from False to True ("BootstrapTeardownDegraded: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries") | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded changed from False to True ("InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again") | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/template.openshift.io/v1: 401" to "APIServicesAvailable: Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.apps.openshift.io\": stream error: stream ID 3459; INTERNAL_ERROR; received from peer" | |
openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master1 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [-]etcd failed: reason withheld [-]etcd-readiness failed: reason withheld [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-deprecated-api-requests-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed | |
openshift-machine-api |
control-plane-machine-set-operator-749d766b67-gc5pf_9a01a829-2d13-4911-a8d9-7d56877a1128 |
control-plane-machine-set-leader |
LeaderElection |
control-plane-machine-set-operator-749d766b67-gc5pf_9a01a829-2d13-4911-a8d9-7d56877a1128 became leader | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/revision-status-3 -n openshift-etcd because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeControllerDegraded: The master nodes not ready: node \"master1\" not ready since 2023-02-13 14:52:05 +0000 UTC because KubeletNotReady (container runtime status check may not have completed yet)\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
RevisionTriggered |
new revision 3 triggered by "configmap/etcd-endpoints has changed" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "AuditPolicyDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps audit)" to "All is well" | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available message changed from "APIServicesAvailable: Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.apps.openshift.io\": stream error: stream ID 3459; INTERNAL_ERROR; received from peer" to "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.apps.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/apps.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/apps.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.authorization.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/authorization.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/authorization.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.build.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/build.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/build.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.image.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/image.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/image.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.project.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/project.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/project.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.quota.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/quota.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/quota.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.route.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/route.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/route.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.security.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/security.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/security.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.template.openshift.io: not available: failing or missing response from https://10.128.0.61:8443/apis/template.openshift.io/v1: bad status from https://10.128.0.61:8443/apis/template.openshift.io/v1: 401" | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorStatusChanged |
Status for clusteroperator/config-operator changed: Degraded message changed from "All is well" to "KubeCloudConfigControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" | |
openshift-config-operator |
config-operator-status-controller-statussyncer_config-operator |
openshift-config-operator |
OperatorStatusChanged |
Status for clusteroperator/config-operator changed: Degraded message changed from "KubeCloudConfigControllerDegraded: Timeout: request did not complete within requested timeout - context deadline exceeded" to "All is well" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-pod-3 -n openshift-etcd because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller |
etcd-operator |
ConfigMapUpdated |
Updated ConfigMap/etcd-endpoints -n openshift-etcd: cause by changes in data.MTkyLjE2OC4xMjYuMTA,data.fe32c3d6206a7a33 | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-3 -n openshift-etcd because it was missing | |
(x2) | openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
InstallerPodFailed |
installer errors: installer: g) (len=24) "openshift-kube-scheduler", PodConfigMapNamePrefix: (string) (len=18) "kube-scheduler-pod", SecretNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=31) "localhost-recovery-client-token" }, OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=12) "serving-cert" }, ConfigMapNamePrefixes: ([]string) (len=5 cap=8) { (string) (len=18) "kube-scheduler-pod", (string) (len=6) "config", (string) (len=17) "serviceaccount-ca", (string) (len=20) "scheduler-kubeconfig", (string) (len=37) "kube-scheduler-cert-syncer-kubeconfig" }, OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) { (string) (len=16) "policy-configmap" }, CertSecretNames: ([]string) (len=1 cap=1) { (string) (len=30) "kube-scheduler-client-cert-key" }, OptionalCertSecretNamePrefixes: ([]string) <nil>, CertConfigMapNamePrefixes: ([]string) <nil>, OptionalCertConfigMapNamePrefixes: ([]string) <nil>, CertDir: (string) (len=57) "/etc/kubernetes/static-pod-resources/kube-scheduler-certs", ResourceDir: (string) (len=36) "/etc/kubernetes/static-pod-resources", PodManifestDir: (string) (len=25) "/etc/kubernetes/manifests", Timeout: (time.Duration) 2m0s, StaticPodManifestsLockFile: (string) "", PodMutationFns: ([]installerpod.PodMutationFunc) <nil>, KubeletVersion: (string) "" }) I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1 I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1 I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1 I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1 F0213 14:53:43.084056 1 cmd.go:106] Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller": net/http: request canceled (Client.Timeout exceeded while awaiting headers) |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master1)" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master1)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa\": stream error: stream ID 2389; INTERNAL_ERROR; received from peer\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa\": stream error: stream ID 2389; INTERNAL_ERROR; received from peer\nBackingResourceControllerDegraded: " | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-peer-client-ca-3 -n openshift-etcd because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master1)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master1)\nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa\": stream error: stream ID 2389; INTERNAL_ERROR; received from peer\nBackingResourceControllerDegraded: " | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master1)" to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nStaticPodsDegraded: the server was unable to return a response in the time allotted, but may still be processing the request (get pods openshift-kube-scheduler-master1)" | |
openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master1 |
ProbeError |
Readiness probe error: HTTP probe failed with statuscode: 500 body: [+]ping ok [+]log ok [-]etcd failed: reason withheld [+]etcd-readiness ok [+]api-openshift-apiserver-available ok [+]api-openshift-oauth-apiserver-available ok [+]informer-sync ok [+]poststarthook/openshift.io-oauth-apiserver-reachable ok [+]poststarthook/start-kube-apiserver-admission-initializer ok [+]poststarthook/quota.openshift.io-clusterquotamapping ok [+]poststarthook/openshift.io-deprecated-api-requests-filter ok [+]poststarthook/openshift.io-startkubeinformers ok [+]poststarthook/openshift.io-openshift-apiserver-reachable ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/rbac/bootstrap-roles ok [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-wait-for-first-sync ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok [-]shutdown failed: reason withheld readyz check failed | |
(x8) | openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master1 |
Unhealthy |
Readiness probe failed: HTTP probe failed with statuscode: 500 |
openshift-authentication-operator |
kubelet |
authentication-operator-68df59f464-ffd6s |
Created |
Created container authentication-operator | |
openshift-authentication-operator |
kubelet |
authentication-operator-68df59f464-ffd6s |
Started |
Started container authentication-operator | |
openshift-authentication-operator |
kubelet |
authentication-operator-68df59f464-ffd6s |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a1252ab4a94ef96c90c19a926c6c10b1c73186377f408414c8a3aa1949a0a75" already present on machine | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller |
etcd-operator |
ReportEtcdMembersErrorUpdatingStatus |
Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused | |
(x3) | openshift-etcd-operator |
openshift-cluster-etcd-operator-member-observer-controller-etcdmemberscontroller |
etcd-operator |
EtcdMembersErrorUpdatingStatus |
Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine | |
openshift-kube-apiserver |
kubelet |
bootstrap-kube-apiserver-master1 |
Created |
Created container kube-apiserver | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-script-controller-scriptcontroller |
etcd-operator |
ScriptControllerErrorUpdatingStatus |
etcds.operator.openshift.io "cluster" is forbidden: User "system:serviceaccount:openshift-etcd-operator:etcd-operator" cannot update resource "etcds/status" in API group "operator.openshift.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "cluster-admin" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found] | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigWriteError |
Failed to write observed config: kubeapiservers.operator.openshift.io "cluster" is forbidden: User "system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator" cannot update resource "kubeapiservers" in API group "operator.openshift.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "cluster-admin" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "cluster-status" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:openshift:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found] | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller |
etcd-operator |
EtcdEndpointsErrorUpdatingStatus |
etcds.operator.openshift.io "cluster" is forbidden: User "system:serviceaccount:openshift-etcd-operator:etcd-operator" cannot update resource "etcds/status" in API group "operator.openshift.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:build-strategy-jenkinspipeline" not found, clusterrole.rbac.authorization.k8s.io "cluster-admin" not found, clusterrole.rbac.authorization.k8s.io "system:scope-impersonation" not found, clusterrole.rbac.authorization.k8s.io "system:build-strategy-source" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found] | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorVersionChanged |
clusteroperator/etcd version "operator" changed from "" to "4.12.2" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorVersionChanged |
clusteroperator/etcd version "etcd" changed from "" to "4.12.2" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/installer-sa\": stream error: stream ID 2389; INTERNAL_ERROR; received from peer\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): serviceaccounts \"installer-sa\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nBackingResourceControllerDegraded: " | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:openshift-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]\nOpenshiftControllerManagerStaticResourcesDegraded: " | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "BootstrapTeardownDegraded: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" to "BackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): serviceaccounts \"installer-sa\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-etcd\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found]\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:operator:openshift-etcd-installer\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nBackingResourceControllerDegraded: \nBootstrapTeardownDegraded: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: status.versions changed from [{"raw-internal" "4.12.2"}] to [{"raw-internal" "4.12.2"} {"etcd" "4.12.2"} {"operator" "4.12.2"}] | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-revisioncontroller |
etcd-operator |
ConfigMapCreated |
Created ConfigMap/etcd-metrics-proxy-serving-ca-3 -n openshift-etcd because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nSATokenSignerDegraded: pods is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot list resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" | |
(x2) | openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): namespaces \"openshift-kube-storage-version-migrator\" is forbidden: User \"system:serviceaccount:openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"openshift-kube-storage-version-migrator\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeStorageVersionMigratorStaticResourcesDegraded: " to "All is well" |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): serviceaccounts \"installer-sa\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nBackingResourceControllerDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): serviceaccounts \"installer-sa\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:operator:kube-scheduler:public-2\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): roles.rbac.authorization.k8s.io \"system:openshift:sa-listing-configmaps\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:sa-listing-configmaps\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): services \"scheduler\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"services\" in API group \"\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeControllerManagerStaticResourcesDegraded: " | |
openshift-apiserver-operator |
openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver |
openshift-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/sa.yaml\" (string): serviceaccounts \"openshift-apiserver-sa\" is forbidden: User \"system:serviceaccount:openshift-apiserver-operator:openshift-apiserver-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-apiserver\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found]\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/pdb.yaml\" (string): poddisruptionbudgets.policy \"openshift-apiserver-pdb\" is forbidden: User \"system:serviceaccount:openshift-apiserver-operator:openshift-apiserver-operator\" cannot delete resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-apiserver\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]\nAPIServerStaticResourcesDegraded: " | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-script-controller-scriptcontroller |
etcd-operator |
ScriptControllerErrorUpdatingStatus |
Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again | |
openshift-kube-storage-version-migrator-operator |
openshift-kube-storage-version-migrator-operator-status-controller-statussyncer_kube-storage-version-migrator |
kube-storage-version-migrator-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-storage-version-migrator changed: Degraded message changed from "All is well" to "KubeStorageVersionMigratorStaticResourcesDegraded: \"kube-storage-version-migrator/namespace.yaml\" (string): namespaces \"openshift-kube-storage-version-migrator\" is forbidden: User \"system:serviceaccount:openshift-kube-storage-version-migrator-operator:kube-storage-version-migrator-operator\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"openshift-kube-storage-version-migrator\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeStorageVersionMigratorStaticResourcesDegraded: " | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "BackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): serviceaccounts \"installer-sa\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-etcd\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found]\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:operator:openshift-etcd-installer\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nBackingResourceControllerDegraded: \nBootstrapTeardownDegraded: giving up getting a cached client after 3 tries\nClusterMemberControllerDegraded: could not get list of unhealthy members: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" to "BackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): serviceaccounts \"installer-sa\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-etcd\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found]\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:operator:openshift-etcd-installer\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nBackingResourceControllerDegraded: \nBootstrapTeardownDegraded: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:openshift-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well" | |
(x4) | openshift-etcd-operator |
openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller |
etcd-operator |
EtcdEndpointsErrorUpdatingStatus |
Operation cannot be fulfilled on etcds.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "BackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): serviceaccounts \"installer-sa\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-etcd\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found]\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:operator:openshift-etcd-installer\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nBackingResourceControllerDegraded: \nBootstrapTeardownDegraded: giving up getting a cached client after 3 tries\nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" to "BackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): serviceaccounts \"installer-sa\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-etcd\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found]\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:operator:openshift-etcd-installer\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nBackingResourceControllerDegraded: \nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" |
kube-system |
default-scheduler |
kube-scheduler |
LeaderElection |
master1_687e072c-4a01-4280-89a8-6ccff202b489 became leader | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nBackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): serviceaccounts \"installer-sa\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nBackingResourceControllerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:operator:kube-scheduler:public-2\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): roles.rbac.authorization.k8s.io \"system:openshift:sa-listing-configmaps\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:sa-listing-configmaps\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): services \"scheduler\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"services\" in API group \"\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:operator:kube-scheduler:public-2\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): roles.rbac.authorization.k8s.io \"system:openshift:sa-listing-configmaps\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:sa-listing-configmaps\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): services \"scheduler\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"services\" in API group \"\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeControllerManagerStaticResourcesDegraded: " | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "BackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): serviceaccounts \"installer-sa\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-etcd\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found]\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:operator:openshift-etcd-installer\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nBackingResourceControllerDegraded: \nEtcdEndpointsDegraded: failed to get member list: giving up getting a cached client after 3 tries\nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" to "BackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): serviceaccounts \"installer-sa\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-etcd\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found]\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:operator:openshift-etcd-installer\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nBackingResourceControllerDegraded: \nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nSATokenSignerDegraded: pods is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot list resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): roles.rbac.authorization.k8s.io \"system:openshift:leader-election-lock-cluster-policy-controller\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-election-lock-cluster-policy-controller\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]\nKubeControllerManagerStaticResourcesDegraded: \nSATokenSignerDegraded: pods is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot list resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" | |
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-metrics-proxy-client-ca-3 -n openshift-etcd because it was missing |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-4-retry-1-master1 -n openshift-kube-scheduler because it was missing |
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master1_4e1c7710-6361-4bf4-ac9e-af670f6c290b became leader |
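The LeaderElection event above is emitted by client-go's leader-election helper when a controller acquires its coordination lock. A minimal sketch of that mechanism, with an assumed lease name and identity (nothing here is read from this cluster):

```go
// Sketch: acquire a Lease-based leader lock the way controllers such as
// kube-controller-manager do. Lease name, namespace, and identity are
// illustrative assumptions.
package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{Name: "demo-lock", Namespace: "kube-system"},
		Client:    client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			// The identity is what shows up in "<id> became leader" events.
			Identity: "master1_example-uid",
		},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second,
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
			OnStoppedLeading: func() { log.Println("lost leadership") },
		},
	})
}
```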
default | node-controller | master1 | RegisteredNode | Node master1 event: Registered Node master1 in Controller |
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": secret \"node-system-admin-client\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" |
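Each OperatorStatusChanged event records a transition of a ClusterOperator condition; the current value can be read back from the clusteroperators.config.openshift.io resource. A hedged sketch using the dynamic client (the resource group and condition type are as they appear in these events; everything else is illustrative):

```go
// Sketch: read the Degraded condition of a ClusterOperator via the
// dynamic client, since ClusterOperator is a CRD-style config resource.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)

	gvr := schema.GroupVersionResource{
		Group: "config.openshift.io", Version: "v1", Resource: "clusteroperators",
	}
	co, err := dyn.Resource(gvr).Get(context.TODO(), "kube-apiserver", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// status.conditions holds Degraded/Progressing/Available entries.
	conds, _, _ := unstructured.NestedSlice(co.Object, "status", "conditions")
	for _, c := range conds {
		m := c.(map[string]interface{})
		if m["type"] == "Degraded" {
			fmt.Printf("Degraded=%v: %v\n", m["status"], m["message"])
		}
	}
}
```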
openshift-marketplace | kubelet | community-operators-hx2zr | Pulled | Successfully pulled image "registry.redhat.io/redhat/community-operator-index:v4.12" in 827.202438ms |
openshift-marketplace | kubelet | community-operators-hx2zr | Pulling | Pulling image "registry.redhat.io/redhat/community-operator-index:v4.12" |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): roles.rbac.authorization.k8s.io \"system:openshift:leader-election-lock-cluster-policy-controller\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-election-lock-cluster-policy-controller\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]\nKubeControllerManagerStaticResourcesDegraded: \nSATokenSignerDegraded: pods is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot list resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): roles.rbac.authorization.k8s.io \"system:openshift:leader-election-lock-cluster-policy-controller\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-election-lock-cluster-policy-controller\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]\nKubeControllerManagerStaticResourcesDegraded: " |
openshift-marketplace | multus | community-operators-hx2zr | AddedInterface | Add eth0 [10.128.0.66/23] from ovn-kubernetes |
openshift-marketplace | multus | redhat-marketplace-j79l7 | AddedInterface | Add eth0 [10.128.0.67/23] from ovn-kubernetes |
openshift-marketplace | kubelet | redhat-marketplace-j79l7 | Pulling | Pulling image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" |
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master1_4e1c7710-6361-4bf4-ac9e-af670f6c290b became leader |
openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "BackingResourceControllerDegraded: \"manifests/installer-sa.yaml\" (string): serviceaccounts \"installer-sa\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-etcd\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found]\nBackingResourceControllerDegraded: \"manifests/installer-cluster-rolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:operator:openshift-etcd-installer\" is forbidden: User \"system:serviceaccount:openshift-etcd-operator:etcd-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nBackingResourceControllerDegraded: \nEtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" to "EtcdMembersControllerDegraded: giving up getting a cached client after 3 tries" |
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "APIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/ns.yaml\" (string): the server was unable to return a response in the time allotted, but may still be processing the request (get namespaces openshift-apiserver)\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/sa.yaml\" (string): serviceaccounts \"openshift-apiserver-sa\" is forbidden: User \"system:serviceaccount:openshift-apiserver-operator:openshift-apiserver-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-apiserver\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found]\nAPIServerStaticResourcesDegraded: \"v3.11.0/openshift-apiserver/pdb.yaml\" (string): poddisruptionbudgets.policy \"openshift-apiserver-pdb\" is forbidden: User \"system:serviceaccount:openshift-apiserver-operator:openshift-apiserver-operator\" cannot delete resource \"poddisruptionbudgets\" in API group \"policy\" in the namespace \"openshift-apiserver\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]\nAPIServerStaticResourcesDegraded: " to "All is well" |
openshift-marketplace | kubelet | redhat-operators-vfgtk | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.12" |
openshift-ingress-canary | daemonset-controller | ingress-canary | SuccessfulCreate | Created pod: ingress-canary-lbrhv |
openshift-marketplace | multus | redhat-operators-vfgtk | AddedInterface | Add eth0 [10.128.0.65/23] from ovn-kubernetes |
openshift-marketplace | multus | certified-operators-tvmdn | AddedInterface | Add eth0 [10.128.0.68/23] from ovn-kubernetes |
openshift-marketplace | kubelet | redhat-marketplace-j79l7 | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-marketplace-index:v4.12" in 695.01381ms |
openshift-marketplace | kubelet | redhat-operators-vfgtk | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.12" in 1.238140498s |
openshift-ingress-canary | kubelet | ingress-canary-lbrhv | Started | Started container serve-healthcheck-canary |
openshift-marketplace | kubelet | community-operators-hx2zr | Started | Started container registry-server |
openshift-ingress-canary | kubelet | ingress-canary-lbrhv | Created | Created container serve-healthcheck-canary |
openshift-ingress-canary | kubelet | ingress-canary-lbrhv | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6275a171f6d5a523627963860415ed0e43f1728f2dd897c49412600bf64bc9c3" already present on machine |
openshift-marketplace | kubelet | certified-operators-tvmdn | Pulling | Pulling image "registry.redhat.io/redhat/certified-operator-index:v4.12" |
openshift-kube-scheduler | kubelet | installer-4-retry-1-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" already present on machine |
openshift-ingress-canary | multus | ingress-canary-lbrhv | AddedInterface | Add eth0 [10.128.0.70/23] from ovn-kubernetes |
openshift-kube-scheduler | multus | installer-4-retry-1-master1 | AddedInterface | Add eth0 [10.128.0.69/23] from ovn-kubernetes |
openshift-marketplace | kubelet | redhat-marketplace-j79l7 | Started | Started container registry-server |
openshift-marketplace | kubelet | community-operators-hx2zr | Created | Created container registry-server |
openshift-marketplace | kubelet | redhat-operators-vfgtk | Created | Created container registry-server |
openshift-marketplace | kubelet | redhat-operators-vfgtk | Started | Started container registry-server |
openshift-marketplace | kubelet | redhat-marketplace-j79l7 | Created | Created container registry-server |
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | ConfigMapCreated | Created ConfigMap/etcd-endpoints-3 -n openshift-etcd because it was missing |
openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:operator:kube-scheduler:public-2\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): roles.rbac.authorization.k8s.io \"system:openshift:sa-listing-configmaps\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:sa-listing-configmaps\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): services \"scheduler\" is forbidden: User \"system:serviceaccount:openshift-kube-scheduler-operator:openshift-kube-scheduler-operator\" cannot get resource \"services\" in API group \"\" in the namespace \"openshift-kube-scheduler\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " |
openshift-kube-scheduler | kubelet | installer-4-retry-1-master1 | Started | Started container installer |
openshift-kube-scheduler | kubelet | installer-4-retry-1-master1 | Created | Created container installer |
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | RevisionCreate | Revision 2 created because configmap/etcd-endpoints has changed |
openshift-etcd-operator | openshift-cluster-etcd-operator-revisioncontroller | etcd-operator | SecretCreated | Created Secret/etcd-all-certs-3 -n openshift-etcd because it was missing |
openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeCurrentRevisionChanged | Updated node "master1" from revision 0 to 2 because static pod is ready |
openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 2" to "NodeInstallerProgressing: 1 nodes are at revision 2; 0 nodes have achieved new revision 3",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 2; 0 nodes have achieved new revision 3") |
(x6) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | RequiredInstallerResourcesMissing | configmaps: client-ca |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): roles.rbac.authorization.k8s.io \"system:openshift:leader-election-lock-cluster-policy-controller\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-docker\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:service-account-issuer-discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-status\" not found]\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-election-lock-cluster-policy-controller\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-kube-controller-manager\": RBAC: [clusterrole.rbac.authorization.k8s.io \"system:build-strategy-jenkinspipeline\" not found, clusterrole.rbac.authorization.k8s.io \"cluster-admin\" not found, clusterrole.rbac.authorization.k8s.io \"system:scope-impersonation\" not found, clusterrole.rbac.authorization.k8s.io \"system:openshift:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:build-strategy-source\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]\nKubeControllerManagerStaticResourcesDegraded: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca" |
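The GarbageCollectorDegraded condition that persists through these transitions is a DNS failure: resolving thanos-querier.openshift-monitoring.svc against the cluster resolver at 172.30.0.10:53 returns "no such host". A small diagnostic sketch (meant to run from inside a pod; the service name and resolver IP are taken from the event, everything else is illustrative) that reproduces the same lookup:

```go
// Sketch: resolve a Service DNS name against a specific cluster DNS server,
// mirroring the failing lookup in GarbageCollectorDegraded.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// Force every query to the cluster DNS Service IP from the event.
			return d.DialContext(ctx, network, "172.30.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(),
		"thanos-querier.openshift-monitoring.svc")
	if err != nil {
		fmt.Println("lookup failed, matching the degraded condition:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}
```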
openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | NodeTargetRevisionChanged | Updating node "master1" from revision 2 to 3 because node master1 with revision 2 is the oldest |
openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 2; 0 nodes have achieved new revision 3\nEtcdMembersAvailable: 1 members are available" |
openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found") |
openshift-etcd-operator | openshift-cluster-etcd-operator-installer-controller | etcd-operator | PodCreated | Created Pod/installer-3-master1 -n openshift-etcd because it was missing |
openshift-etcd | kubelet | installer-3-master1 | Created | Created container installer |
openshift-etcd | kubelet | installer-3-master1 | Started | Started container installer |
openshift-etcd | kubelet | installer-3-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8090f9dd771f4f292e508b5ffca3aca3b4e6226aed25e131e49a9b6596b0b451" already present on machine |
openshift-etcd | multus | installer-3-master1 | AddedInterface | Add eth0 [10.128.0.71/23] from ovn-kubernetes |
openshift-marketplace | kubelet | certified-operators-tvmdn | Pulled | Successfully pulled image "registry.redhat.io/redhat/certified-operator-index:v4.12" in 11.424458858s |
openshift-marketplace | kubelet | certified-operators-tvmdn | Created | Created container registry-server |
openshift-marketplace | kubelet | certified-operators-tvmdn | Started | Started container registry-server |
(x2) | openshift-marketplace | kubelet | redhat-marketplace-j79l7 | Killing | Stopping container registry-server |
(x2) | openshift-marketplace | kubelet | community-operators-hx2zr | Killing | Stopping container registry-server |
(x2) | openshift-marketplace | kubelet | redhat-operators-vfgtk | Killing | Stopping container registry-server |
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found\nWorkerLatencyProfileDegraded: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again" to "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" |
openshift-marketplace | kubelet | certified-operators-tvmdn | Killing | Stopping container registry-server |
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigWriteError | Failed to write observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again |
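"Operation cannot be fulfilled ... the object has been modified" is the API server's optimistic-concurrency conflict: the writer held a stale resourceVersion. The standard client-go remedy is to re-read and retry the update on conflict. A minimal sketch against a ConfigMap (the resource and field names here are illustrative, not the operator's actual object):

```go
// Sketch: retry an update on resourceVersion conflicts with
// k8s.io/client-go/util/retry, the conventional fix for this error.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// GET the latest copy inside the retry loop so the update carries
		// the current resourceVersion.
		cm, err := client.CoreV1().ConfigMaps("default").Get(
			context.TODO(), "example-config", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if cm.Data == nil {
			cm.Data = map[string]string{}
		}
		cm.Data["observed"] = "updated"
		_, err = client.CoreV1().ConfigMaps("default").Update(
			context.TODO(), cm, metav1.UpdateOptions{})
		return err // a Conflict error triggers another attempt
	})
	if err != nil {
		log.Fatal(err)
	}
}
```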
(x129) | openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMissing | no observedConfig |
(x40) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1 |
openshift-kube-scheduler | static-pod-installer | installer-4-retry-1-master1 | StaticPodInstallerCompleted | Successfully installed revision 4 |
(x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "operator" changed from "" to "4.12.2" |
kube-system | kubelet | bootstrap-kube-scheduler-master1 | Killing | Stopping container kube-scheduler |
(x5) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveStorageUpdated | Updated storage urls to https://192.168.126.10:2379,https://localhost:2379 |
(x5) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | cipherSuites changed to ["TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256" "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256" "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"] |
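The cipher list and the VersionTLS12 floor recorded by ObserveTLSSecurityProfile map directly onto Go's crypto/tls constants. A minimal sketch of the equivalent server-side TLS configuration:

```go
// Sketch: the TLS settings corresponding to the observed cipherSuites and
// minTLSVersion in these events, expressed as a crypto/tls config.
package main

import "crypto/tls"

func observedTLSConfig() *tls.Config {
	return &tls.Config{
		MinVersion: tls.VersionTLS12, // "minTLSVersion changed to VersionTLS12"
		CipherSuites: []uint16{
			tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
			tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
			tls.TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,
			tls.TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256,
		},
	}
}

func main() {
	_ = observedTLSConfig()
}
```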
(x5) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObservedConfigChanged | Writing updated observed config: map[string]interface{}{ + "admission": map[string]interface{}{ + "pluginConfig": map[string]interface{}{ + "network.openshift.io/ExternalIPRanger": map[string]interface{}{"configuration": map[string]interface{}{...}}, + "network.openshift.io/RestrictedEndpointsAdmission": map[string]interface{}{"configuration": map[string]interface{}{...}}, + }, + }, + "apiServerArguments": map[string]interface{}{ + "api-audiences": []interface{}{string("https://kubernetes.default.svc")}, + "authentication-token-webhook-config-file": []interface{}{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, + "authentication-token-webhook-version": []interface{}{string("v1")}, + "etcd-servers": []interface{}{string("https://192.168.126.10:2379"), string("https://localhost:2379")}, + "feature-gates": []interface{}{ + string("APIPriorityAndFairness=true"), + string("RotateKubeletServerCertificate=true"), + string("DownwardAPIHugePages=true"), string("CSIMigrationAzureFile=false"), + string("CSIMigrationvSphere=false"), + }, + "service-account-issuer": []interface{}{string("https://kubernetes.default.svc")}, + "service-account-jwks-uri": []interface{}{string("https://api-int.test-cluster.redhat.com:6443/openid/v1/jwks")}, + "shutdown-delay-duration": []interface{}{string("0s")}, + }, + "corsAllowedOrigins": []interface{}{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, + "gracefulTerminationDuration": string("15"), + "servicesSubnet": string("172.30.0.0/16"), + "servingInfo": map[string]interface{}{ + "bindAddress": string("0.0.0.0:6443"), + "bindNetwork": string("tcp4"), + "cipherSuites": []interface{}{ + string("TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"), + string("TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"), + string("TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256"), + string("TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256"), + }, + "minTLSVersion": string("VersionTLS12"), + "namedCertificates": []interface{}{ + map[string]interface{}{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]interface{}{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]interface{}{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]interface{}{ + "certFile": string("/etc/kubernetes/static-pod-certs"...), + "keyFile": string("/etc/kubernetes/static-pod-certs"...), + }, + map[string]interface{}{ + "certFile": string("/etc/kubernetes/static-pod-resou"...), + "keyFile": string("/etc/kubernetes/static-pod-resou"...), + }, + }, + }, } |
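The ObservedConfigChanged message above is rendered as a structured diff, where lines prefixed with "+" are additions relative to the previous observed config. Assuming a go-cmp style diff (the library is an assumption; the event format strongly resembles its output), a tiny sketch of producing such a message:

```go
// Sketch: render a map-to-map diff in the "+"-prefixed style seen in the
// ObservedConfigChanged event. Assumes github.com/google/go-cmp is available.
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	previous := map[string]interface{}{}
	observed := map[string]interface{}{
		"gracefulTerminationDuration": "15",
		"servicesSubnet":              "172.30.0.0/16",
	}
	// cmp.Diff renders removed fields with "-" and added fields with "+",
	// matching the layout of the event message.
	fmt.Println(cmp.Diff(previous, observed))
}
```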
(x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorVersionChanged | clusteroperator/kube-scheduler version "kube-scheduler" changed from "" to "1.25.4" |
openshift-kube-apiserver-operator | kube-apiserver-operator-target-config-controller-targetconfigcontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config -n openshift-kube-apiserver because it was missing |
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "ConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" |
(x5) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveTLSSecurityProfile | minTLSVersion changed to VersionTLS12 |
(x58) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 2 triggered by "configmap \"kube-apiserver-pod\" not found" |
(x5) | openshift-kube-apiserver-operator | kube-apiserver-operator-config-observer-configobserver | kube-apiserver-operator | ObserveFeatureFlagsUpdated | Updated apiServerArguments.feature-gates to APIPriorityAndFairness=true,RotateKubeletServerCertificate=true,DownwardAPIHugePages=true,CSIMigrationAzureFile=false,CSIMigrationvSphere=false |
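The feature-gates value written by ObserveFeatureFlagsUpdated is a plain comma-separated list of name=bool pairs. A small, self-contained sketch of parsing that format (the helper name is ours, not from any OpenShift library):

```go
// Sketch: parse a comma-separated feature-gates string like the one in the
// event above into a map of gate name -> enabled.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

func parseFeatureGates(s string) (map[string]bool, error) {
	gates := map[string]bool{}
	for _, pair := range strings.Split(s, ",") {
		name, value, ok := strings.Cut(pair, "=")
		if !ok {
			return nil, fmt.Errorf("malformed gate %q", pair)
		}
		enabled, err := strconv.ParseBool(value)
		if err != nil {
			return nil, err
		}
		gates[name] = enabled
	}
	return gates, nil
}

func main() {
	gates, err := parseFeatureGates("APIPriorityAndFairness=true,RotateKubeletServerCertificate=true,DownwardAPIHugePages=true,CSIMigrationAzureFile=false,CSIMigrationvSphere=false")
	if err != nil {
		panic(err)
	}
	fmt.Println(gates["APIPriorityAndFairness"], gates["CSIMigrationvSphere"])
}
```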
(x5) | openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObserveWebhookTokenAuthenticator |
authentication-token webhook configuration status changed from false to true |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: status.versions changed from [{"raw-internal" "4.12.2"}] to [{"raw-internal" "4.12.2"} {"kube-scheduler" "1.25.4"} {"operator" "4.12.2"}] | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Started |
Started container wait-for-host-port | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Created |
Created container wait-for-host-port | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/revision-status-2 -n openshift-kube-apiserver: cause by changes in data.reason | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "ConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [configmaps: client-ca, secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "ConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-client-ca -n openshift-config-managed because it was missing | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Created |
Created container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Started |
Started container kube-scheduler | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-resource-sync-controller-resourcesynccontroller |
kube-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Started |
Started container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Created |
Created container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Created |
Created container kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Started |
Started container kube-scheduler-cert-syncer | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled down replica set controller-manager-5d9b9687f to 0 from 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-2 -n openshift-kube-apiserver because it was missing | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled up replica set route-controller-manager-7cb74487c to 1 from 0 | |
openshift-route-controller-manager |
deployment-controller |
route-controller-manager |
ScalingReplicaSet |
Scaled down replica set route-controller-manager-6c77d44985 to 0 from 1 | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-7cb74487c |
SuccessfulCreate |
Created pod: route-controller-manager-7cb74487c-h9grb | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-controller-manager because it was missing | |
openshift-route-controller-manager |
replicaset-controller |
route-controller-manager-6c77d44985 |
SuccessfulDelete |
Deleted pod: route-controller-manager-6c77d44985-7k8lf | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreateFailed |
Failed to create ConfigMap/kube-apiserver-client-ca -n openshift-config-managed: configmaps "kube-apiserver-client-ca" already exists | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapCreated |
Created ConfigMap/client-ca -n openshift-route-controller-manager because it was missing | |
openshift-controller-manager |
replicaset-controller |
controller-manager-5d9b9687f |
SuccessfulDelete |
Deleted pod: controller-manager-5d9b9687f-w8g4x | |
openshift-controller-manager |
replicaset-controller |
controller-manager-779c4cdcc7 |
SuccessfulCreate |
Created pod: controller-manager-779c4cdcc7-sfmpx | |
openshift-controller-manager |
deployment-controller |
controller-manager |
ScalingReplicaSet |
Scaled up replica set controller-manager-779c4cdcc7 to 1 from 0 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "ConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on kubeapiservers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nInstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" | |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/revision-status-5 -n openshift-kube-scheduler because it was missing |
(x8) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | RequiredInstallerResourcesMissing | secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1 |
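
The RequiredInstallerResourcesMissing event above lists the revision-suffixed ConfigMaps and Secrets the installer expects in openshift-kube-apiserver before it can roll out a revision. A minimal sketch of checking which of them actually exist, assuming the `kubernetes` Python client and kubeconfig access to this cluster (the name lists are copied from the event):

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    # Resource names taken from the RequiredInstallerResourcesMissing event above.
    NS = "openshift-kube-apiserver"
    CONFIGMAPS = ["bound-sa-token-signing-certs-1", "config-1", "etcd-serving-ca-1",
                  "kube-apiserver-audit-policies-1", "kube-apiserver-cert-syncer-kubeconfig-1",
                  "kube-apiserver-pod-1", "kubelet-serving-ca-1", "sa-token-signing-certs-1"]
    SECRETS = ["node-kubeconfigs", "etcd-client-1",
               "localhost-recovery-client-token-1", "localhost-recovery-serving-certkey-1"]

    config.load_kube_config()  # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()

    def exists(read_fn, name):
        try:
            read_fn(name, NS)
            return True
        except ApiException as e:
            if e.status == 404:
                return False
            raise

    for cm in CONFIGMAPS:
        print(f"configmap/{cm}: {'present' if exists(v1.read_namespaced_config_map, cm) else 'MISSING'}")
    for s in SECRETS:
        print(f"secret/{s}: {'present' if exists(v1.read_namespaced_secret, s) else 'MISSING'}")
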
(x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-targetconfigcontroller | openshift-kube-scheduler-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-scheduler: cause by changes in data.ca-bundle.crt |
| openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca -n openshift-config-managed because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-client-ca\" already exists\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" | |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nResourceSyncControllerDegraded: configmaps \"kube-apiserver-client-ca\" already exists\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" | |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-5 -n openshift-kube-scheduler because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-2 -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-node-kubeconfig-controller-nodekubeconfigcontroller | kube-apiserver-operator | SecretCreated | Created Secret/node-kubeconfigs -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nNodeKubeconfigControllerDegraded: \"secret/node-kubeconfigs\": configmap \"kube-apiserver-server-ca\" not found\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "InstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" | |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [secrets: node-kubeconfigs, configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" | |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-scheduler because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-2 -n openshift-kube-apiserver because it was missing |
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-targetconfigcontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/serviceaccount-ca -n openshift-kube-controller-manager: cause by changes in data.ca-bundle.crt |
| openshift-etcd | kubelet | etcd-master1 | Killing | Stopping container etcdctl |
| openshift-etcd | kubelet | etcd-master1 | Killing | Stopping container etcd-readyz |
| openshift-etcd | kubelet | etcd-master1 | Killing | Stopping container etcd |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-5 -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-5 -n openshift-kube-scheduler because it was missing |
(x5) | kube-system | kubelet | bootstrap-kube-controller-manager-master1 | BackOff | Back-off restarting failed container |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver: Timeout: request did not complete within requested timeout - context deadline exceeded |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/revision-status-4 -n openshift-kube-controller-manager: Timeout: request did not complete within requested timeout - context deadline exceeded |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreateFailed | Failed to create Secret/serving-cert-5 -n openshift-kube-scheduler: Timeout: request did not complete within requested timeout - context deadline exceeded |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreateFailed | Failed to create revision 4: Timeout: request did not complete within requested timeout - context deadline exceeded |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreateFailed | Failed to create revision 2: Timeout: request did not complete within requested timeout - context deadline exceeded |
| openshift-etcd | kubelet | etcd-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d1371d52c5233f6daf04aa0b0c12f29799155c15b49031bd9581d78529742b2" already present on machine |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionCreateFailed | Failed to create revision 5: Timeout: request did not complete within requested timeout - context deadline exceeded |
| openshift-etcd | kubelet | etcd-master1 | Created | Created container setup |
| openshift-etcd | kubelet | etcd-master1 | Started | Started container setup |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreateFailed | Failed to create Secret/serving-cert-5 -n openshift-kube-scheduler: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets": net/http: TLS handshake timeout |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/revision-status-4 -n openshift-kube-controller-manager: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps": net/http: TLS handshake timeout |
(x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master1 | Unhealthy | Startup probe failed: Get "https://192.168.126.10:10257/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionCreateFailed | Failed to create revision 5: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets": net/http: TLS handshake timeout |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreateFailed | Failed to create revision 2: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 172.30.0.1:443: connect: connection refused |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreateFailed | Failed to create revision 4: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps": net/http: TLS handshake timeout |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreateFailed | Failed to create ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps": dial tcp 172.30.0.1:443: connect: connection refused |
(x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master1 | Unhealthy | Startup probe failed: Get "https://192.168.126.10:10257/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) |
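
The create failures above cycle through three transport-level error families: a server-side request timeout, a TLS handshake timeout, and connection refused against the service IP 172.30.0.1:443. All three are transient symptoms of the apiserver restarting, and the repeat counts (x8) show the revision controllers simply retry until it is back. A minimal sketch of the same retry-on-transient-error pattern, assuming the `kubernetes` Python client; this is an illustration, not the operators' actual requeue logic:

    import time
    import urllib3
    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    config.load_kube_config()
    v1 = client.CoreV1Api()

    def create_with_retry(namespace, body, attempts=6):
        """Retry a ConfigMap create across a transient apiserver outage."""
        delay = 1.0
        for _ in range(attempts):
            try:
                return v1.create_namespaced_config_map(namespace, body)
            except ApiException as e:
                if e.status == 409:            # already exists: treat as done
                    return None
                if e.status not in (429, 500, 503, 504):
                    raise                      # a real, non-transient API error
            except urllib3.exceptions.HTTPError:
                pass                           # connection refused / TLS / timeout
            time.sleep(delay)
            delay = min(delay * 2, 30)         # exponential backoff, capped
        raise RuntimeError(f"gave up after {attempts} attempts")
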
(x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine |
(x2) | kube-system | kubelet | bootstrap-kube-controller-manager-master1 | Killing | Container kube-controller-manager failed startup probe, will be restarted |
(x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master1 | Started | Started container kube-controller-manager |
(x4) | kube-system | kubelet | bootstrap-kube-controller-manager-master1 | Created | Created container kube-controller-manager |
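
The Killing/Started/Created cycle above is the kubelet enforcing the startup probe on :10257/healthz: after failureThreshold consecutive failed probes, one every periodSeconds, the container is restarted. The probe parameters themselves are not in these events, but they can be read off the pod spec; a minimal sketch assuming the `kubernetes` Python client:

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    # Static pod name as reported by the events above.
    pod = v1.read_namespaced_pod("bootstrap-kube-controller-manager-master1", "kube-system")
    for c in pod.spec.containers:
        p = c.startup_probe
        if p is None:
            continue
        # kubelet kills the container after failure_threshold consecutive
        # failed probes, spaced period_seconds apart.
        window = p.failure_threshold * p.period_seconds
        print(f"{c.name}: startup must succeed within ~{window}s "
              f"(timeout {p.timeout_seconds}s per attempt)")
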
| openshift-etcd | kubelet | etcd-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d1371d52c5233f6daf04aa0b0c12f29799155c15b49031bd9581d78529742b2" already present on machine |
| openshift-etcd | kubelet | etcd-master1 | Created | Created container etcd-ensure-env-vars |
| openshift-etcd | kubelet | etcd-master1 | Started | Started container etcd-ensure-env-vars |
(x3) | openshift-marketplace | kubelet | marketplace-operator-75746f848d-v4htq | BackOff | Back-off restarting failed container |
| openshift-etcd | kubelet | etcd-master1 | Started | Started container etcd-resources-copy |
(x8) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/revision-status-4 -n openshift-kube-controller-manager: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps": dial tcp 172.30.0.1:443: connect: connection refused |
(x8) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreateFailed | Failed to create revision 4: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/configmaps": dial tcp 172.30.0.1:443: connect: connection refused |
| openshift-etcd | kubelet | etcd-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d1371d52c5233f6daf04aa0b0c12f29799155c15b49031bd9581d78529742b2" already present on machine |
| openshift-etcd | kubelet | etcd-master1 | Created | Created container etcd-resources-copy |
(x8) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreateFailed | Failed to create Secret/serving-cert-5 -n openshift-kube-scheduler: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets": dial tcp 172.30.0.1:443: connect: connection refused |
| openshift-etcd | kubelet | etcd-master1 | Started | Started container etcd |
| openshift-etcd | kubelet | etcd-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d1371d52c5233f6daf04aa0b0c12f29799155c15b49031bd9581d78529742b2" already present on machine |
| openshift-etcd | kubelet | etcd-master1 | Created | Created container etcdctl |
| openshift-etcd | kubelet | etcd-master1 | Started | Started container etcdctl |
| openshift-etcd | kubelet | etcd-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d1371d52c5233f6daf04aa0b0c12f29799155c15b49031bd9581d78529742b2" already present on machine |
| openshift-etcd | kubelet | etcd-master1 | Created | Created container etcd |
(x8) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionCreateFailed | Failed to create revision 5: Post "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/secrets": dial tcp 172.30.0.1:443: connect: connection refused |
| openshift-etcd | kubelet | etcd-master1 | Created | Created container etcd-readyz |
| openshift-etcd | kubelet | etcd-master1 | Started | Started container etcd-readyz |
| openshift-etcd | kubelet | etcd-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2d1371d52c5233f6daf04aa0b0c12f29799155c15b49031bd9581d78529742b2" already present on machine |
| openshift-etcd | kubelet | etcd-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8090f9dd771f4f292e508b5ffca3aca3b4e6226aed25e131e49a9b6596b0b451" already present on machine |
| openshift-etcd | kubelet | etcd-master1 | Created | Created container etcd-metrics |
| openshift-etcd | kubelet | etcd-master1 | Started | Started container etcd-metrics |
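
The etcd static pod comes back in order: init containers (setup, etcd-ensure-env-vars, etcd-resources-copy) run to completion, then the long-running containers (etcd, etcdctl, etcd-readyz, etcd-metrics) start. A minimal sketch to confirm the restart settled, assuming the `kubernetes` Python client:

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    pod = v1.read_namespaced_pod("etcd-master1", "openshift-etcd")

    # Init containers should show terminated(Completed); main containers
    # should be running with a bumped restart count after the cycle above.
    for cs in (pod.status.init_container_statuses or []) + (pod.status.container_statuses or []):
        state = cs.state
        phase = ("running" if state.running
                 else f"terminated({state.terminated.reason})" if state.terminated
                 else "waiting")
        print(f"{cs.name}: restarts={cs.restart_count} state={phase}")
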
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-scheduler because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/revision-status-4 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-4 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-4 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-4 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-4 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-4 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-4 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-4 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-4 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-4 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-4 -n openshift-kube-controller-manager because it was missing |
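
Once the apiserver is reachable again, the revision controller materializes revision 4 as "-4"-suffixed copies of each input ConfigMap and Secret in the operand namespace, as the run of SecretCreated/ConfigMapCreated events above shows. A minimal sketch to enumerate them, assuming the `kubernetes` Python client:

    from kubernetes import client, config

    config.load_kube_config()
    v1 = client.CoreV1Api()

    NS = "openshift-kube-controller-manager"
    SUFFIX = "-4"  # revision number taken from the events above

    cms = [cm.metadata.name for cm in v1.list_namespaced_config_map(NS).items
           if cm.metadata.name.endswith(SUFFIX)]
    secrets = [s.metadata.name for s in v1.list_namespaced_secret(NS).items
               if s.metadata.name.endswith(SUFFIX)]
    print("revision-4 configmaps:", sorted(cms))
    print("revision-4 secrets:", sorted(secrets))
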
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: 
(string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " | |
(x3) | openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 2 triggered by "configmap \"kube-apiserver-pod-1\" not found" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded changed from False to True ("NodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: ") | |
(x2) | openshift-marketplace | kubelet | marketplace-operator-75746f848d-v4htq | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fb7a1e5f6616311d94b625dd3b452348bf75577b824f58a92883139f8f233681" already present on machine |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionCreate | Revision 4 created because configmap/serviceaccount-ca has changed |
(x2) | openshift-marketplace | kubelet | marketplace-operator-75746f848d-v4htq | Created | Created container marketplace-operator |
(x2) | openshift-marketplace | kubelet | marketplace-operator-75746f848d-v4htq | Started | Started container marketplace-operator |
(x12) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 5 triggered by "configmap/serviceaccount-ca has changed" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) 
\"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: " to "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: 
I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused" | |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreate | Revision 3 created because configmap/serviceaccount-ca has changed |
(x12) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 4 triggered by "configmap/serviceaccount-ca has changed" |
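
RevisionTriggered and RevisionCreate pair up: when an input such as configmap/serviceaccount-ca changes, the revision controller bumps latestAvailableRevision on the operator resource, and the installer then rolls each node toward it. A minimal sketch to read that state off the kubecontrollermanagers.operator.openshift.io CR, assuming the `kubernetes` Python client; field names follow the standard static-pod operator status:

    from kubernetes import client, config

    config.load_kube_config()
    co = client.CustomObjectsApi()

    kcm = co.get_cluster_custom_object(
        group="operator.openshift.io", version="v1",
        plural="kubecontrollermanagers", name="cluster")

    status = kcm.get("status", {})
    print("latestAvailableRevision:", status.get("latestAvailableRevision"))
    for n in status.get("nodeStatuses", []):
        print(f"node {n.get('nodeName')}: current={n.get('currentRevision')} "
              f"target={n.get('targetRevision')}")
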
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nStaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master1\": net/http: TLS handshake timeout" | |
| openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-master1\": net/http: TLS handshake timeout\nEtcdMembersDegraded: No unhealthy members found" | |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-locking-kube-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"kube-system\"\nKubeControllerManagerStaticResourcesDegraded: \nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nStaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master1\": net/http: TLS handshake timeout" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-locking-kube-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"kube-system\"\nKubeControllerManagerStaticResourcesDegraded: \nStaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master1\": net/http: TLS handshake timeout" | |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nStaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master1\": net/http: TLS handshake timeout" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-locking-kube-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"kube-system\"\nKubeControllerManagerStaticResourcesDegraded: \nStaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master1\": net/http: TLS handshake timeout" | |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-locking-kube-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"kube-system\"\nKubeControllerManagerStaticResourcesDegraded: \nStaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master1\": net/http: TLS handshake timeout" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-locking-kube-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"kube-system\"\nKubeControllerManagerStaticResourcesDegraded: \nRevisionControllerDegraded: Operation cannot be fulfilled on kubecontrollermanagers.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nStaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master1\": net/http: TLS handshake timeout" | |
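
Every OperatorStatusChanged event above is the status syncer copying controller-level degraded conditions onto a ClusterOperator object, which is why one apiserver outage fans out into Degraded messages on kube-apiserver, kube-scheduler, kube-controller-manager, and etcd at once. A minimal sketch to summarize the current Degraded set cluster-wide, assuming the `kubernetes` Python client:

    from kubernetes import client, config

    config.load_kube_config()
    co = client.CustomObjectsApi()

    ops = co.list_cluster_custom_object(
        group="config.openshift.io", version="v1", plural="clusteroperators")

    for op in ops["items"]:
        for cond in op.get("status", {}).get("conditions", []):
            if cond["type"] == "Degraded" and cond["status"] == "True":
                name = op["metadata"]["name"]
                # Print only the first line; as the events above show,
                # these messages can run to kilobytes.
                lines = (cond.get("message") or "").splitlines()
                print(f"{name}: {lines[0][:200] if lines else ''}")
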
(x2) | openshift-marketplace | kubelet | marketplace-operator-75746f848d-v4htq | Unhealthy | Readiness probe failed: Get "http://10.128.0.8:8080/healthz": dial tcp 10.128.0.8:8080: connect: connection refused |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-2 -n openshift-kube-apiserver because it was missing |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: secrets \"serving-cert-5\" already exists\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused" to "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused" | 
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused" to "KubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \nNodeInstallerDegraded: 1 nodes are failing on revision 4:\nNodeInstallerDegraded: installer: g) (len=24) \"openshift-kube-scheduler\",\nNodeInstallerDegraded: PodConfigMapNamePrefix: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: SecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=31) \"localhost-recovery-client-token\"\nNodeInstallerDegraded: },\nNodeInstallerDegraded: OptionalSecretNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=12) \"serving-cert\"\nNodeInstallerDegraded: ConfigMapNamePrefixes: ([]string) (len=5 cap=8) {\nNodeInstallerDegraded: (string) (len=18) \"kube-scheduler-pod\",\nNodeInstallerDegraded: (string) (len=6) \"config\",\nNodeInstallerDegraded: (string) (len=17) \"serviceaccount-ca\",\nNodeInstallerDegraded: (string) (len=20) \"scheduler-kubeconfig\",\nNodeInstallerDegraded: (string) (len=37) \"kube-scheduler-cert-syncer-kubeconfig\"\nNodeInstallerDegraded: OptionalConfigMapNamePrefixes: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=16) \"policy-configmap\"\nNodeInstallerDegraded: CertSecretNames: ([]string) (len=1 cap=1) {\nNodeInstallerDegraded: (string) (len=30) \"kube-scheduler-client-cert-key\"\nNodeInstallerDegraded: OptionalCertSecretNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: OptionalCertConfigMapNamePrefixes: ([]string) <nil>,\nNodeInstallerDegraded: CertDir: (string) (len=57) \"/etc/kubernetes/static-pod-resources/kube-scheduler-certs\",\nNodeInstallerDegraded: ResourceDir: (string) (len=36) \"/etc/kubernetes/static-pod-resources\",\nNodeInstallerDegraded: PodManifestDir: (string) (len=25) \"/etc/kubernetes/manifests\",\nNodeInstallerDegraded: Timeout: (time.Duration) 2m0s,\nNodeInstallerDegraded: StaticPodManifestsLockFile: (string) \"\",\nNodeInstallerDegraded: PodMutationFns: ([]installerpod.PodMutationFunc) <nil>,\nNodeInstallerDegraded: KubeletVersion: (string) \"\"\nNodeInstallerDegraded: })\nNodeInstallerDegraded: I0213 14:52:49.071432 1 cmd.go:410] Getting controller reference for node master1\nNodeInstallerDegraded: I0213 14:52:49.077022 1 cmd.go:423] Waiting for installer revisions to settle for node master1\nNodeInstallerDegraded: I0213 14:52:49.078681 1 cmd.go:503] Pod container: installer state for node master1 is not terminated, waiting\nNodeInstallerDegraded: I0213 14:52:59.080711 1 cmd.go:515] Waiting additional period after revisions have settled for node master1\nNodeInstallerDegraded: I0213 14:53:29.081640 1 cmd.go:521] Getting installer pods for node master1\nNodeInstallerDegraded: F0213 14:53:43.084056 1 cmd.go:106] Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods?labelSelector=app%3Dinstaller\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\nNodeInstallerDegraded: \nRevisionControllerDegraded: secrets \"serving-cert-5\" already exists\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused" | 
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-revisioncontroller |
openshift-kube-scheduler-operator |
SecretCreateFailed |
Failed to create Secret/serving-cert-5 -n openshift-kube-scheduler: secrets "serving-cert-5" already exists | |
(x2) | openshift-marketplace |
kubelet |
marketplace-operator-75746f848d-v4htq |
ProbeError |
Readiness probe error: Get "http://10.128.0.8:8080/healthz": dial tcp 10.128.0.8:8080: connect: connection refused body: |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found\nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": net/http: TLS handshake timeout" | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "All is well" to "CSISnapshotWebhookControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": net/http: TLS handshake timeout" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 3" to "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 4",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 4" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-locking-kube-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"kube-system\"\nKubeControllerManagerStaticResourcesDegraded: \nStaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master1\": net/http: TLS handshake timeout" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-locking-kube-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"kube-system\"\nKubeControllerManagerStaticResourcesDegraded: " | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nInstallerControllerDegraded: missing required resources: configmaps: client-ca\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-locking-kube-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"kube-system\"\nKubeControllerManagerStaticResourcesDegraded: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-locking-kube-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"kube-system\"\nKubeControllerManagerStaticResourcesDegraded: " | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd/pods/etcd-master1\": net/http: TLS handshake timeout\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "master1" from revision 0 to 3 because node master1 static pod not found | |
(x2) | openshift-marketplace |
kubelet |
marketplace-operator-75746f848d-v4htq |
ProbeError |
Liveness probe error: Get "http://10.128.0.8:8080/healthz": dial tcp 10.128.0.8:8080: connect: connection refused body: |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-2 -n openshift-kube-apiserver because it was missing | |
(x2) | openshift-marketplace |
kubelet |
marketplace-operator-75746f848d-v4htq |
Unhealthy |
Liveness probe failed: Get "http://10.128.0.8:8080/healthz": dial tcp 10.128.0.8:8080: connect: connection refused |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeCurrentRevisionChanged |
Updated node "master1" from revision 0 to 4 because static pod is ready | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: "),Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 4" to "NodeInstallerProgressing: 1 nodes are at revision 4; 0 nodes have achieved new revision 5",Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 4; 0 nodes have achieved new revision 5") | |
openshift-cluster-storage-operator |
csi-snapshot-controller-operator-status-controller-statussyncer_csi-snapshot-controller |
csi-snapshot-controller-operator |
OperatorStatusChanged |
Status for clusteroperator/csi-snapshot-controller changed: Degraded message changed from "CSISnapshotWebhookControllerDegraded: Get \"https://172.30.0.1:443/apis/apps/v1/namespaces/openshift-cluster-storage-operator/deployments/csi-snapshot-webhook\": net/http: TLS handshake timeout" to "All is well" | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/scheduler-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler:public-2\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/roles/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/policyconfigmap-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-scheduler/rolebindings/system:openshift:sa-listing-configmaps\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/services/scheduler\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/openshift-kube-scheduler-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-scheduler-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-scheduler/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/serviceaccounts/localhost-recovery-client\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused" | 
openshift-etcd-operator |
openshift-cluster-etcd-operator-status-controller-statussyncer_etcd |
etcd-operator |
OperatorStatusChanged |
Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 nodes are at revision 3\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 2; 0 nodes have achieved new revision 3\nEtcdMembersAvailable: 1 members are available" to "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 3\nEtcdMembersAvailable: 1 members are available" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-2 -n openshift-kube-apiserver because it was missing | |
openshift-etcd-operator |
openshift-cluster-etcd-operator-installer-controller |
etcd-operator |
NodeCurrentRevisionChanged |
Updated node "master1" from revision 2 to 3 because static pod is ready | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeTargetRevisionChanged |
Updating node "master1" from revision 4 to 5 because node master1 with revision 4 is the oldest | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-controller-manager: cause by changes in data.openshift-controller-manager.client-ca.configmap | |
(x2) | openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentUpdated |
Updated Deployment.apps/route-controller-manager -n openshift-route-controller-manager because it changed |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Progressing message changed from "Progressing: deployment/controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 3, desired generation is 4.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" to "Progressing: deployment/controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/controller-manager: available replicas is 0, desired available replicas > 1\nProgressing: deployment/route-controller-manager: observed generation is 4, desired generation is 5.\nProgressing: deployment/route-controller-manager: available replicas is 0, desired available replicas > 1" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-route-controller-manager: cause by changes in data.openshift-route-controller-manager.client-ca.configmap | |
(x2) | openshift-controller-manager-operator |
openshift-controller-manager-operator |
openshift-controller-manager-operator |
DeploymentUpdated |
Updated Deployment.apps/controller-manager -n openshift-controller-manager because it changed |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
PodCreated |
Created Pod/installer-5-master1 -n openshift-kube-scheduler because it was missing | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nTargetConfigControllerDegraded: \"configmap\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/serviceaccount-ca\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/configmaps/serviceaccount-ca\": dial tcp 172.30.0.1:443: connect: connection refused" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
PodCreated |
Created Pod/installer-4-master1 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-scheduler |
kubelet |
installer-5-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" already present on machine | |
openshift-kube-controller-manager |
multus |
installer-4-master1 |
AddedInterface |
Add eth0 [10.128.0.73/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
kubelet |
installer-4-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" already present on machine | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-scheduler |
multus |
installer-5-master1 |
AddedInterface |
Add eth0 [10.128.0.72/23] from ovn-kubernetes | |
openshift-kube-scheduler |
kubelet |
installer-5-master1 |
Started |
Started container installer | |
openshift-kube-scheduler |
kubelet |
installer-5-master1 |
Created |
Created container installer | |
openshift-kube-controller-manager |
kubelet |
installer-4-master1 |
Created |
Created container installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager |
kubelet |
installer-4-master1 |
Started |
Started container installer | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found\nTargetConfigControllerDegraded: \"configmap/config\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/config\": dial tcp 172.30.0.1:443: connect: connection refused\nTargetConfigControllerDegraded: \"configmap/kube-apiserver-pod\": Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/configmaps/kube-apiserver-pod\": net/http: TLS handshake timeout" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": net/http: TLS handshake timeout\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): rolebindings.rbac.authorization.k8s.io \"system:openshift:leader-locking-kube-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"rolebindings\" in API group \"rbac.authorization.k8s.io\" in the namespace \"kube-system\"\nKubeControllerManagerStaticResourcesDegraded: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" | |
(x10) | openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
RequiredInstallerResourcesMissing |
configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1 |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]\nRevisionControllerDegraded: configmaps \"kube-apiserver-pod\" not found" to "InstallerControllerDegraded: missing required resources: [configmaps: bound-sa-token-signing-certs-1,config-1,etcd-serving-ca-1,kube-apiserver-audit-policies-1,kube-apiserver-cert-syncer-kubeconfig-1,kube-apiserver-pod-1,kubelet-serving-ca-1,sa-token-signing-certs-1, secrets: etcd-client-1,localhost-recovery-client-token-1,localhost-recovery-serving-certkey-1]" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-2 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionCreate |
Revision 1 created because configmap "kube-apiserver-pod-1" not found | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 1" to "NodeInstallerProgressing: 1 nodes are at revision 0; 0 nodes have achieved new revision 2",Available message changed from "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 1" to "StaticPodsAvailable: 0 nodes are active; 1 nodes are at revision 0; 0 nodes have achieved new revision 2" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master1" from revision 0 to 2 because node master1 static pod not found | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") | |
openshift-kube-apiserver |
kubelet |
installer-2-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine | |
openshift-kube-apiserver |
kubelet |
installer-2-master1 |
Created |
Created container installer | |
openshift-kube-apiserver |
multus |
installer-2-master1 |
AddedInterface |
Add eth0 [10.128.0.74/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-2-master1 -n openshift-kube-apiserver because it was missing | |
(x4) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallWaiting | apiServices not installed |
(x4) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | waiting for install components to report healthy |
openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | NeedsReinstall | apiServices not installed | |
(x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallCheckFailed | install timeout |
openshift-kube-apiserver | kubelet | installer-2-master1 | Started | Started container installer | |
(x3) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | AllRequirementsMet | all requirements found, attempting install |
openshift-ingress-operator | kubelet | ingress-operator-6dbf96bf9c-6rdr8 | BackOff | Back-off restarting failed container | |
(x267) | openshift-ingress | kubelet | router-default-f8bd48fbf-2mnbd | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
(x2) | openshift-ingress-operator | kubelet | ingress-operator-6dbf96bf9c-6rdr8 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6275a171f6d5a523627963860415ed0e43f1728f2dd897c49412600bf64bc9c3" already present on machine |
(x3) | openshift-ingress-operator | kubelet | ingress-operator-6dbf96bf9c-6rdr8 | Started | Started container ingress-operator |
(x3) | openshift-ingress-operator | kubelet | ingress-operator-6dbf96bf9c-6rdr8 | Created | Created container ingress-operator |
openshift-kube-scheduler | static-pod-installer | installer-5-master1 | StaticPodInstallerCompleted | Successfully installed revision 5 | |
(x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Killing | Stopping container kube-scheduler-recovery-controller |
openshift-kube-controller-manager | static-pod-installer | installer-4-master1 | StaticPodInstallerCompleted | Successfully installed revision 4 | |
openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine | |
openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Created | Created container kube-controller-manager | |
(x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Killing | Stopping container kube-scheduler-cert-syncer |
(x2) | openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Killing | Stopping container kube-scheduler |
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "kube-controller-manager" changed from "" to "1.25.4" |
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorVersionChanged | clusteroperator/kube-controller-manager version "operator" changed from "" to "4.12.2" |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: status.versions changed from [{"raw-internal" "4.12.2"}] to [{"raw-internal" "4.12.2"} {"kube-controller-manager" "1.25.4"} {"operator" "4.12.2"}] | |
openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" already present on machine | |
openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Started | Started container kube-controller-manager | |
openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:09c8eb0283a9eda5b282f04357875966a549651e120e527904a917ec862eb642" already present on machine | |
openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Created | Created container cluster-policy-controller | |
openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Started | Started container cluster-policy-controller | |
openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Created | Created container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Started | Started container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Created | Created container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" already present on machine | |
openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Started | Started container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master1 | ClusterInfrastructureStatus | unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master1_ecc5efd1-daa0-4120-a44f-cd48918ec43a became leader | |
openshift-kube-controller-manager | podsecurity-admission-label-sync-controller-pod-security-admission-label-synchronization-controller-pod-security-admission-label-synchronization-controller | kube-controller-manager-master1 | FastControllerResync | Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master1 | FastControllerResync | Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master1_ecc5efd1-daa0-4120-a44f-cd48918ec43a became leader | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master1" from revision 0 to 4 because static pod is ready | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 nodes are at revision 4"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 4") | |
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine | |
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Created | Created container kube-scheduler-cert-syncer | |
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" already present on machine | |
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Started | Started container kube-scheduler | |
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Created | Created container kube-scheduler | |
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine | |
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Started | Started container wait-for-host-port | |
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Created | Created container wait-for-host-port | |
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Started | Started container kube-scheduler-cert-syncer | |
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" already present on machine | |
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Started | Started container kube-scheduler-recovery-controller | |
openshift-kube-scheduler | kubelet | openshift-kube-scheduler-master1 | Created | Created container kube-scheduler-recovery-controller | |
openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master1_895d804a-12fc-4af3-90e6-f12d302dc878 became leader | |
openshift-kube-scheduler | default-scheduler | kube-scheduler | LeaderElection | master1_895d804a-12fc-4af3-90e6-f12d302dc878 became leader | |
(x3) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | Failed to create installer pod for revision 2 count 0 on node "master1": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master1": dial tcp 172.30.0.1:443: connect: connection refused |
default | kubelet | master1 | Starting | Starting kubelet. | |
(x8) | default | kubelet | master1 | NodeHasSufficientMemory | Node master1 status is now: NodeHasSufficientMemory |
(x8) | default | kubelet | master1 | NodeHasNoDiskPressure | Node master1 status is now: NodeHasNoDiskPressure |
(x7) | default | kubelet | master1 | NodeHasSufficientPID | Node master1 status is now: NodeHasSufficientPID |
default | kubelet | master1 | NodeAllocatableEnforced | Updated Node Allocatable limit across pods | |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Started | Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Created | Created container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine | |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Created | Created container kube-apiserver | |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine | |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Started | Started container kube-apiserver-cert-syncer | |
(x10) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | InstallerPodFailed | Failed to create installer pod for revision 5 count 1 on node "master1": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-5-master1": dial tcp 172.30.0.1:443: connect: connection refused |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Created | Created container kube-apiserver-cert-syncer | |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine | |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Started | Started container kube-apiserver | |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine | |
openshift-kube-apiserver | cert-syncer-certsynccontroller | kube-apiserver-master1 | FastControllerResync | Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Created | Created container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Started | Started container kube-apiserver-insecure-readyz | |
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master1_a328fe6e-b698-47c5-a86d-99f2d7bc9df9 became leader | |
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master1_a328fe6e-b698-47c5-a86d-99f2d7bc9df9 became leader | |
openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master1_35d26368-2f74-4c4a-8890-8054311ac064 became leader | |
openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master1_35d26368-2f74-4c4a-8890-8054311ac064 became leader | |
openshift-route-controller-manager | kubelet | route-controller-manager-7cb74487c-h9grb | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-controller-manager | kubelet | controller-manager-779c4cdcc7-sfmpx | FailedMount | MountVolume.SetUp failed for volume "proxy-ca-bundles" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-monitoring | kubelet | openshift-state-metrics-5ff95d844f-hdc8s | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring | kubelet | openshift-state-metrics-5ff95d844f-hdc8s | FailedMount | MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition | |
default | node-controller | master1 | RegisteredNode | Node master1 event: Registered Node master1 in Controller | |
openshift-ingress | kubelet | router-default-f8bd48fbf-2mnbd | FailedMount | MountVolume.SetUp failed for volume "stats-auth" : failed to sync secret cache: timed out waiting for the condition | |
openshift-ingress | kubelet | router-default-f8bd48fbf-2mnbd | FailedMount | MountVolume.SetUp failed for volume "default-certificate" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | FailedMount | MountVolume.SetUp failed for volume "secret-telemeter-client" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring | kubelet | node-exporter-rqgmw | FailedMount | MountVolume.SetUp failed for volume "node-exporter-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-ingress | kubelet | router-default-f8bd48fbf-2mnbd | FailedMount | MountVolume.SetUp failed for volume "metrics-certs" : failed to sync secret cache: timed out waiting for the condition | |
openshift-apiserver | kubelet | apiserver-6c9d449c6-bt726 | FailedMount | MountVolume.SetUp failed for volume "etcd-client" : failed to sync secret cache: timed out waiting for the condition | |
openshift-apiserver | kubelet | apiserver-6c9d449c6-bt726 | FailedMount | MountVolume.SetUp failed for volume "trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-apiserver | kubelet | apiserver-6c9d449c6-bt726 | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-apiserver | kubelet | apiserver-6c9d449c6-bt726 | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring | kubelet | node-exporter-rqgmw | FailedMount | MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | FailedMount | MountVolume.SetUp failed for volume "telemeter-trusted-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-monitoring | kubelet | kube-state-metrics-75455b796c-45w6j | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-ingress | kubelet | router-default-f8bd48fbf-2mnbd | FailedMount | MountVolume.SetUp failed for volume "service-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-controller-manager | kubelet | controller-manager-779c4cdcc7-sfmpx | FailedMount | MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-controller-manager | kubelet | controller-manager-779c4cdcc7-sfmpx | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-multus | kubelet | multus-additional-cni-plugins-pcprp | FailedMount | MountVolume.SetUp failed for volume "cni-sysctl-allowlist" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-controller-manager | kubelet | controller-manager-779c4cdcc7-sfmpx | FailedMount | MountVolume.SetUp failed for volume "config" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | FailedMount | MountVolume.SetUp failed for volume "telemeter-client-tls" : failed to sync secret cache: timed out waiting for the condition | |
openshift-route-controller-manager | kubelet | route-controller-manager-7cb74487c-h9grb | FailedMount | MountVolume.SetUp failed for volume "client-ca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-cluster-node-tuning-operator | kubelet | tuned-q94d7 | FailedMount | MountVolume.SetUp failed for volume "var-lib-tuned-profiles-data" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-monitoring | kubelet | kube-state-metrics-75455b796c-45w6j | FailedMount | MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition | |
openshift-route-controller-manager | kubelet | route-controller-manager-7cb74487c-h9grb | FailedMount | MountVolume.SetUp failed for volume "serving-cert" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | FailedMount | MountVolume.SetUp failed for volume "secret-telemeter-client-kube-rbac-proxy-config" : failed to sync secret cache: timed out waiting for the condition | |
openshift-monitoring | kubelet | telemeter-client-8ffdbd7d6-h294p | FailedMount | MountVolume.SetUp failed for volume "serving-certs-ca-bundle" : failed to sync configmap cache: timed out waiting for the condition | |
(x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine |
openshift-controller-manager | kubelet | controller-manager-779c4cdcc7-sfmpx | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbeeb31a94b29354971d11e3db852e7a6ec8d2b70b8ec323a01b124281e49261" | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/revision-status-5 -n openshift-kube-controller-manager because it was missing | |
openshift-route-controller-manager | kubelet | route-controller-manager-7cb74487c-h9grb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c23b71619bd88c1bfa093cfa1a72db148937e8f1637c99ff164bf566eaf78b8" | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-5 -n openshift-kube-controller-manager because it was missing | |
(x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Started | Started container kube-apiserver-check-endpoints |
(x2) | openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Created | Created container kube-apiserver-check-endpoints |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver | check-endpoint-checkendpointsstop | master1 | FastControllerResync | Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-apiserver | check-endpoint-checkendpointstimetostart | master1 | FastControllerResync | Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling | |
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled down replica set controller-manager-779c4cdcc7 to 0 from 1 | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-5 -n openshift-kube-controller-manager because it was missing | |
openshift-controller-manager | replicaset-controller | controller-manager-779c4cdcc7 | SuccessfulDelete | Deleted pod: controller-manager-779c4cdcc7-sfmpx | |
openshift-controller-manager | deployment-controller | controller-manager | ScalingReplicaSet | Scaled up replica set controller-manager-7c9df9889d to 1 from 0 | |
openshift-route-controller-manager | replicaset-controller | route-controller-manager-66c9d88ff4 | SuccessfulCreate | Created pod: route-controller-manager-66c9d88ff4-lz6w7 | |
openshift-controller-manager | replicaset-controller | controller-manager-7c9df9889d | SuccessfulCreate | Created pod: controller-manager-7c9df9889d-wvt8q | |
(x58) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerStuck | unexpected addresses: 192.168.126.10 |
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled up replica set route-controller-manager-66c9d88ff4 to 1 from 0 | |
openshift-monitoring | replicaset-controller | prometheus-adapter-786496f679 | SuccessfulCreate | Created pod: prometheus-adapter-786496f679-ffgkb | |
openshift-monitoring | deployment-controller | prometheus-adapter | ScalingReplicaSet | Scaled up replica set prometheus-adapter-786496f679 to 1 | |
openshift-route-controller-manager | replicaset-controller | route-controller-manager-7cb74487c | SuccessfulDelete | Deleted pod: route-controller-manager-7cb74487c-h9grb | |
openshift-route-controller-manager | deployment-controller | route-controller-manager | ScalingReplicaSet | Scaled down replica set route-controller-manager-7cb74487c to 0 from 1 | |
openshift-monitoring | kubelet | prometheus-adapter-786496f679-ffgkb | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56e8f74cab8fdae7f7bbf1c9a307a5fb98eac750a306ec8073478f0899259609" | |
openshift-monitoring | multus | prometheus-adapter-786496f679-ffgkb | AddedInterface | Add eth0 [10.128.0.77/23] from ovn-kubernetes | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SATokenSignerControllerOK | found expected kube-apiserver endpoints | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreated | Created Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-5 -n openshift-kube-controller-manager because it was missing | |
openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-master1 container \"etcd\" started at 2023-02-13 14:56:49 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-config-managed: cause by changes in data.service-account-002.pub | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-5 -n openshift-kube-controller-manager because it was missing | |
openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | ComponentUnhealthy | apiServices not installed | |
(x2) | openshift-operator-lifecycle-manager | operator-lifecycle-manager | packageserver | InstallSucceeded | install strategy completed with no errors |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-5 -n openshift-kube-controller-manager because it was missing | |
openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-779c4cdcc7-sfmpx became leader | |
openshift-route-controller-manager | kubelet | route-controller-manager-7cb74487c-h9grb | Created | Created container route-controller-manager | |
openshift-controller-manager | kubelet | controller-manager-779c4cdcc7-sfmpx | Started | Started container controller-manager | |
openshift-controller-manager | kubelet | controller-manager-779c4cdcc7-sfmpx | Created | Created container controller-manager | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/serving-cert-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-5 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-resource-sync-controller-resourcesynccontroller | kube-apiserver-operator | ConfigMapUpdated | Updated ConfigMap/sa-token-signing-certs -n openshift-kube-apiserver: cause by changes in data.service-account-002.pub | |
openshift-controller-manager | kubelet | controller-manager-779c4cdcc7-sfmpx | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbeeb31a94b29354971d11e3db852e7a6ec8d2b70b8ec323a01b124281e49261" in 8.173939092s | |
openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-779c4cdcc7-sfmpx became leader | |
openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-7cb74487c-h9grb became leader | |
openshift-route-controller-manager | kubelet | route-controller-manager-7cb74487c-h9grb | Started | Started container route-controller-manager | |
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionTriggered | new revision 3 triggered by "configmap/sa-token-signing-certs has changed" | |
openshift-route-controller-manager | kubelet | route-controller-manager-7cb74487c-h9grb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c23b71619bd88c1bfa093cfa1a72db148937e8f1637c99ff164bf566eaf78b8" in 7.125070901s | |
openshift-controller-manager | kubelet | controller-manager-779c4cdcc7-sfmpx | Killing | Stopping container controller-manager | |
openshift-monitoring | kubelet | prometheus-adapter-786496f679-ffgkb | Created | Created container prometheus-adapter | |
openshift-route-controller-manager | kubelet | route-controller-manager-7cb74487c-h9grb | Killing | Stopping container route-controller-manager | |
openshift-monitoring | kubelet | prometheus-adapter-786496f679-ffgkb | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56e8f74cab8fdae7f7bbf1c9a307a5fb98eac750a306ec8073478f0899259609" in 6.726205091s | |
(x9) | openshift-ingress | kubelet | router-default-f8bd48fbf-2mnbd | ProbeError | Startup probe error: HTTP probe failed with statuscode: 500 body: [-]backend-http failed: reason withheld [-]has-synced failed: reason withheld [+]process-running ok healthz check failed |
(x9) | openshift-ingress | kubelet | router-default-f8bd48fbf-2mnbd | Unhealthy | Startup probe failed: HTTP probe failed with statuscode: 500 |
openshift-monitoring | kubelet | prometheus-adapter-786496f679-ffgkb | Started | Started container prometheus-adapter | |
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/revision-status-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreateFailed | Failed to create revision 5: configmaps "revision-status-5" already exists | |
openshift-controller-manager | multus | controller-manager-7c9df9889d-wvt8q | AddedInterface | Add eth0 [10.128.0.78/23] from ovn-kubernetes | |
openshift-image-registry | image-registry-operator | cluster-image-registry-operator | DaemonSetCreated | Created DaemonSet.apps/node-ca -n openshift-image-registry because it was missing | |
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-pod-3 -n openshift-kube-apiserver because it was missing | |
openshift-controller-manager | kubelet | controller-manager-7c9df9889d-wvt8q | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbeeb31a94b29354971d11e3db852e7a6ec8d2b70b8ec323a01b124281e49261" already present on machine | |
openshift-controller-manager | kubelet | controller-manager-7c9df9889d-wvt8q | Created | Created container controller-manager | |
openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7c9df9889d-wvt8q became leader | |
openshift-controller-manager | kubelet | controller-manager-7c9df9889d-wvt8q | Started | Started container controller-manager | |
openshift-controller-manager | openshift-controller-manager | openshift-master-controllers | LeaderElection | controller-manager-7c9df9889d-wvt8q became leader | |
openshift-image-registry | daemonset-controller | node-ca | SuccessfulCreate | Created pod: node-ca-phrb9 | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection 
refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-role-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-rolebinding-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " | |
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/config-3 -n openshift-kube-apiserver because it was missing | |
openshift-image-registry | kubelet | node-ca-phrb9 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fabfe66dbbe204c284860937d453712fe199940fb1088823268fe611a44b793" | |
openshift-image-registry | kubelet | node-ca-phrb9 | FailedMount | MountVolume.SetUp failed for volume "serviceca" : failed to sync configmap cache: timed out waiting for the condition | |
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: status.versions changed from [{"raw-internal" "4.12.2"}] to [{"raw-internal" "4.12.2"} {"kube-apiserver" "1.25.4"} {"operator" "4.12.2"}] | |
openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-3 -n openshift-kube-apiserver because it was missing | |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-role-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-rolebinding-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup 
thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-role-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-rolebinding-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nRevisionControllerDegraded: configmaps \"revision-status-5\" already exists\nSATokenSignerDegraded: secrets \"next-service-account-private-key\" already exists" | |
openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged | Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nStaticPodsDegraded: pod/etcd-master1 container \"etcd\" started at 2023-02-13 14:56:49 +0000 UTC is still not ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" | |
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "operator" changed from "" to "4.12.2" | |
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorVersionChanged | clusteroperator/kube-apiserver version "kube-apiserver" changed from "" to "1.25.4" | |
openshift-route-controller-manager | multus | route-controller-manager-66c9d88ff4-lz6w7 | AddedInterface | Add eth0 [10.128.0.79/23] from ovn-kubernetes | |
openshift-route-controller-manager | kubelet | route-controller-manager-66c9d88ff4-lz6w7 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c23b71619bd88c1bfa093cfa1a72db148937e8f1637c99ff164bf566eaf78b8" already present on machine | |
openshift-route-controller-manager | kubelet | route-controller-manager-66c9d88ff4-lz6w7 | Created | Created container route-controller-manager | |
openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master1 | CreatedSCCRanges | created SCC ranges for openshift-console-user-settings namespace | |
openshift-route-controller-manager | kubelet | route-controller-manager-66c9d88ff4-lz6w7 | Started | Started container route-controller-manager | |
(x4) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreateFailed | Failed to create ConfigMap/revision-status-5 -n openshift-kube-controller-manager: configmaps "revision-status-5" already exists |
(x4) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretCreateFailed | Failed to create Secret/next-service-account-private-key -n openshift-kube-controller-manager-operator: secrets "next-service-account-private-key" already exists |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-role-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-rolebinding-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nSATokenSignerDegraded: secrets \"next-service-account-private-key\" already exists" to "GarbageCollectorDegraded: error fetching rules: Get 
\"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-role-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-rolebinding-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " | |
| openshift-route-controller-manager | route-controller-manager | openshift-route-controllers | LeaderElection | route-controller-manager-66c9d88ff4-lz6w7 became leader |
(x6) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 5 triggered by "secret/localhost-recovery-client-token has changed" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreate | Revision 4 created because secret/localhost-recovery-client-token has changed |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-role-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-rolebinding-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nRevisionControllerDegraded: configmaps \"revision-status-5\" already exists\nSATokenSignerDegraded: secrets 
\"next-service-account-private-key\" already exists" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: 
connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-role-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-rolebinding-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \nSATokenSignerDegraded: secrets \"next-service-account-private-key\" already exists" | |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/bound-sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorVersionChanged | clusteroperator/openshift-controller-manager version "operator" changed from "" to "4.12.2" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 nodes are at revision 4; 0 nodes have achieved new revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 4; 0 nodes have achieved new revision 5" |
| openshift-controller-manager-operator | openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager | openshift-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/openshift-controller-manager changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well"),status.versions changed from [] to [{"operator" "4.12.2"}] |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get 
\"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-binding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-role-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-kube-controller-manager-rolebinding-kube-system.yaml\" (string): Delete \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-election-lock-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup 
thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" | |
| openshift-image-registry | kubelet | node-ca-phrb9 | Started | Started container node-ca |
| openshift-image-registry | kubelet | node-ca-phrb9 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fabfe66dbbe204c284860937d453712fe199940fb1088823268fe611a44b793" in 4.879138355s |
| openshift-image-registry | kubelet | node-ca-phrb9 | Created | Created container node-ca |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/revision-status-6 -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-pod-6 -n openshift-kube-scheduler because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/etcd-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | NodeCurrentRevisionChanged | Updated node "master1" from revision 0 to 2 because static pod is ready |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-scheduler because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged | Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 nodes are at revision 2"),Available changed from False to True ("StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 2") |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeTargetRevisionChanged | Updating node "master1" from revision 4 to 5 because node master1 with revision 4 is the oldest |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/scheduler-kubeconfig-6 -n openshift-kube-scheduler because it was missing |
| openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master1 | Killing | Stopping container startup-monitor |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-server-ca-3 -n openshift-kube-apiserver because it was missing |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-5-master1 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/serving-cert-6 -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | ConfigMapCreated | Created ConfigMap/kube-scheduler-cert-syncer-kubeconfig-6 -n openshift-kube-scheduler because it was missing |
| openshift-kube-controller-manager | multus | installer-5-master1 | AddedInterface | Add eth0 [10.128.0.80/23] from ovn-kubernetes |
| openshift-kube-controller-manager | kubelet | installer-5-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" already present on machine |
(x2) | openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionTriggered | new revision 6 triggered by "secret/localhost-recovery-client-token has changed" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 6" to "NodeControllerDegraded: All master nodes are ready" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kubelet-serving-ca-3 -n openshift-kube-apiserver because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 6" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-revisioncontroller | openshift-kube-scheduler-operator | RevisionCreate | Revision 5 created because secret/localhost-recovery-client-token has changed |
| openshift-kube-controller-manager | kubelet | installer-5-master1 | Created | Created container installer |
| openshift-kube-controller-manager | kubelet | installer-5-master1 | Started | Started container installer |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/sa-token-signing-certs-3 -n openshift-kube-apiserver because it was missing |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler | openshift-kube-scheduler-operator | OperatorStatusChanged | Status for clusteroperator/kube-scheduler changed: Progressing message changed from "NodeInstallerProgressing: 1 nodes are at revision 4; 0 nodes have achieved new revision 5" to "NodeInstallerProgressing: 1 nodes are at revision 4; 0 nodes have achieved new revision 6",Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 4; 0 nodes have achieved new revision 6" |
| openshift-console-operator | replicaset-controller | console-operator-7f587bf69b | SuccessfulCreate | Created pod: console-operator-7f587bf69b-gpgdn |
| openshift-console-operator | deployment-controller | console-operator | ScalingReplicaSet | Scaled up replica set console-operator-7f587bf69b to 1 |
| openshift-kube-scheduler | kubelet | installer-6-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" already present on machine |
| openshift-kube-scheduler | multus | installer-6-master1 | AddedInterface | Add eth0 [10.128.0.82/23] from ovn-kubernetes |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | ConfigMapCreated | Created ConfigMap/kube-apiserver-audit-policies-3 -n openshift-kube-apiserver because it was missing |
| openshift-console-operator | multus | console-operator-7f587bf69b-gpgdn | AddedInterface | Add eth0 [10.128.0.81/23] from ovn-kubernetes |
| openshift-console-operator | kubelet | console-operator-7f587bf69b-gpgdn | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13c6c13414ca1ad1b47ed6b7e785e92f1e435dff1d70709fb807c23a98803a32" |
| openshift-kube-scheduler-operator | openshift-cluster-kube-scheduler-operator-installer-controller | openshift-kube-scheduler-operator | PodCreated | Created Pod/installer-6-master1 -n openshift-kube-scheduler because it was missing |
| openshift-kube-scheduler | kubelet | installer-6-master1 | Created | Created container installer |
| openshift-kube-scheduler | kubelet | installer-6-master1 | Started | Started container installer |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/etcd-client-3 -n openshift-kube-apiserver because it was missing |
| openshift-console-operator | kubelet | console-operator-7f587bf69b-gpgdn | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13c6c13414ca1ad1b47ed6b7e785e92f1e435dff1d70709fb807c23a98803a32" in 4.958388594s |
| openshift-console-operator | kubelet | console-operator-7f587bf69b-gpgdn | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:13c6c13414ca1ad1b47ed6b7e785e92f1e435dff1d70709fb807c23a98803a32" already present on machine |
| openshift-console-operator | kubelet | console-operator-7f587bf69b-gpgdn | Started | Started container console-operator |
| openshift-console-operator | kubelet | console-operator-7f587bf69b-gpgdn | Created | Created container console-operator |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-serving-certkey-3 -n openshift-kube-apiserver because it was missing |
| openshift-console-operator | console-operator | console-operator-lock | LeaderElection | console-operator-7f587bf69b-gpgdn_e23972ea-f775-43c1-9b44-f48b8995183e became leader |
| openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/default-ingress-cert -n openshift-console because it was missing |
| openshift-console | replicaset-controller | downloads-797d94d7f9 | SuccessfulCreate | Created pod: downloads-797d94d7f9-ln4r8 |
| openshift-console-operator | kubelet | console-operator-7f587bf69b-gpgdn | Started | Started container conversion-webhook-server |
| openshift-console-operator | console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller | console-operator | DeploymentCreated | Created Deployment.apps/downloads -n openshift-console because it was missing |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/localhost-recovery-client-token-3 -n openshift-kube-apiserver because it was missing |
(x2) | openshift-console | controllermanager | downloads | NoPods | No matching pods found |
| openshift-console-operator | console-operator | console-operator-lock | LeaderElection | console-operator-7f587bf69b-gpgdn_e23972ea-f775-43c1-9b44-f48b8995183e became leader |
| openshift-console-operator | console-operator-console-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/console -n openshift-console because it was missing |
| openshift-console-operator | kubelet | console-operator-7f587bf69b-gpgdn | Created | Created container conversion-webhook-server |
(x2) | openshift-console | controllermanager | console | NoPods | No matching pods found |
| openshift-console | deployment-controller | downloads | ScalingReplicaSet | Scaled up replica set downloads-797d94d7f9 to 1 |
| openshift-console-operator | console-operator-downloads-pdb-controller-poddisruptionbudgetcontroller | console-operator | PodDisruptionBudgetCreated | Created PodDisruptionBudget.policy/downloads -n openshift-console because it was missing |
| openshift-console-operator | console-operator-resource-sync-controller-resourcesynccontroller | console-operator | ConfigMapCreated | Created ConfigMap/oauth-serving-cert -n openshift-console because it was missing |
| openshift-console-operator | console-operator-health-check-controller-healthcheckcontroller | console-operator | FastControllerResync | Controller "HealthCheckController" resync interval is set to 30s which might lead to client request throttling |
| openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded set to Unknown (""),Progressing set to Unknown (""),Available set to Unknown (""),Upgradeable set to Unknown (""),status.relatedObjects changed from [] to [{"operator.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "consoles" "" "cluster"} {"config.openshift.io" "infrastructures" "" "cluster"} {"config.openshift.io" "proxies" "" "cluster"} {"config.openshift.io" "oauths" "" "cluster"} {"oauth.openshift.io" "oauthclients" "" "console"} {"" "namespaces" "" "openshift-console-operator"} {"" "namespaces" "" "openshift-console"} {"" "configmaps" "openshift-config-managed" "console-public"}],status.versions changed from [] to [{"operator" "4.12.2"}] |
| openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorVersionChanged | clusteroperator/console version "operator" changed from "" to "4.12.2" |
| openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/downloads -n openshift-console because it was missing |
| openshift-console-operator | console-operator-loggingsyncer | console-operator | FastControllerResync | Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling |
| openshift-console-operator | console-operator-unsupportedconfigoverridescontroller | console-operator | FastControllerResync | Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | RevisionCreate | Revision 2 created because configmap/sa-token-signing-certs has changed |
| openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded changed from Unknown to False ("All is well") |
| openshift-console | multus | downloads-797d94d7f9-ln4r8 | AddedInterface | Add eth0 [10.128.0.83/23] from ovn-kubernetes |
| openshift-console | kubelet | downloads-797d94d7f9-ln4r8 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:851bbd751f0896f040e55e8fbf0c621e96f3ea2536cb1dfbdcc9a890bcbf2a32" |
| openshift-kube-apiserver-operator | kube-apiserver-operator-revisioncontroller | kube-apiserver-operator | SecretCreated | Created Secret/webhook-authenticator-3 -n openshift-kube-apiserver because it was missing |
| openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from Unknown to False ("All is well") |
| openshift-console-operator | console-operator-console-service-controller-consoleservicecontroller | console-operator | ServiceCreated | Created Service/console -n openshift-console because it was missing |
| openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Degraded message changed from "All is well" to "DefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.test-cluster.redhat.com in route console in namespace openshift-console" |
| openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-config -n openshift-console because it was missing |
| openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentCreated | Created Deployment.apps/console -n openshift-console because it was missing |
| openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapCreated | Created ConfigMap/console-public -n openshift-config-managed because it was missing |
| openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | SecretCreated | Created Secret/console-oauth-config -n openshift-console because it was missing |
| openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-5c44fb5754 to 1 |
| openshift-console | replicaset-controller | console-5c44fb5754 | SuccessfulCreate | Created pod: console-5c44fb5754-9zfch |
| openshift-console | multus | console-5c44fb5754-9zfch | AddedInterface | Add eth0 [10.128.0.84/23] from ovn-kubernetes |
| openshift-console | kubelet | console-5c44fb5754-9zfch | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81ecc8fb6073babcfb5c08b43206fbbe49e5c0c0694dc3fb6433aebfa9e0bd0f" |
| openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: Changes made during sync updates, additional sync expected."),Available changed from Unknown to False ("DeploymentAvailable: 0 replicas available for console deployment") |
openshift-console |
replicaset-controller |
console-87d9d6878 |
SuccessfulCreate |
Created pod: console-87d9d6878-bjhl4 | |
openshift-console |
deployment-controller |
console |
ScalingReplicaSet |
Scaled up replica set console-87d9d6878 to 1 | |
openshift-console-operator |
console-operator-console-downloads-deployment-controller-consoledownloadsdeploymentsynccontroller |
console-operator |
DeploymentUpdated |
Updated Deployment.apps/downloads -n openshift-console because it changed | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Upgradeable changed from Unknown to True ("All is well") | |
openshift-console |
multus |
console-87d9d6878-bjhl4 |
AddedInterface |
Add eth0 [10.128.0.85/23] from ovn-kubernetes | |
openshift-console |
kubelet |
console-87d9d6878-bjhl4 |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81ecc8fb6073babcfb5c08b43206fbbe49e5c0c0694dc3fb6433aebfa9e0bd0f" | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "DefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.test-cluster.redhat.com in route console in namespace openshift-console" to "DefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.test-cluster.redhat.com in route console in namespace openshift-console\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.test-cluster.redhat.com returns '503 Service Unavailable'",Available message changed from "DeploymentAvailable: 0 replicas available for console deployment" to "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.test-cluster.redhat.com returns '503 Service Unavailable'" | |
openshift-console |
kubelet |
downloads-797d94d7f9-ln4r8 |
Started |
Started container download-server | |
openshift-console |
kubelet |
downloads-797d94d7f9-ln4r8 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:851bbd751f0896f040e55e8fbf0c621e96f3ea2536cb1dfbdcc9a890bcbf2a32" in 13.33652909s | |
openshift-console |
kubelet |
console-5c44fb5754-9zfch |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81ecc8fb6073babcfb5c08b43206fbbe49e5c0c0694dc3fb6433aebfa9e0bd0f" in 8.297692915s | |
openshift-console |
kubelet |
console-5c44fb5754-9zfch |
Created |
Created container console | |
openshift-console |
kubelet |
console-5c44fb5754-9zfch |
Started |
Started container console | |
openshift-console |
kubelet |
downloads-797d94d7f9-ln4r8 |
Created |
Created container download-server | |
openshift-console |
kubelet |
downloads-797d94d7f9-ln4r8 |
Unhealthy |
Readiness probe failed: Get "http://10.128.0.83:8080/": dial tcp 10.128.0.83:8080: connect: connection refused | |
openshift-console |
kubelet |
downloads-797d94d7f9-ln4r8 |
ProbeError |
Readiness probe error: Get "http://10.128.0.83:8080/": dial tcp 10.128.0.83:8080: connect: connection refused body: | |
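The Unhealthy/ProbeError pair above is the kubelet probing the download-server port before the container has started listening, so a few "connection refused" readiness failures during startup are expected and clear once the pod reports ready. A hedged sketch for isolating just these probe events in a namespace (the client setup and field selector are assumptions, not part of the log):

```python
# Sketch: list readiness-probe failure events in openshift-console, matching
# the Unhealthy/ProbeError records in this table. Assumes kubeconfig access.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

events = core.list_namespaced_event(
    "openshift-console",
    field_selector="reason=ProbeError",
)
for ev in events.items:
    # ev.count carries the aggregation rendered as (xN) later in this table
    print(ev.count, ev.involved_object.name, ev.message)
```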
openshift-console |
kubelet |
console-87d9d6878-bjhl4 |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81ecc8fb6073babcfb5c08b43206fbbe49e5c0c0694dc3fb6433aebfa9e0bd0f" in 1.524511401s | |
openshift-console |
kubelet |
console-87d9d6878-bjhl4 |
Started |
Started container console | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master1" from revision 2 to 3 because node master1 with revision 2 is the oldest | |
openshift-console |
kubelet |
console-87d9d6878-bjhl4 |
Created |
Created container console | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 nodes are at revision 2; 0 nodes have achieved new revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 2" to "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 2; 0 nodes have achieved new revision 3" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-3-master1 -n openshift-kube-apiserver because it was missing | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing message changed from "SyncLoopRefreshProgressing: Changes made during sync updates, additional sync expected." to "SyncLoopRefreshProgressing: Working toward version 4.12.2, 0 replicas available" | |
openshift-kube-apiserver |
kubelet |
installer-3-master1 |
Created |
Created container installer | |
openshift-kube-apiserver |
kubelet |
installer-3-master1 |
Started |
Started container installer | |
openshift-kube-apiserver |
kubelet |
installer-3-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine | |
openshift-kube-apiserver |
multus |
installer-3-master1 |
AddedInterface |
Add eth0 [10.128.0.86/23] from ovn-kubernetes | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master1 |
Killing |
Stopping container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master1 |
Killing |
Stopping container cluster-policy-controller | |
openshift-kube-controller-manager |
static-pod-installer |
installer-5-master1 |
StaticPodInstallerCompleted |
Successfully installed revision 5 | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master1 |
Killing |
Stopping container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master1 |
Killing |
Stopping container kube-controller-manager | |
openshift-kube-scheduler |
static-pod-installer |
installer-6-master1 |
StaticPodInstallerCompleted |
Successfully installed revision 6 | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Killing |
Stopping container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Killing |
Stopping container kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Killing |
Stopping container kube-scheduler-recovery-controller | |
openshift-console-operator |
console-operator-status-controller-statussyncer_console |
console-operator |
OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "DefaultRouteSyncDegraded: no ingress for host console-openshift-console.apps.test-cluster.redhat.com in route console in namespace openshift-console\nRouteHealthDegraded: route not yet available, https://console-openshift-console.apps.test-cluster.redhat.com returns '503 Service Unavailable'" to "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.test-cluster.redhat.com returns '503 Service Unavailable'" | |
openshift-kube-controller-manager |
cert-syncer-certsynccontroller |
kube-controller-manager-master1 |
FastControllerResync |
Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:09c8eb0283a9eda5b282f04357875966a549651e120e527904a917ec862eb642" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master1 |
Created |
Created container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master1 |
Started |
Started container cluster-policy-controller | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" already present on machine | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master1 |
Created |
Created container kube-controller-manager-cert-syncer | |
(x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" |
(x2) | openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master1 |
Started |
Started container kube-controller-manager-cert-syncer | |
openshift-kube-controller-manager |
cluster-policy-controller |
kube-controller-manager-master1 |
ClusterInfrastructureStatus |
unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope | |
openshift-kube-controller-manager |
cert-recovery-controller |
cert-recovery-controller-lock |
LeaderElection |
master1_cce84795-4608-4238-9dba-dc805bcba252 became leader | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master1 |
Created |
Created container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master1_83d05b13-5fc6-481b-9403-a25090701233 became leader | |
openshift-kube-controller-manager |
kubelet |
kube-controller-manager-master1 |
Started |
Started container kube-controller-manager-recovery-controller | |
openshift-kube-controller-manager |
cluster-policy-controller |
cluster-policy-controller-lock |
LeaderElection |
master1_83d05b13-5fc6-481b-9403-a25090701233 became leader | |
openshift-kube-controller-manager |
cluster-policy-controller-namespace-security-allocation-controller |
kube-controller-manager-master1 |
FastControllerResync |
Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-controller-manager |
cert-recovery-controller |
cert-recovery-controller-lock |
LeaderElection |
master1_cce84795-4608-4238-9dba-dc805bcba252 became leader | |
openshift-kube-controller-manager |
podsecurity-admission-label-sync-controller-pod-security-admission-label-synchronization-controller-pod-security-admission-label-synchronization-controller |
kube-controller-manager-master1 |
FastControllerResync |
Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Started |
Started container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Started |
Started container wait-for-host-port | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Created |
Created container kube-scheduler | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Created |
Created container wait-for-host-port | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Started |
Started container kube-scheduler-cert-syncer | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Started |
Started container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
cert-recovery-controller |
cert-recovery-controller-lock |
LeaderElection |
master1_7368e1c1-445d-4504-a6cc-892e48266624 became leader | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Created |
Created container kube-scheduler-recovery-controller | |
openshift-kube-scheduler |
cert-syncer-certsynccontroller |
openshift-kube-scheduler-master1 |
FastControllerResync |
Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-scheduler |
cert-recovery-controller |
cert-recovery-controller-lock |
LeaderElection |
master1_7368e1c1-445d-4504-a6cc-892e48266624 became leader | |
openshift-kube-scheduler |
default-scheduler |
kube-scheduler |
LeaderElection |
master1_fc32b1d1-9b3e-44f4-b01e-406d2257441d became leader | |
openshift-kube-scheduler |
default-scheduler |
kube-scheduler |
LeaderElection |
master1_fc32b1d1-9b3e-44f4-b01e-406d2257441d became leader | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e045fad043f28570b754619999ffb356bedee81ff842c56a32b1b13588fc1651" already present on machine | |
openshift-kube-scheduler |
kubelet |
openshift-kube-scheduler-master1 |
Created |
Created container kube-scheduler-cert-syncer | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 nodes are at revision 5"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 4; 0 nodes have achieved new revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 5" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeCurrentRevisionChanged |
Updated node "master1" from revision 4 to 5 because static pod is ready | |
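The OperatorStatusChanged and NodeCurrentRevisionChanged records above show a static-pod revision rollout converging: Progressing flips back to False once master1 reaches revision 5. A small sketch of reading the same Available/Progressing/Degraded conditions straight from the clusteroperators API (group/version/plural are the standard OpenShift config API; the client setup is an assumption):

```python
# Sketch: print the conditions that OperatorStatusChanged events summarize.
# Assumes kubeconfig access to a cluster exposing config.openshift.io/v1.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

cos = custom.list_cluster_custom_object("config.openshift.io", "v1", "clusteroperators")
for co in cos["items"]:
    for cond in co.get("status", {}).get("conditions", []):
        if cond["type"] in ("Available", "Progressing", "Degraded"):
            print(co["metadata"]["name"], cond["type"], cond["status"])
```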
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master1_b9a70220-c7fd-46f2-8a19-9021614572e9 became leader | |
default |
node-controller |
master1 |
RegisteredNode |
Node master1 event: Registered Node master1 in Controller | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master1_b9a70220-c7fd-46f2-8a19-9021614572e9 became leader | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Killing |
Stopping container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Killing |
Stopping container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Killing |
Stopping container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Killing |
Stopping container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Killing |
Stopping container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master1 |
Started |
Started container startup-monitor | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master1 |
Created |
Created container startup-monitor | |
(x6) | openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
InstallerPodFailed |
Failed to create installer pod for revision 3 count 0 on node "master1": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master1": dial tcp 172.30.0.1:443: connect: connection refused |
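The repeated InstallerPodFailed above ("connection refused" against 172.30.0.1:443) lines up with the kube-apiserver containers being stopped and restarted at this point in the log; the operator simply retries until the API answers again. A rough sketch of that kind of wait loop, polling /readyz the way the earlier KubeAPIReadyz event reports it (the service IP, the disabled certificate verification, and the timeouts are illustrative assumptions only):

```python
# Sketch: poll the apiserver /readyz endpoint until it returns 200 again,
# mirroring the KubeAPIReadyz (readyz=true) event. Endpoint and TLS handling
# here are illustrative; do not disable certificate checks outside a sketch.
import ssl
import time
import urllib.request

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # illustration only

for _ in range(60):  # give up after roughly two minutes
    try:
        with urllib.request.urlopen(
            "https://172.30.0.1:443/readyz", context=ctx, timeout=2
        ) as resp:
            if resp.status == 200:
                print("readyz=true")
                break
    except OSError:
        pass
    time.sleep(2)
```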
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Started |
Started container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Created |
Created container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Started |
Started container setup | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Created |
Created container kube-apiserver | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Created |
Created container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Started |
Started container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Created |
Created container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Created |
Created container kube-apiserver-check-endpoints | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Started |
Started container kube-apiserver-cert-syncer | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Started |
Started container kube-apiserver-cert-regeneration-controller | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Created |
Created container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-master1 |
Started |
Started container kube-apiserver-insecure-readyz | |
openshift-kube-apiserver |
check-endpoint-checkendpointsstop |
master1 |
FastControllerResync |
Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-apiserver |
check-endpoint-checkendpointstimetostart |
master1 |
FastControllerResync |
Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
InstallerPodFailed |
Failed to create installer pod for revision 3 count 0 on node "master1": pods "installer-3-master1" is forbidden: User "system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator" cannot get resource "pods" in API group "" in the namespace "openshift-kube-apiserver" | |
(x10) | openshift-console |
kubelet |
console-5c44fb5754-9zfch |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.84:8443/health": dial tcp 10.128.0.84:8443: connect: connection refused |
(x11) | openshift-console |
kubelet |
console-5c44fb5754-9zfch |
ProbeError |
Readiness probe error: Get "https://10.128.0.84:8443/health": dial tcp 10.128.0.84:8443: connect: connection refused body: |
(x10) | openshift-console |
kubelet |
console-87d9d6878-bjhl4 |
Unhealthy |
Readiness probe failed: Get "https://10.128.0.85:8443/health": dial tcp 10.128.0.85:8443: connect: connection refused |
(x11) | openshift-console |
kubelet |
console-87d9d6878-bjhl4 |
ProbeError |
Readiness probe error: Get "https://10.128.0.85:8443/health": dial tcp 10.128.0.85:8443: connect: connection refused body: |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "All is well" to "OpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/informer-clusterrole.yaml\" (string): clusterroles.rbac.authorization.k8s.io \"system:openshift:openshift-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator\" cannot get resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope\nOpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:openshift-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope\nOpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/ingress-to-route-controller-clusterrole.yaml\" (string): clusterroles.rbac.authorization.k8s.io \"system:openshift:openshift-controller-manager:ingress-to-route-controller\" is forbidden: User \"system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator\" cannot get resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope\nOpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/ingress-to-route-controller-clusterrolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:openshift-controller-manager:ingress-to-route-controller\" is forbidden: User \"system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope\nOpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml\" (string): roles.rbac.authorization.k8s.io \"system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller\" is forbidden: User \"system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator\" cannot get resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-route-controller-manager\"\nOpenshiftControllerManagerStaticResourcesDegraded: " | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master1_496e9a45-3ca2-4ca4-9bd2-753a5f2c5884 became leader | |
kube-system |
kube-controller-manager |
kube-controller-manager |
LeaderElection |
master1_496e9a45-3ca2-4ca4-9bd2-753a5f2c5884 became leader | |
default |
node-controller |
master1 |
RegisteredNode |
Node master1 event: Registered Node master1 in Controller | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SuccessfulCreate |
Created job collect-profiles-27938340 | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-27938340 |
SuccessfulCreate |
Created pod: collect-profiles-27938340-swxrj | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-27938340-swxrj |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f1d71ba084c63e2d6b3140b9cbada2b50bb6589a39a526dedb466945d284c73e" already present on machine | |
openshift-operator-lifecycle-manager |
multus |
collect-profiles-27938340-swxrj |
AddedInterface |
Add eth0 [10.128.0.87/23] from ovn-kubernetes | |
openshift-ingress-operator |
kubelet |
ingress-operator-6dbf96bf9c-6rdr8 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6275a171f6d5a523627963860415ed0e43f1728f2dd897c49412600bf64bc9c3" already present on machine | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-27938340-swxrj |
Started |
Started container collect-profiles | |
openshift-operator-lifecycle-manager |
kubelet |
collect-profiles-27938340-swxrj |
Created |
Created container collect-profiles | |
openshift-ingress-operator |
kubelet |
ingress-operator-6dbf96bf9c-6rdr8 |
Started |
Started container ingress-operator | |
openshift-ingress-operator |
kubelet |
ingress-operator-6dbf96bf9c-6rdr8 |
Created |
Created container ingress-operator | |
openshift-kube-apiserver |
kubelet |
kube-apiserver-startup-monitor-master1 |
Killing |
Stopping container startup-monitor | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-installer-controller |
openshift-kube-scheduler-operator |
NodeCurrentRevisionChanged |
Updated node "master1" from revision 4 to 6 because static pod is ready | |
openshift-kube-scheduler-operator |
openshift-cluster-kube-scheduler-operator-status-controller-statussyncer_kube-scheduler |
openshift-kube-scheduler-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-scheduler changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 nodes are at revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 4; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 6" | |
openshift-operator-lifecycle-manager |
cronjob-controller |
collect-profiles |
SawCompletedJob |
Saw completed job: collect-profiles-27938340, status: Complete | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"\nKubeControllerManagerStaticResourcesDegraded: " | |
openshift-operator-lifecycle-manager |
job-controller |
collect-profiles-27938340 |
Completed |
Job completed | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"\nKubeControllerManagerStaticResourcesDegraded: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-apiserver\"\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml\" (string): customresourcedefinitions.apiextensions.k8s.io \"apirequestcounts.apiserver.openshift.io\" is forbidden: User \"system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator\" cannot get resource \"customresourcedefinitions\" in API group \"apiextensions.k8s.io\" at the cluster scope\nKubeAPIServerStaticResourcesDegraded: " | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-sa.yaml\" (string): serviceaccounts \"localhost-recovery-client\" is forbidden: User \"system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator\" cannot get resource \"serviceaccounts\" in API group \"\" in the namespace \"openshift-kube-apiserver\"\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/apiserver.openshift.io_apirequestcount.yaml\" (string): customresourcedefinitions.apiextensions.k8s.io \"apirequestcounts.apiserver.openshift.io\" is forbidden: User \"system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator\" cannot get resource \"customresourcedefinitions\" in API group \"apiextensions.k8s.io\" at the cluster scope\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" | |
openshift-authentication-operator |
cluster-authentication-operator |
cluster-authentication-operator-lock |
LeaderElection |
authentication-operator-68df59f464-ffd6s_3b24e326-6bdc-4937-903c-a882d10e5151 became leader | |
openshift-authentication-operator |
oauth-apiserver-audit-policy-controller-auditpolicycontroller |
authentication-operator |
FastControllerResync |
Controller "auditPolicyController" resync interval is set to 10s which might lead to client request throttling | |
openshift-authentication-operator |
oauth-apiserver-oauthapiservercontrollerworkloadcontroller |
authentication-operator |
FastControllerResync |
Controller "OAuthAPIServerControllerWorkloadController" resync interval is set to 0s which might lead to client request throttling | |
openshift-authentication-operator |
oauth-apiserver-revisioncontroller |
authentication-operator |
FastControllerResync |
Controller "RevisionController" resync interval is set to 0s which might lead to client request throttling | |
openshift-authentication-operator |
oauth-apiserver-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver |
authentication-operator |
FastControllerResync |
Controller "APIServiceController_openshift-apiserver" resync interval is set to 10s which might lead to client request throttling | |
openshift-authentication-operator |
cluster-authentication-operator-loggingsyncer |
authentication-operator |
FastControllerResync |
Controller "LoggingSyncer" resync interval is set to 0s which might lead to client request throttling | |
openshift-authentication-operator |
cluster-authentication-operator-unsupportedconfigoverridescontroller |
authentication-operator |
FastControllerResync |
Controller "UnsupportedConfigOverridesController" resync interval is set to 0s which might lead to client request throttling | |
openshift-authentication-operator |
cluster-authentication-operator |
cluster-authentication-operator-lock |
LeaderElection |
authentication-operator-68df59f464-ffd6s_3b24e326-6bdc-4937-903c-a882d10e5151 became leader | |
openshift-authentication-operator |
cluster-authentication-operator-oauthserverworkloadcontroller |
authentication-operator |
FastControllerResync |
Controller "OAuthServerWorkloadController" resync interval is set to 0s which might lead to client request throttling | |
openshift-authentication-operator |
oauth-apiserver-secret-revision-prune-controller-secretrevisionprunecontroller |
authentication-operator |
FastControllerResync |
Controller "SecretRevisionPruneController" resync interval is set to 0s which might lead to client request throttling | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-controller-manager-operator |
openshift-controller-manager-operator-status-controller-statussyncer_openshift-controller-manager |
openshift-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/openshift-controller-manager changed: Degraded message changed from "OpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/informer-clusterrole.yaml\" (string): clusterroles.rbac.authorization.k8s.io \"system:openshift:openshift-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator\" cannot get resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope\nOpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/informer-clusterrolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:openshift-controller-manager\" is forbidden: User \"system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope\nOpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/ingress-to-route-controller-clusterrole.yaml\" (string): clusterroles.rbac.authorization.k8s.io \"system:openshift:openshift-controller-manager:ingress-to-route-controller\" is forbidden: User \"system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator\" cannot get resource \"clusterroles\" in API group \"rbac.authorization.k8s.io\" at the cluster scope\nOpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/ingress-to-route-controller-clusterrolebinding.yaml\" (string): clusterrolebindings.rbac.authorization.k8s.io \"system:openshift:openshift-controller-manager:ingress-to-route-controller\" is forbidden: User \"system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator\" cannot get resource \"clusterrolebindings\" in API group \"rbac.authorization.k8s.io\" at the cluster scope\nOpenshiftControllerManagerStaticResourcesDegraded: \"v3.11.0/openshift-controller-manager/leader-ingress-to-route-controller-role.yaml\" (string): roles.rbac.authorization.k8s.io \"system:openshift:openshift-controller-manager:leader-locking-ingress-to-route-controller\" is forbidden: User \"system:serviceaccount:openshift-controller-manager-operator:openshift-controller-manager-operator\" cannot get resource \"roles\" in API group \"rbac.authorization.k8s.io\" in the namespace \"openshift-route-controller-manager\"\nOpenshiftControllerManagerStaticResourcesDegraded: " to "All is well" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Available message changed from "APIServicesAvailable: apiservices.apiregistration.k8s.io/v1.oauth.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/oauth.openshift.io/v1: 401\nAPIServicesAvailable: apiservices.apiregistration.k8s.io/v1.user.openshift.io: not available: failing or missing response from https://10.128.0.26:8443/apis/user.openshift.io/v1: bad status from https://10.128.0.26:8443/apis/user.openshift.io/v1: 401\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to appear: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nCustomRouteControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nCustomRouteControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-kube-apiserver |
cert-regeneration-controller |
cert-regeneration-controller-lock |
LeaderElection |
master1_57c43ca0-b206-4b81-9979-2c938b7d82f0 became leader | |
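The cert-regeneration-controller entry above records a leader-election acquisition against the cert-regeneration-controller-lock object. A minimal sketch for checking the current holder, assuming the lock is a ConfigMap-based lock (with a ConfigMap lock, client-go stores the LeaderElectionRecord JSON in the control-plane.alpha.kubernetes.io/leader annotation) and a kubeconfig in the default location; the namespace and object name are taken from the event itself:

```go
// Sketch: print the LeaderElectionRecord of a ConfigMap-based leader lock.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default ~/.kube/config location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	cm, err := cs.CoreV1().ConfigMaps("openshift-kube-apiserver").
		Get(context.TODO(), "cert-regeneration-controller-lock", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// The record carries holderIdentity (master1_57c43ca0-... above),
	// leaseDurationSeconds, acquireTime, and renewTime.
	fmt.Println(cm.Annotations["control-plane.alpha.kubernetes.io/leader"])
}
```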
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found" to "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigWriteError |
Failed to write observed config: Operation cannot be fulfilled on authentications.operator.openshift.io "cluster": the object has been modified; please apply your changes to the latest version and try again | |
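The error above is Kubernetes' optimistic-concurrency check: the operator tried to write with a stale resourceVersion and received a 409 Conflict, so it has to re-read and retry. A minimal sketch of that pattern using client-go's retry helper, shown against an ordinary ConfigMap for brevity (the target object and key mutated here are illustrative, not the operator's actual write to authentications.operator.openshift.io/cluster):

```go
// Sketch: re-read-then-retry on update conflicts.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	cms := cs.CoreV1().ConfigMaps("openshift-authentication") // example target

	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so the resourceVersion is fresh.
		cm, err := cms.Get(context.TODO(), "v4-0-config-system-metadata", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if cm.Data == nil {
			cm.Data = map[string]string{}
		}
		cm.Data["example"] = "value" // hypothetical mutation
		_, err = cms.Update(context.TODO(), cm, metav1.UpdateOptions{})
		return err // conflicts are retried; other errors abort
	})
	if err != nil {
		panic(err)
	}
}
```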
openshift-authentication-operator |
cluster-authentication-operator-metadata-controller-metadatacontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/v4-0-config-system-metadata -n openshift-authentication because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthClientsControllerDegraded: route.route.openshift.io \"oauth-openshift\" not found\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
(x2) | openshift-kube-apiserver-operator |
kube-apiserver-operator-config-observer-configobserver |
kube-apiserver-operator |
ObservedConfigChanged |
Writing updated observed config: map[string]interface{}{ "admission": map[string]interface{}{"pluginConfig": map[string]interface{}{"network.openshift.io/ExternalIPRanger": map[string]interface{}{"configuration": map[string]interface{}{"allowIngressIP": bool(false), "apiVersion": string("network.openshift.io/v1"), "kind": string("ExternalIPRangerAdmissionConfig")}}, "network.openshift.io/RestrictedEndpointsAdmission": map[string]interface{}{"configuration": map[string]interface{}{"apiVersion": string("network.openshift.io/v1"), "kind": string("RestrictedEndpointsAdmissionConfig"), "restrictedCIDRs": []interface{}{string("10.128.0.0/14"), string("172.30.0.0/16")}}}}}, "apiServerArguments": map[string]interface{}{"api-audiences": []interface{}{string("https://kubernetes.default.svc")}, "authentication-token-webhook-config-file": []interface{}{string("/etc/kubernetes/static-pod-resources/secrets/webhook-authenticat"...)}, "authentication-token-webhook-version": []interface{}{string("v1")}, "etcd-servers": []interface{}{string("https://192.168.126.10:2379"), string("https://localhost:2379")}, ...}, + "authConfig": map[string]interface{}{ + "oauthMetadataFile": string("/etc/kubernetes/static-pod-resources/configmaps/oauth-metadata/oauthMetadata"), + }, "corsAllowedOrigins": []interface{}{string(`//127\.0\.0\.1(:|$)`), string("//localhost(:|$)")}, "gracefulTerminationDuration": string("15"), ... // 2 identical entries } |
openshift-authentication-operator |
cluster-authentication-operator-oauthserverworkloadcontroller |
authentication-operator |
DeploymentCreated |
Created Deployment.apps/oauth-openshift -n openshift-authentication because it was missing | |
openshift-authentication |
replicaset-controller |
oauth-openshift-76f8b8bcb7 |
SuccessfulCreate |
Created pod: oauth-openshift-76f8b8bcb7-vcnkl | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-76f8b8bcb7 to 1 | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: waiting for the oauth-openshift route to contain an admitted ingress: no admitted ingress for route oauth-openshift in namespace openshift-authentication\nOAuthServerDeploymentDegraded: \nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing changed from False to True ("OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1."),Available message changed from "OAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
(x2) | openshift-authentication |
kubelet |
oauth-openshift-76f8b8bcb7-vcnkl |
FailedMount |
MountVolume.SetUp failed for volume "v4-0-config-system-session" : secret "v4-0-config-system-session" not found |
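This FailedMount is an ordering race rather than a persistent fault: the pod referenced a secret the operator had not created yet (the SecretCreated event for v4-0-config-system-session appears further down), and kubelet keeps retrying the mount until the source exists. A minimal sketch that waits for the secret, assuming a default kubeconfig and permission to read the namespace:

```go
// Sketch: poll until the secret a volume references exists.
package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	for {
		_, err := cs.CoreV1().Secrets("openshift-authentication").
			Get(context.TODO(), "v4-0-config-system-session", metav1.GetOptions{})
		if err == nil {
			fmt.Println("secret exists; kubelet's next mount retry should succeed")
			return
		}
		if !apierrors.IsNotFound(err) {
			panic(err) // a real error, not just "not created yet"
		}
		time.Sleep(2 * time.Second)
	}
}
```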
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 4 triggered by "configmap/config has changed" | |
openshift-authentication-operator |
cluster-authentication-operator-resource-sync-controller-resourcesynccontroller |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/oauth-openshift -n openshift-config-managed because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-target-config-controller-targetconfigcontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/config -n openshift-kube-apiserver: caused by changes in data.config.yaml |
openshift-authentication-operator |
cluster-authentication-operator-payload-config-controller-payloadconfig |
authentication-operator |
SecretCreated |
Created Secret/v4-0-config-system-session -n openshift-authentication because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/revision-status-4 -n openshift-kube-apiserver because it was missing | |
(x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObserveConsoleURL |
assetPublicURL changed from to https://console-openshift-console.apps.test-cluster.redhat.com |
(x2) | openshift-authentication-operator |
cluster-authentication-operator-config-observer-configobserver |
authentication-operator |
ObservedConfigChanged |
Writing updated section ("oauthServer") of observed config: "\u00a0\u00a0map[string]interface{}{\n\u00a0\u00a0\t\"corsAllowedOrigins\": []interface{}{string(`//127\\.0\\.0\\.1(:|$)`), string(\"//localhost(:|$)\")},\n\u00a0\u00a0\t\"oauthConfig\": map[string]interface{}{\n-\u00a0\t\t\"assetPublicURL\": string(\"\"),\n+\u00a0\t\t\"assetPublicURL\": string(\"https://console-openshift-console.apps.test-cluster.redhat.com\"),\n\u00a0\u00a0\t\t\"loginURL\": string(\"https://api.test-cluster.redhat.com:6443\"),\n\u00a0\u00a0\t\t\"templates\": map[string]interface{}{\"error\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"login\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...), \"providerSelection\": string(\"/var/config/system/secrets/v4-0-config-system-ocp-branding-templ\"...)},\n\u00a0\u00a0\t\t\"tokenConfig\": map[string]interface{}{\"accessTokenMaxAgeSeconds\": float64(86400), \"authorizeTokenMaxAgeSeconds\": float64(300)},\n\u00a0\u00a0\t},\n\u00a0\u00a0\t\"serverArguments\": map[string]interface{}{\"audit-log-format\": []interface{}{string(\"json\")}, \"audit-log-maxbackup\": []interface{}{string(\"10\")}, \"audit-log-maxsize\": []interface{}{string(\"100\")}, \"audit-log-path\": []interface{}{string(\"/var/log/oauth-server/audit.log\")}, ...},\n\u00a0\u00a0\t\"servingInfo\": map[string]interface{}{\"cipherSuites\": []interface{}{string(\"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\"), string(\"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\"), string(\"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\"), ...}, \"minTLSVersion\": string(\"VersionTLS12\"), \"namedCertificates\": []interface{}{map[string]interface{}{\"certFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"keyFile\": string(\"/var/config/system/secrets/v4-0-config-system-router-certs/apps.\"...), \"names\": []interface{}{string(\"*.apps.test-cluster.redhat.com\")}}}},\n\u00a0\u00a0\t\"volumesToMount\": map[string]interface{}{\"identityProviders\": string(\"{}\")},\n\u00a0\u00a0}\n" |
openshift-kube-apiserver-operator |
kube-apiserver-operator-resource-sync-controller-resourcesynccontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-pod-4 -n openshift-kube-apiserver because it was missing | |
(x4) | openshift-authentication |
kubelet |
oauth-openshift-76f8b8bcb7-vcnkl |
FailedMount |
MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" : configmap "v4-0-config-system-cliconfig" not found |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerRouteEndpointAccessibleControllerDegraded: [Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF, Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again]\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/config-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 nodes are at revision 3"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 2; 0 nodes have achieved new revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 3" | |
openshift-authentication |
replicaset-controller |
oauth-openshift-689f594445 |
SuccessfulCreate |
Created pod: oauth-openshift-689f594445-thjj4 | |
openshift-authentication |
replicaset-controller |
oauth-openshift-76f8b8bcb7 |
SuccessfulDelete |
Deleted pod: oauth-openshift-76f8b8bcb7-vcnkl | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-689f594445 to 1 from 0 | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled down replica set oauth-openshift-76f8b8bcb7 to 0 from 1 | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-cert-syncer-kubeconfig-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
cluster-authentication-operator-payload-config-controller-payloadconfig |
authentication-operator |
ConfigMapCreated |
Created ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerRouteEndpointAccessibleControllerDegraded: [Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF, Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again]\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/oauth-metadata-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication |
multus |
oauth-openshift-76f8b8bcb7-vcnkl |
AddedInterface |
Add eth0 [10.128.0.88/23] from ovn-kubernetes | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/bound-sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication |
kubelet |
oauth-openshift-76f8b8bcb7-vcnkl |
Pulling |
Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d756df4dce6ace35ff2aecf459affb7cc1bef2aa08004d62575ec09f6c76c86" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/etcd-serving-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1." to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: failed to get oauth metadata from openshift-config-managed/oauth-openshift ConfigMap: configmap \"oauth-openshift\" not found (check authentication operator, it is supposed to create this)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
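WellKnownAvailable clears only once the kube-apiserver actually serves the metadata document at the URL named in the message. A minimal probe sketch against that endpoint (TLS verification is skipped purely for illustration; a real check should trust the cluster CA):

```go
// Sketch: probe the kube-apiserver's OAuth well-known endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// Assumption: skipping verification only to keep the sketch short.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.126.10:6443/.well-known/oauth-authorization-server")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)

	// A 200 with an RFC 8414-style JSON document ("issuer",
	// "authorization_endpoint", ...) means the endpoint is served; a 404
	// suggests the revision carrying oauth-metadata has not rolled out yet.
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```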
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-server-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kubelet-serving-ca-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerConfigObservationDegraded: error writing updated observed config: Operation cannot be fulfilled on authentications.operator.openshift.io \"cluster\": the object has been modified; please apply your changes to the latest version and try again\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "AuditPolicyDegraded: Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-oauth-apiserver/configmaps/audit\": stream error: stream ID 3525; INTERNAL_ERROR; received from peer\nIngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-authentication |
kubelet |
oauth-openshift-76f8b8bcb7-vcnkl |
Pulled |
Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d756df4dce6ace35ff2aecf459affb7cc1bef2aa08004d62575ec09f6c76c86" in 4.836028568s | |
openshift-authentication |
replicaset-controller |
oauth-openshift-689f594445 |
SuccessfulDelete |
Deleted pod: oauth-openshift-689f594445-thjj4 | |
openshift-authentication |
kubelet |
oauth-openshift-76f8b8bcb7-vcnkl |
Started |
Started container oauth-openshift | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-79959d769 to 1 from 0 | |
openshift-authentication |
replicaset-controller |
oauth-openshift-79959d769 |
SuccessfulCreate |
Created pod: oauth-openshift-79959d769-f6b52 | |
openshift-authentication-operator |
cluster-authentication-operator-payload-config-controller-payloadconfig |
authentication-operator |
ConfigMapUpdated |
Updated ConfigMap/v4-0-config-system-cliconfig -n openshift-authentication: caused by changes in data.v4-0-config-system-cliconfig |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-76f8b8bcb7-vcnkl pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 0, desired generation is 1.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/sa-token-signing-certs-4 -n openshift-kube-apiserver because it was missing | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled down replica set oauth-openshift-689f594445 to 0 from 1 | |
openshift-authentication |
kubelet |
oauth-openshift-76f8b8bcb7-vcnkl |
Created |
Created container oauth-openshift | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapCreated |
Created ConfigMap/kube-apiserver-audit-policies-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/etcd-client-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-serving-certkey-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/localhost-recovery-client-token-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionCreate |
Revision 3 created because configmap/config has changed | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
RevisionTriggered |
new revision 4 triggered by "configmap/config has changed,configmap/oauth-metadata has changed" | |
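The revision controller snapshots each input ConfigMap and Secret into a numbered copy (config-4, oauth-metadata-4, and so on, as the ConfigMapCreated events show) and triggers a new revision when live content diverges from the latest snapshot. A rough sketch of that comparison for the config ConfigMap, assuming the revision-suffix naming convention seen above:

```go
// Sketch: diff a live configmap against its latest revisioned snapshot.
package main

import (
	"context"
	"fmt"
	"reflect"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cms := kubernetes.NewForConfigOrDie(cfg).CoreV1().ConfigMaps("openshift-kube-apiserver")

	live, err := cms.Get(context.TODO(), "config", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	snap, err := cms.Get(context.TODO(), "config-4", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if reflect.DeepEqual(live.Data, snap.Data) {
		fmt.Println("revision 4 matches the live config; no new revision pending")
	} else {
		fmt.Println("config drifted from revision 4; expect another RevisionTriggered")
	}
}
```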
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
SecretCreated |
Created Secret/webhook-authenticator-4 -n openshift-kube-apiserver because it was missing | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-revisioncontroller |
kube-apiserver-operator |
ConfigMapUpdated |
Updated ConfigMap/revision-status-4 -n openshift-kube-apiserver: caused by changes in data.reason |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 4" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nRevisionControllerDegraded: conflicting latestAvailableRevision 4" to "NodeControllerDegraded: All master nodes are ready" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
NodeTargetRevisionChanged |
Updating node "master1" from revision 3 to 4 because node master1 with revision 3 is the oldest | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-status-controller-statussyncer_kube-apiserver |
kube-apiserver-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 nodes are at revision 3; 0 nodes have achieved new revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 3" to "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 3; 0 nodes have achieved new revision 4" | |
openshift-kube-apiserver-operator |
kube-apiserver-operator-installer-controller |
kube-apiserver-operator |
PodCreated |
Created Pod/installer-4-master1 -n openshift-kube-apiserver because it was missing | |
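Each OperatorStatusChanged event in this log is the event-stream echo of an edit to the corresponding ClusterOperator's .status.conditions. A minimal sketch that reads those conditions directly, using the dynamic client since ClusterOperator is an OpenShift-specific type:

```go
// Sketch: dump a ClusterOperator's status conditions via the dynamic client.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dyn := dynamic.NewForConfigOrDie(cfg)
	gvr := schema.GroupVersionResource{
		Group: "config.openshift.io", Version: "v1", Resource: "clusteroperators",
	}

	co, err := dyn.Resource(gvr).Get(context.TODO(), "kube-apiserver", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	conds, _, _ := unstructured.NestedSlice(co.Object, "status", "conditions")
	for _, c := range conds {
		m := c.(map[string]interface{})
		// Available/Progressing/Degraded plus the message text seen in the events.
		fmt.Printf("%v=%v: %v\n", m["type"], m["status"], m["message"])
	}
}
```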
openshift-kube-apiserver |
kubelet |
installer-4-master1 |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine | |
openshift-kube-apiserver |
kubelet |
installer-4-master1 |
Created |
Created container installer | |
openshift-kube-apiserver |
kubelet |
installer-4-master1 |
Started |
Started container installer | |
openshift-kube-apiserver |
multus |
installer-4-master1 |
AddedInterface |
Add eth0 [10.128.0.89/23] from ovn-kubernetes | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled down replica set oauth-openshift-79959d769 to 0 from 1 | |
openshift-authentication |
replicaset-controller |
oauth-openshift-b5887fb6f |
SuccessfulCreate |
Created pod: oauth-openshift-b5887fb6f-fg5md | |
openshift-authentication |
deployment-controller |
oauth-openshift |
ScalingReplicaSet |
Scaled up replica set oauth-openshift-b5887fb6f to 1 from 0 | |
openshift-authentication |
replicaset-controller |
oauth-openshift-79959d769 |
SuccessfulDelete |
Deleted pod: oauth-openshift-79959d769-f6b52 | |
(x3) | openshift-authentication-operator |
cluster-authentication-operator-oauthserverworkloadcontroller |
authentication-operator |
DeploymentUpdated |
Updated Deployment.apps/oauth-openshift -n openshift-authentication because it changed |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp: lookup thanos-querier.openshift-monitoring.svc on 172.30.0.10:53: no such host" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication (container is waiting in pending oauth-openshift-76f8b8bcb7-vcnkl pod)\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready",Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 2, desired generation is 3.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-multus |
daemonset-controller |
cni-sysctl-allowlist-ds |
SuccessfulCreate |
Created pod: cni-sysctl-allowlist-ds-zvgzt | |
openshift-authentication |
kubelet |
oauth-openshift-b5887fb6f-fg5md |
Created |
Created container oauth-openshift | |
openshift-authentication |
kubelet |
oauth-openshift-b5887fb6f-fg5md |
Started |
Started container oauth-openshift | |
openshift-authentication |
kubelet |
oauth-openshift-b5887fb6f-fg5md |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d756df4dce6ace35ff2aecf459affb7cc1bef2aa08004d62575ec09f6c76c86" already present on machine | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-zvgzt |
Started |
Started container kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-zvgzt |
Created |
Created container kube-multus-additional-cni-plugins | |
openshift-multus |
kubelet |
cni-sysctl-allowlist-ds-zvgzt |
Pulled |
Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" already present on machine | |
openshift-multus |
multus |
cni-sysctl-allowlist-ds-zvgzt |
AddedInterface |
Add eth0 [10.128.0.90/23] from ovn-kubernetes | |
openshift-authentication |
multus |
oauth-openshift-b5887fb6f-fg5md |
AddedInterface |
Add eth0 [10.128.0.91/23] from ovn-kubernetes | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
(x2) | openshift-authentication |
kubelet |
oauth-openshift-76f8b8bcb7-vcnkl |
Killing |
Stopping container oauth-openshift |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerAvailable: endpoints \"oauth-openshift\" not found\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerAvailable: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-authentication-operator |
oauth-apiserver-status-controller-statussyncer_authentication |
authentication-operator |
OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: No subsets found for the endpoints of oauth-server\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused\nOAuthServerServiceEndpointsEndpointAccessibleControllerDegraded: oauth service endpoints are not ready" | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nOAuthServerServiceEndpointAccessibleControllerDegraded: Get \"https://172.30.68.38:443/healthz\": dial tcp 172.30.68.38:443: connect: connection refused" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF" | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nOAuthServerRouteEndpointAccessibleControllerAvailable: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded changed from True to False ("OAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()") | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()\nOAuthServerRouteEndpointAccessibleControllerDegraded: Get \"https://oauth-openshift.apps.test-cluster.redhat.com/healthz\": EOF" to "IngressStateEndpointsDegraded: All 1 endpoints for oauth-server are reporting 'not ready'\nOAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()" | |
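The OperatorStatusChanged entries above are condition diffs: each time the authentication operator's status syncer rewrites a Degraded/Progressing/Available message on the ClusterOperator object, it records the old and new text as an event. When diffing event text gets unwieldy, the current conditions can be read directly from the ClusterOperator itself. A minimal sketch using the official Kubernetes Python client, assuming `pip install kubernetes` and a kubeconfig that can reach the cluster:

```python
# Sketch: print the live conditions of the authentication ClusterOperator.
# Assumes the `kubernetes` package is installed and a working kubeconfig.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# ClusterOperator is a cluster-scoped config.openshift.io/v1 resource.
co = api.get_cluster_custom_object(
    "config.openshift.io", "v1", "clusteroperators", "authentication"
)
for cond in co["status"]["conditions"]:
    print(f'{cond["type"]:<12} {cond["status"]:<6} {cond.get("message", "")}')
```

The same pattern works for any operator named in this log (etcd, kube-apiserver, openshift-apiserver, and so on) by swapping the resource name.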
openshift-multus | kubelet | cni-sysctl-allowlist-ds-zvgzt | Killing | Stopping container kube-multus-additional-cni-plugins |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Killing | Stopping container kube-apiserver-cert-syncer |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Killing | Stopping container kube-apiserver |
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine |
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master1 | Created | Created container startup-monitor |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Killing | Stopping container kube-apiserver-cert-regeneration-controller |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Killing | Stopping container kube-apiserver-insecure-readyz |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Killing | Stopping container kube-apiserver-check-endpoints |
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master1 | Started | Started container startup-monitor |
openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | OpenShiftAPICheckFailed | "route.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/route.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
(x2) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | OpenShiftAPICheckFailed | "security.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/security.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
(x2) | openshift-apiserver-operator | openshift-apiserver-operator-apiservice-openshift-apiserver-controller-apiservicecontroller_openshift-apiserver | openshift-apiserver-operator | OpenShiftAPICheckFailed | "template.openshift.io.v1" failed with an attempt failed with statusCode = 0, err = Get "https://172.30.0.1:443/apis/template.openshift.io/v1": dial tcp 172.30.0.1:443: connect: connection refused |
(x7) | openshift-kube-apiserver-operator | kube-apiserver-operator-installer-controller | kube-apiserver-operator | InstallerPodFailed | Failed to create installer pod for revision 4 count 0 on node "master1": Get "https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-4-master1": dial tcp 172.30.0.1:443: connect: connection refused |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Started | Started container setup |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Created | Created container setup |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Created | Created container kube-apiserver-cert-syncer |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Created | Created container kube-apiserver-cert-regeneration-controller |
(x33) | openshift-etcd-operator | openshift-cluster-etcd-operator-script-controller-scriptcontroller | etcd-operator | ScriptControllerErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
(x18) | openshift-etcd-operator | openshift-cluster-etcd-operator-etcd-endpoints-controller-etcdendpointscontroller | etcd-operator | EtcdEndpointsErrorUpdatingStatus | Put "https://172.30.0.1:443/apis/operator.openshift.io/v1/etcds/cluster/status": dial tcp 172.30.0.1:443: connect: connection refused |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Created | Created container kube-apiserver |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Started | Started container kube-apiserver |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Started | Started container kube-apiserver-cert-syncer |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine |
openshift-kube-apiserver | cert-syncer-certsynccontroller | kube-apiserver-master1 | FastControllerResync | Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Started | Started container kube-apiserver-cert-regeneration-controller |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Started | Started container kube-apiserver-check-endpoints |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Created | Created container kube-apiserver-check-endpoints |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Started | Started container kube-apiserver-insecure-readyz |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Created | Created container kube-apiserver-insecure-readyz |
openshift-kube-apiserver | kubelet | kube-apiserver-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9ed61d19216d71cc5692c22402961b0f865ed8629f5d64f1687aa47af601c018" already present on machine |
openshift-kube-apiserver | check-endpoint-checkendpointstimetostart | master1 | FastControllerResync | Controller "CheckEndpointsTimeToStart" resync interval is set to 0s which might lead to client request throttling |
openshift-kube-apiserver | check-endpoint-checkendpointsstop | master1 | FastControllerResync | Controller "CheckEndpointsStop" resync interval is set to 0s which might lead to client request throttling |
openshift-kube-apiserver | cert-regeneration-controller | cert-regeneration-controller-lock | LeaderElection | master1_af0236a5-250c-4343-a763-533d5e0fe5d1 became leader |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: status.versions changed from [{"operator" "4.12.2"} {"oauth-apiserver" "4.12.2"}] to [{"operator" "4.12.2"} {"oauth-apiserver" "4.12.2"} {"oauth-openshift" "4.12.2_openshift"}] | |
(x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | BackOff | Back-off restarting failed container |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: 
\"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " | |
(x2) | openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorVersionChanged | clusteroperator/authentication version "oauth-openshift" changed from "" to "4.12.2_openshift" |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "OAuthServerDeploymentDegraded: 1 of 1 requested instances are unavailable for oauth-openshift.openshift-authentication ()" to "All is well",Progressing message changed from "OAuthServerDeploymentProgressing: deployment/oauth-openshift.openshift-authentication: observed generation is 3, desired generation is 4.\nWellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownReadyControllerProgressing: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)",Available message changed from "OAuthServerDeploymentAvailable: no oauth-openshift.openshift-authentication pods available on any node.\nWellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Progressing changed from True to False ("AuthenticatorCertKeyProgressing: All is well"),Available changed from False to True ("All is well") | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Available message changed from "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server is not yet served and authentication operator keeps waiting (check kube-apiserver operator, and check that instances roll out successfully, which can take several minutes per instance)" to "WellKnownAvailable: The well-known endpoint is not yet available: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server replied with unexpected status: 403 Forbidden (check kube-apiserver logs if this error persists)" | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "All is well" to "WellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server replied with unexpected status: 403 Forbidden (check kube-apiserver logs if this error persists)" | |
openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd\": dial tcp 172.30.0.1:443: connect: connection refused\nEtcdStaticResourcesDegraded: \nEtcdMembersDegraded: No unhealthy members found" | |
openshift-authentication-operator | oauth-apiserver-status-controller-statussyncer_authentication | authentication-operator | OperatorStatusChanged |
Status for clusteroperator/authentication changed: Degraded message changed from "WellKnownReadyControllerDegraded: kube-apiserver oauth endpoint https://192.168.126.10:6443/.well-known/oauth-authorization-server replied with unexpected status: 403 Forbidden (check kube-apiserver logs if this error persists)" to "All is well" | |
openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available changed from True to False ("APIServicesAvailable: Get \"https://172.30.0.1:443/apis/apiregistration.k8s.io/v1/apiservices/v1.apps.openshift.io\": dial tcp 172.30.0.1:443: connect: connection refused") | |
(x2) | openshift-apiserver-operator | openshift-apiserver-operator-status-controller-statussyncer_openshift-apiserver | openshift-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/openshift-apiserver changed: Available changed from False to True ("All is well") |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:leader-locking-kube-controller-manager\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/roles/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/leader-election-cluster-policy-controller-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-controller-manager/rolebindings/system:openshift:leader-election-lock-cluster-policy-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-security-allocation-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:namespace-security-allocation-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/podsecurity-admission-label-syncer-controller-clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:podsecurity-admission-label-syncer-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/namespace-openshift-infra.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/services/kube-controller-manager\": dial tcp 172.30.0.1:443: 
connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/kube-controller-manager-sa\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/recycler-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-infra/serviceaccounts/pv-recycler-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-controller-manager-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/localhost-recovery-sa.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/localhost-recovery-client\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/csr_approver_clusterrolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:cluster-csr-approver-controller\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: \"assets/kube-controller-manager/gce/cloud-provider-role.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:kube-controller-manager:gce-cloud-provider\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeControllerManagerStaticResourcesDegraded: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused\nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master1_openshift-kube-controller-manager(87652a2097f87e4ffa50543ef432707a)" | |
openshift-etcd-operator | openshift-cluster-etcd-operator-status-controller-statussyncer_etcd | etcd-operator | OperatorStatusChanged |
Status for clusteroperator/etcd changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nEtcdStaticResourcesDegraded: \"etcd/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-etcd\": dial tcp 172.30.0.1:443: connect: connection refused\nEtcdStaticResourcesDegraded: \nEtcdMembersDegraded: No unhealthy members found" to "NodeControllerDegraded: All master nodes are ready\nEtcdMembersDegraded: No unhealthy members found" | |
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users\": dial tcp 172.30.0.1:443: connect: connection 
refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: " |
(x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Started | Started container kube-controller-manager |
(x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine |
(x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Created | Created container kube-controller-manager |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused\nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager\" is waiting: CrashLoopBackOff: back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-master1_openshift-kube-controller-manager(87652a2097f87e4ffa50543ef432707a)" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused" | |
(x2) | openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/ns.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/svc.yaml\" (string): Get \"https://172.30.0.1:443/api/v1/namespaces/openshift-kube-apiserver/services/apiserver\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrole-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterroles/system:openshift:controller:check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-auth-delegator.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-auth-delegator\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-node-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-node-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-clusterrolebinding-crd-reader.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:controller:kube-apiserver-check-endpoints-crd-reader\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding-kube-system.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:openshift:controller:kube-apiserver-check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/check-endpoints-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/openshift-kube-apiserver/rolebindings/system:openshift:controller:check-endpoints\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: \"assets/kube-apiserver/delegated-incluster-authentication-rolebinding.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/authentication-reader-for-authenticated-users\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: 
\"assets/kube-apiserver/localhost-recovery-client-crb.yaml\" (string): Get \"https://172.30.0.1:443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:operator:kube-apiserver-recovery\": dial tcp 172.30.0.1:443: connect: connection refused\nKubeAPIServerStaticResourcesDegraded: " to "NodeControllerDegraded: All master nodes are ready" |
openshift-kube-apiserver | kubelet | kube-apiserver-startup-monitor-master1 | Killing | Stopping container startup-monitor |
(x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Unhealthy | Startup probe failed: Get "https://192.168.126.10:10257/healthz": dial tcp 192.168.126.10:10257: connect: connection refused |
(x2) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | ProbeError | Startup probe error: Get "https://192.168.126.10:10257/healthz": dial tcp 192.168.126.10:10257: connect: connection refused body: |
openshift-kube-apiserver-operator | kube-apiserver-operator-status-controller-statussyncer_kube-apiserver | kube-apiserver-operator | OperatorStatusChanged |
Status for clusteroperator/kube-apiserver changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 nodes are at revision 4"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 3; 0 nodes have achieved new revision 4" to "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 4" | |
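This event marks the end of the kube-apiserver static pod rollout: the startup monitor has been stopped and the node moved from revision 3 to revision 4. Per-node revision state lives on the operator resource itself, so it can be checked without replaying events. A hedged sketch, assuming the same client setup as the earlier snippet and the conventional singleton resource name `cluster`:

```python
# Sketch: report per-node static pod revisions for the kube-apiserver.
# Assumes `pip install kubernetes` and a kubeconfig that can reach the cluster.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# KubeAPIServer is a cluster-scoped operator.openshift.io/v1 resource.
kas = api.get_cluster_custom_object(
    "operator.openshift.io", "v1", "kubeapiservers", "cluster"
)
status = kas["status"]
print("latestAvailableRevision:", status.get("latestAvailableRevision"))
for node in status.get("nodeStatuses", []):
    print(node["nodeName"],
          "current:", node.get("currentRevision"),
          "target:", node.get("targetRevision"))
```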
kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master1_e1c43302-a1e1-407c-ac03-7bd2aadc9a7c became leader |
openshift-monitoring | replicaset-controller | prometheus-adapter-786496f679 | SuccessfulDelete | Deleted pod: prometheus-adapter-786496f679-ffgkb |
openshift-monitoring | deployment-controller | prometheus-adapter | ScalingReplicaSet | Scaled down replica set prometheus-adapter-786496f679 to 0 from 1 |
openshift-console | replicaset-controller | console-5c44fb5754 | SuccessfulDelete | Deleted pod: console-5c44fb5754-9zfch |
openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-5c44fb5754 to 0 from 1 |
default | node-controller | master1 | RegisteredNode | Node master1 event: Registered Node master1 in Controller |
openshift-monitoring | multus | alertmanager-main-0 | AddedInterface | Add eth0 [10.128.0.92/23] from ovn-kubernetes |
openshift-monitoring | statefulset-controller | prometheus-k8s | SuccessfulCreate | create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful |
openshift-monitoring | replicaset-controller | prometheus-adapter-5dbc6bf64 | SuccessfulCreate | Created pod: prometheus-adapter-5dbc6bf64-j8nnk |
openshift-monitoring | replicaset-controller | thanos-querier-5b8dcdd9b4 | SuccessfulCreate | Created pod: thanos-querier-5b8dcdd9b4-x9dtp |
openshift-monitoring | deployment-controller | thanos-querier | ScalingReplicaSet | Scaled up replica set thanos-querier-5b8dcdd9b4 to 1 |
openshift-monitoring | statefulset-controller | alertmanager-main | SuccessfulCreate | create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful |
openshift-monitoring | deployment-controller | prometheus-adapter | ScalingReplicaSet | Scaled up replica set prometheus-adapter-5dbc6bf64 to 1 |
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1705c63614eeb3feebc11b29e6a977c28bac2401092efae1d42b655259e2629" already present on machine |
openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:782acf9917df2dff59e1318fc08487830240019e5cc241e02e39a06651900bc2" |
openshift-monitoring | multus | prometheus-k8s-0 | AddedInterface | Add eth0 [10.128.0.93/23] from ovn-kubernetes |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00d1be95201020c5cb1d3fae3435ee9e7dc22d8360481ec8609fa368c6ad306e" |
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container init-config-reloader |
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container init-config-reloader |
openshift-monitoring | multus | prometheus-adapter-5dbc6bf64-j8nnk | AddedInterface | Add eth0 [10.128.0.95/23] from ovn-kubernetes |
openshift-monitoring | kubelet | prometheus-adapter-5dbc6bf64-j8nnk | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:56e8f74cab8fdae7f7bbf1c9a307a5fb98eac750a306ec8073478f0899259609" already present on machine |
openshift-monitoring | kubelet | prometheus-adapter-5dbc6bf64-j8nnk | Created | Created container prometheus-adapter |
openshift-monitoring | kubelet | prometheus-adapter-5dbc6bf64-j8nnk | Started | Started container prometheus-adapter |
openshift-monitoring | multus | thanos-querier-5b8dcdd9b4-x9dtp | AddedInterface | Add eth0 [10.128.0.94/23] from ovn-kubernetes |
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:917b84445c725430f74f2041baa697d86d2a0bc971f6b9101591524daf8053f6" |
(x2) | openshift-monitoring | kubelet | prometheus-adapter-786496f679-ffgkb | Killing | Stopping container prometheus-adapter |
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged |
Status for clusteroperator/console changed: Progressing changed from True to False ("All is well"),Available message changed from "DeploymentAvailable: 0 replicas available for console deployment\nRouteHealthAvailable: route not yet available, https://console-openshift-console.apps.test-cluster.redhat.com returns '503 Service Unavailable'" to "RouteHealthAvailable: route not yet available, https://console-openshift-console.apps.test-cluster.redhat.com returns '503 Service Unavailable'" | |
openshift-multus | daemonset-controller | cni-sysctl-allowlist-ds | SuccessfulCreate | Created pod: cni-sysctl-allowlist-ds-9m8mw |
openshift-multus | multus | cni-sysctl-allowlist-ds-9m8mw | AddedInterface | Add eth0 [10.128.0.96/23] from ovn-kubernetes |
openshift-multus | kubelet | cni-sysctl-allowlist-ds-9m8mw | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae783ee6a05beafca04f0766933ee1573b70231a6cd8c449a2177afdaf4802a0" already present on machine |
openshift-multus | kubelet | cni-sysctl-allowlist-ds-9m8mw | Created | Created container kube-multus-additional-cni-plugins |
openshift-multus | kubelet | cni-sysctl-allowlist-ds-9m8mw | Started | Started container kube-multus-additional-cni-plugins |
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1705c63614eeb3feebc11b29e6a977c28bac2401092efae1d42b655259e2629" already present on machine |
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:782acf9917df2dff59e1318fc08487830240019e5cc241e02e39a06651900bc2" in 5.337774964s |
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3" already present on machine |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3" already present on machine |
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Created | Created container kube-rbac-proxy |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Started | Started container kube-rbac-proxy |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:756f3f02d7592b100d5fcf2a8351a570102e79e02425d9b5d3d09a63ee21839d" |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Started | Started container thanos-query |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container config-reloader |
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container config-reloader |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Created | Created container oauth-proxy |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Created | Created container thanos-query |
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy |
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container alertmanager-proxy |
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager-proxy |
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container kube-rbac-proxy-metric |
openshift-monitoring | kubelet | alertmanager-main-0 | Pulling | Pulling image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:756f3f02d7592b100d5fcf2a8351a570102e79e02425d9b5d3d09a63ee21839d" |
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container kube-rbac-proxy-metric |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00d1be95201020c5cb1d3fae3435ee9e7dc22d8360481ec8609fa368c6ad306e" in 6.034712408s |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Started | Started container oauth-proxy |
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container kube-rbac-proxy |
openshift-multus | kubelet | cni-sysctl-allowlist-ds-9m8mw | Killing | Stopping container kube-multus-additional-cni-plugins |
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus |
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:917b84445c725430f74f2041baa697d86d2a0bc971f6b9101591524daf8053f6" in 7.051542193s |
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d1705c63614eeb3feebc11b29e6a977c28bac2401092efae1d42b655259e2629" already present on machine |
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container prometheus |
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container kube-rbac-proxy |
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container config-reloader |
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy |
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container kube-rbac-proxy-thanos |
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container kube-rbac-proxy-thanos |
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f968922564c3eea1c69d6bbe529d8970784d6cae8935afaf674d9fa7c0f72ea3" already present on machine |
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container prometheus-proxy |
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container prometheus-proxy |
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container config-reloader |
openshift-monitoring | kubelet | prometheus-k8s-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00d1be95201020c5cb1d3fae3435ee9e7dc22d8360481ec8609fa368c6ad306e" already present on machine |
openshift-monitoring | kubelet | prometheus-k8s-0 | Started | Started container thanos-sidecar |
openshift-monitoring | kubelet | prometheus-k8s-0 | Created | Created container thanos-sidecar |
openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged |
Status for clusteroperator/console changed: Degraded message changed from "RouteHealthDegraded: route not yet available, https://console-openshift-console.apps.test-cluster.redhat.com returns '503 Service Unavailable'" to "All is well",Available changed from False to True ("All is well") | |
(x2) | openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container alertmanager |
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:756f3f02d7592b100d5fcf2a8351a570102e79e02425d9b5d3d09a63ee21839d" in 6.840448631s |
openshift-monitoring | kubelet | alertmanager-main-0 | Created | Created container prom-label-proxy |
openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container prom-label-proxy |
openshift-monitoring | kubelet | alertmanager-main-0 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:782acf9917df2dff59e1318fc08487830240019e5cc241e02e39a06651900bc2" already present on machine |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Pulled | Successfully pulled image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:756f3f02d7592b100d5fcf2a8351a570102e79e02425d9b5d3d09a63ee21839d" in 6.919629311s |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Created | Created container prom-label-proxy |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Started | Started container prom-label-proxy |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c28f27a3a10df13e5e8c074e8734683a6603ebaccd9d67e2095070fb6859b1d6" already present on machine |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Created | Created container kube-rbac-proxy-rules |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Started | Started container kube-rbac-proxy-rules |
(x2) | openshift-monitoring | kubelet | alertmanager-main-0 | Started | Started container alertmanager |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Created | Created container kube-rbac-proxy-metrics |
openshift-monitoring | kubelet | thanos-querier-5b8dcdd9b4-x9dtp | Started | Started container kube-rbac-proxy-metrics |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/revision-status-6 -n openshift-kube-controller-manager because it was missing |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-manager-pod-6 -n openshift-kube-controller-manager because it was missing |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-satokensignercontroller | kube-controller-manager-operator | SecretUpdated | Updated Secret/service-account-private-key -n openshift-kube-controller-manager because it changed |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/cluster-policy-controller-config-6 -n openshift-kube-controller-manager because it was missing |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/config-6 -n openshift-kube-controller-manager because it was missing |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/controller-manager-kubeconfig-6 -n openshift-kube-controller-manager because it was missing |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/serviceaccount-ca-6 -n openshift-kube-controller-manager because it was missing |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/kube-controller-cert-syncer-kubeconfig-6 -n openshift-kube-controller-manager because it was missing |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/recycler-config-6 -n openshift-kube-controller-manager because it was missing |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | ConfigMapCreated | Created ConfigMap/service-ca-6 -n openshift-kube-controller-manager because it was missing |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/service-account-private-key-6 -n openshift-kube-controller-manager because it was missing |
(x2) | openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionTriggered | new revision 6 triggered by "secret/service-account-private-key has changed" |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | SecretCreated | Created Secret/localhost-recovery-client-token-6 -n openshift-kube-controller-manager because it was missing |
openshift-kube-controller-manager-operator | kube-controller-manager-operator-revisioncontroller | kube-controller-manager-operator | RevisionCreate | Revision 5 created because secret/service-account-private-key has changed |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused\nRevisionControllerDegraded: conflicting latestAvailableRevision 6" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused\nRevisionControllerDegraded: conflicting latestAvailableRevision 6" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused" | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-revisioncontroller |
kube-controller-manager-operator |
SecretCreated |
Created Secret/serving-cert-6 -n openshift-kube-controller-manager because it was missing | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-installer-controller |
kube-controller-manager-operator |
NodeTargetRevisionChanged |
Updating node "master1" from revision 5 to 6 because node master1 with revision 5 is the oldest | |
openshift-kube-controller-manager-operator |
kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager |
kube-controller-manager-operator |
OperatorStatusChanged |
Status for clusteroperator/kube-controller-manager changed: Progressing changed from False to True ("NodeInstallerProgressing: 1 nodes are at revision 5; 0 nodes have achieved new revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 5" to "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 5; 0 nodes have achieved new revision 6" | |
| openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | ConfigMapUpdated | Updated ConfigMap/console-config -n openshift-console: cause by changes in data.console-config.yaml |
| openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled up replica set console-7db86c8ffd to 1 |
| openshift-kube-controller-manager | kubelet | installer-6-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" already present on machine |
| openshift-console | replicaset-controller | console-7db86c8ffd | SuccessfulCreate | Created pod: console-7db86c8ffd-zswlt |
| openshift-kube-controller-manager | multus | installer-6-master1 | AddedInterface | Add eth0 [10.128.0.98/23] from ovn-kubernetes |
| openshift-console | multus | console-7db86c8ffd-zswlt | AddedInterface | Add eth0 [10.128.0.97/23] from ovn-kubernetes |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | PodCreated | Created Pod/installer-6-master1 -n openshift-kube-controller-manager because it was missing |
| openshift-console | kubelet | console-7db86c8ffd-zswlt | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81ecc8fb6073babcfb5c08b43206fbbe49e5c0c0694dc3fb6433aebfa9e0bd0f" already present on machine |
| openshift-console | kubelet | console-7db86c8ffd-zswlt | Started | Started container console |
(x2) | openshift-console-operator | console-operator-console-operator-consoleoperator | console-operator | DeploymentUpdated | Updated Deployment.apps/console -n openshift-console because it changed |
| openshift-console | kubelet | console-7db86c8ffd-zswlt | Created | Created container console |
| openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from False to True ("SyncLoopRefreshProgressing: Changes made during sync updates, additional sync expected.") |
| openshift-kube-controller-manager | kubelet | installer-6-master1 | Started | Started container installer |
| openshift-kube-controller-manager | kubelet | installer-6-master1 | Created | Created container installer |
| openshift-console | deployment-controller | console | ScalingReplicaSet | Scaled down replica set console-87d9d6878 to 0 from 1 |
| openshift-console | replicaset-controller | console-87d9d6878 | SuccessfulDelete | Deleted pod: console-87d9d6878-bjhl4 |
| openshift-console-operator | console-operator-status-controller-statussyncer_console | console-operator | OperatorStatusChanged | Status for clusteroperator/console changed: Progressing changed from True to False ("All is well") |
| openshift-console | kubelet | console-87d9d6878-bjhl4 | ProbeError | Readiness probe error: Get "https://10.128.0.85:8443/health": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) body: |
| openshift-kube-controller-manager | static-pod-installer | installer-6-master1 | StaticPodInstallerCompleted | Successfully installed revision 6 |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Killing | Stopping container cluster-policy-controller |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8d009a8c5ea6b7739d18d167647f2bd1733af8560b6d7b013a6d0c35e266323" already present on machine |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Started | Started container kube-controller-manager-cert-syncer |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:09c8eb0283a9eda5b282f04357875966a549651e120e527904a917ec862eb642" already present on machine |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Created | Created container cluster-policy-controller |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Started | Started container cluster-policy-controller |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused" to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused\nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " |
| openshift-kube-controller-manager | cert-syncer-certsynccontroller | kube-controller-manager-master1 | FastControllerResync | Controller "CertSyncController" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Created | Created container kube-controller-manager |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Started | Started container kube-controller-manager |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused\nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"cluster-policy-controller\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager-cert-syncer\" is waiting: ContainerCreating: \nStaticPodsDegraded: pod/kube-controller-manager-master1 container \"kube-controller-manager-recovery-controller\" is waiting: ContainerCreating: " to "GarbageCollectorDegraded: error fetching rules: Get \"https://thanos-querier.openshift-monitoring.svc:9091/api/v1/rules\": dial tcp 172.30.194.80:9091: connect: connection refused" |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" already present on machine |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" already present on machine |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Created | Created container kube-controller-manager-cert-syncer |
| openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master1_53de91c7-ee6c-4b67-b793-e5adc27e8977 became leader |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Created | Created container kube-controller-manager-recovery-controller |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Started | Started container kube-controller-manager-recovery-controller |
| openshift-kube-controller-manager | cert-recovery-controller | cert-recovery-controller-lock | LeaderElection | master1_53de91c7-ee6c-4b67-b793-e5adc27e8977 became leader |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Degraded changed from True to False ("NodeControllerDegraded: All master nodes are ready") |
(x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | ProbeError | Startup probe error: Get "https://192.168.126.10:10257/healthz": dial tcp 192.168.126.10:10257: connect: connection refused body: |
(x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Unhealthy | Startup probe failed: Get "https://192.168.126.10:10257/healthz": dial tcp 192.168.126.10:10257: connect: connection refused |
(x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | ProbeError | Startup probe error: Get "https://192.168.126.10:10357/healthz": dial tcp 192.168.126.10:10357: connect: connection refused body: |
(x3) | openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Unhealthy | Startup probe failed: Get "https://192.168.126.10:10357/healthz": dial tcp 192.168.126.10:10357: connect: connection refused |
| openshift-kube-controller-manager | kubelet | kube-controller-manager-master1 | Killing | Container kube-controller-manager failed startup probe, will be restarted |
| openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master1_82ac0d2e-89da-4fd2-8602-e517ac501c88 became leader |
| openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master1 | FastControllerResync | Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-controller-manager | podsecurity-admission-label-sync-controller-pod-security-admission-label-synchronization-controller-pod-security-admission-label-synchronization-controller | kube-controller-manager-master1 | FastControllerResync | Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master1_82ac0d2e-89da-4fd2-8602-e517ac501c88 became leader |
| openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master1 | ClusterInfrastructureStatus | unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| openshift-marketplace | kubelet | redhat-operators-pb62l | Pulling | Pulling image "registry.redhat.io/redhat/redhat-operator-index:v4.12" |
| openshift-marketplace | multus | redhat-operators-pb62l | AddedInterface | Add eth0 [10.128.0.99/23] from ovn-kubernetes |
| openshift-marketplace | kubelet | redhat-operators-pb62l | Created | Created container registry-server |
| openshift-marketplace | kubelet | redhat-operators-pb62l | Pulled | Successfully pulled image "registry.redhat.io/redhat/redhat-operator-index:v4.12" in 1.019693137s |
| openshift-marketplace | kubelet | redhat-operators-pb62l | Started | Started container registry-server |
| openshift-marketplace | kubelet | redhat-operators-pb62l | Killing | Stopping container registry-server |
| kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master1_739274ed-d9f9-41d0-a540-acd99ad59dc7 became leader |
| default | node-controller | master1 | RegisteredNode | Node master1 event: Registered Node master1 in Controller |
| kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master1_739274ed-d9f9-41d0-a540-acd99ad59dc7 became leader |
| openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master1_3b0790f8-8e66-41c7-8e35-5fe70023e75d became leader |
| openshift-kube-controller-manager | cluster-policy-controller | cluster-policy-controller-lock | LeaderElection | master1_3b0790f8-8e66-41c7-8e35-5fe70023e75d became leader |
| openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master1 | FastControllerResync | Controller "namespace-security-allocation-controller" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-controller-manager | podsecurity-admission-label-sync-controller-pod-security-admission-label-synchronization-controller-pod-security-admission-label-synchronization-controller | kube-controller-manager-master1 | FastControllerResync | Controller "pod-security-admission-label-synchronization-controller" resync interval is set to 0s which might lead to client request throttling |
| openshift-kube-controller-manager | cluster-policy-controller | kube-controller-manager-master1 | ClusterInfrastructureStatus | unable to get cluster infrastructure status, using HA cluster values for leader election: infrastructures.config.openshift.io "cluster" is forbidden: User "system:kube-controller-manager" cannot get resource "infrastructures" in API group "config.openshift.io" at the cluster scope |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-installer-controller | kube-controller-manager-operator | NodeCurrentRevisionChanged | Updated node "master1" from revision 5 to 6 because static pod is ready |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-status-controller-statussyncer_kube-controller-manager | kube-controller-manager-operator | OperatorStatusChanged | Status for clusteroperator/kube-controller-manager changed: Progressing changed from True to False ("NodeInstallerProgressing: 1 nodes are at revision 6"),Available message changed from "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 5; 0 nodes have achieved new revision 6" to "StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 6" |
| openshift-kube-controller-manager-operator | kube-controller-manager-operator-prunecontroller | kube-controller-manager-operator | PodCreated | Created Pod/revision-pruner-6-master1 -n openshift-kube-controller-manager because it was missing |
| openshift-kube-controller-manager | kubelet | revision-pruner-6-master1 | Started | Started container pruner |
| openshift-kube-controller-manager | multus | revision-pruner-6-master1 | AddedInterface | Add eth0 [10.128.0.100/23] from ovn-kubernetes |
| openshift-kube-controller-manager | kubelet | revision-pruner-6-master1 | Pulled | Container image "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:81a043e61c07b8e93c6b082aa920d61ffa69762bcc2ef1018360026d62c11b18" already present on machine |
| openshift-kube-controller-manager | kubelet | revision-pruner-6-master1 | Created | Created container pruner |
| kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master1_b96a16e8-1872-4d4d-8376-197f952bcbfc became leader |
| kube-system | kube-controller-manager | kube-controller-manager | LeaderElection | master1_b96a16e8-1872-4d4d-8376-197f952bcbfc became leader |
| default | node-controller | master1 | RegisteredNode | Node master1 event: Registered Node master1 in Controller |
| openshift-kube-apiserver-operator | kube-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | kube-apiserver-operator | CustomResourceDefinitionCreateFailed | Failed to create CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io: customresourcedefinitions.apiextensions.k8s.io "podnetworkconnectivitychecks.controlplane.operator.openshift.io" already exists |
| openshift-apiserver-operator | openshift-apiserver-operator-connectivity-check-controller-connectivitycheckcontroller | openshift-apiserver-operator | CustomResourceDefinitionCreated | Created CustomResourceDefinition.apiextensions.k8s.io/podnetworkconnectivitychecks.controlplane.operator.openshift.io because it was missing |
| openshift-kube-controller-manager | cluster-policy-controller-namespace-security-allocation-controller | kube-controller-manager-master1 | CreatedSCCRanges | created SCC ranges for openshift-must-gather-6lzq6 namespace |